WorldWideScience

Sample records for warehouse automation improves

  1. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  2. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  3. Opportunities for Energy Efficiency and Automated Demand Response in Industrial Refrigerated Warehouses in California

    Energy Technology Data Exchange (ETDEWEB)

    Lekov, Alex; Thompson, Lisa; McKane, Aimee; Rockoff, Alexandra; Piette, Mary Ann

    2009-05-11

    This report summarizes the Lawrence Berkeley National Laboratory's research to date in characterizing energy efficiency and open automated demand response opportunities for industrial refrigerated warehouses in California. The report describes refrigerated warehouses characteristics, energy use and demand, and control systems. It also discusses energy efficiency and open automated demand response opportunities and provides analysis results from three demand response studies. In addition, several energy efficiency, load management, and demand response case studies are provided for refrigerated warehouses. This study shows that refrigerated warehouses can be excellent candidates for open automated demand response and that facilities which have implemented energy efficiency measures and have centralized control systems are well-suited to shift or shed electrical loads in response to financial incentives, utility bill savings, and/or opportunities to enhance reliability of service. Control technologies installed for energy efficiency and load management purposes can often be adapted for open automated demand response (OpenADR) at little additional cost. These improved controls may prepare facilities to be more receptive to OpenADR due to both increased confidence in the opportunities for controlling energy cost/use and access to the real-time data.

  4. Automated Data Aggregation for Time-Series Analysis: Study Case on Anaesthesia Data Warehouse.

    Science.gov (United States)

    Lamer, Antoine; Jeanne, Mathieu; Ficheur, Grégoire; Marcilly, Romaric

    2016-01-01

    Data stored in operational databases are not directly reusable. Aggregation modules are necessary to facilitate secondary use: they decrease the volume of data while increasing the amount of available information. In this paper, we present four automated aggregation engines integrated into an anaesthesia data warehouse. Four instances of clinical questions illustrate the use of these engines for various improvements in quality of care: duration of procedure, drug administration, and assessment of hypotension and its related treatment.

  5. Warehouse Performance Improvement at Linfox Logistics Indonesia

    OpenAIRE

    Pratama, Riyan Galuh; Simatupang, Togar M

    2013-01-01

    The objective of this research is to provide alternative solutions for Linfox Logistics Indonesia (LLI) in facing warehouse performance issues. The main warehouse performance indicators, called Customer Case Filling on Time (CCFOT) and Case Picking Productivity, failed to achieve the target. Several analyses were carried out regarding the current dispatch process, value stream mapping, and root cause identification. The results show that much waste occurred in the dispatch process. Proposed improvemen...

  6. An efficiency improvement in warehouse operation using simulation analysis

    Science.gov (United States)

    Samattapapong, N.

    2017-11-01

    In general, industry requires an efficient system for warehouse operation. There are many important factors that must be considered when designing an efficient warehouse system. The most important is an effective warehouse operation system that can help transfer raw material, reduce costs and support transportation. Given all these factors, the researchers studied work systems and warehouse distribution. We start by collecting the important data for storage, such as information on products, on size and location, on data collection and on production, and use all this information to build a simulation model in the FlexSim® simulation software. The simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, many scenarios to relieve that bottleneck were generated and tested through simulation analysis. The results showed that the average queuing time was reduced from 89.8% to 48.7% and the ability to transport products increased from 10.2% to 50.9%. Thus, this approach proved effective for increasing efficiency in the warehouse operation.
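    The bottleneck finding above is a standard queueing effect. As a minimal sketch (not the paper's FlexSim model, whose parameters are not given), the toy single-server simulation below shows how queue time collapses when the capacity of a conveyor-like bottleneck is increased; the arrival and service rates are invented for illustration.

    ```python
    import random

    def mean_queue_time(arrival_rate, service_rate, horizon=10_000.0):
        """Toy single-server (conveyor) simulation: items arrive at random,
        wait while the conveyor is busy, then are served. Hypothetical rates."""
        random.seed(42)
        t, free_at, n, waiting = 0.0, 0.0, 0, 0.0
        while t < horizon:
            t += random.expovariate(arrival_rate)    # next item arrives
            start = max(t, free_at)                  # queue if conveyor is busy
            waiting += start - t
            free_at = start + random.expovariate(service_rate)
            n += 1
        return waiting / n

    # Doubling the conveyor's service rate sharply cuts mean queue time.
    print(mean_queue_time(arrival_rate=0.9, service_rate=1.0))
    print(mean_queue_time(arrival_rate=0.9, service_rate=2.0))
    ```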

  7. Automation of pharmaceutical warehouse using groups robots with remote climate control and video surveillance

    OpenAIRE

    Zhuravska, I. M.; Popel, M. I.

    2015-01-01

    In this paper, we present a complex solution for automating a pharmaceutical warehouse, including the implementation of climate control, video surveillance with remote access to video, and robotic selection of medicines with optimized robot motion. We describe all the elements of the local area network (LAN) necessary to solve these problems.

  8. Routing Optimization of Intelligent Vehicle in Automated Warehouse

    Directory of Open Access Journals (Sweden)

    Yan-cong Zhou

    2014-01-01

    Full Text Available Routing optimization is a key technology in intelligent warehouse logistics. In order to obtain an optimal route for a warehouse intelligent vehicle, routing optimization in a complex global dynamic environment is studied. A new evolutionary ant colony algorithm based on RFID and knowledge refinement is proposed. The new algorithm obtains environmental information in a timely manner through RFID technology and updates the environment map at the same time. It adopts elite-ant retention, fallback, and pheromone-limitation adjustment strategies. The current optimal route in the population space is optimized based on experiential knowledge. The experimental results show that the new algorithm has a higher convergence speed and can easily escape U-type or V-type obstacle traps. It can also find the global optimal route, or an approximately optimal one, with higher probability in a complex dynamic environment. The new algorithm is shown to be feasible and effective by simulation results.
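    The abstract names the ingredients of the algorithm without giving its details. As a hedged sketch under assumed parameters, the fragment below implements the pheromone step such a strategy implies: evaporation, an elite-ant deposit on the best-so-far route, and clamping of pheromone values into fixed limits. The graph and all constants are hypothetical.

    ```python
    TAU_MIN, TAU_MAX, RHO, Q = 0.1, 5.0, 0.3, 1.0   # hypothetical constants

    def update_pheromones(tau, best_route, best_len):
        """One elite-ant update with pheromone limits: evaporate every edge,
        deposit on the best-so-far route, then clamp into [TAU_MIN, TAU_MAX]."""
        for edge in tau:
            tau[edge] *= (1.0 - RHO)                            # evaporation
        for edge in zip(best_route, best_route[1:]):
            tau[edge] = tau.get(edge, TAU_MIN) + Q / best_len   # elite deposit
        for edge in tau:
            tau[edge] = min(TAU_MAX, max(TAU_MIN, tau[edge]))   # limitation
        return tau

    # Toy pheromone table over a three-node route.
    tau = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 1.0}
    print(update_pheromones(tau, ["A", "B", "C"], best_len=2.0))
    ```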

  9. Automated Creation of Datamarts from a Clinical Data Warehouse, Driven by an Active Metadata Repository

    Science.gov (United States)

    Rogerson, Charles L.; Kohlmiller, Paul H.; Stutman, Harris

    1998-01-01

    A methodology and toolkit are described which enable the automated metadata-driven creation of datamarts from clinical data warehouses. The software uses schema-to-schema transformation driven by an active metadata repository. Tools for assessing datamart data quality are described, as well as methods for assessing the feasibility of implementing specific datamarts. A methodology for data remediation and the re-engineering of operational data capture is described.

  10. Improving warehouse responsiveness by job priority management : A European distribution centre field study

    NARCIS (Netherlands)

    T.Y. Kim (Thai Young)

    2018-01-01

    Warehouses employ order cut-off times to ensure sufficient time for fulfilment. To satisfy higher consumer expectations, these cut-off times are gradually postponed to improve order responsiveness. Warehouses therefore have to allocate jobs more efficiently to meet compressed response

  11. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  12. Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives.

    Science.gov (United States)

    Chelico, John D; Wilcox, Adam B; Vawdrey, David K; Kuperman, Gilad J

    2016-01-01

    Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, data that are needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork-Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, Control stages of quality improvement, resources are matched to the data needs at each step. We describe the analysis and design that created a robust model for applying clinical data warehousing to quality improvement.

  13. Development of Auto-Stacking Warehouse Truck

    Directory of Open Access Journals (Sweden)

    Kuo-Hsien Hsia

    2018-03-01

    Full Text Available Warehouse automation is a very important issue for the promotion of traditional industries. For the production of larger, stackable products, a fork-lifter operated by a skilled person is usually necessary for stacking and storing the products. The typical autonomous warehouse truck does not have the ability to stack objects. In this paper, we develop a prototype of an auto-stacking warehouse truck that can work without direct operation by a skilled person. With a command issued via an RFID card, the stacker truck can take the packaged product to the warehouse along a pre-planned route and store it in stacks in the designated storage area, or deliver the product from the storage area to the shipping area or into a container. It can significantly reduce the need for skilled forklift technicians and improve the safety of the warehousing area.

  14. The Pediatrix BabySteps® Data Warehouse--a unique national resource for improving outcomes for neonates.

    Science.gov (United States)

    Spitzer, Alan R; Ellsbury, Dan; Clark, Reese H

    2015-01-01

    The Pediatrix Medical Group Clinical Data Warehouse represents a unique electronic data capture system for the assessment of outcomes, the management of continuous quality improvement (CQI) initiatives, and the resolution of important research questions in the neonatal intensive care unit (NICU). This system is described in detail, and the manner in which the Data Warehouse has been used to measure and improve patient outcomes through CQI projects and research is outlined. The Pediatrix Data Warehouse now contains more than 1 million patients, serving as an exceptional tool for evaluating NICU care. Examples are provided of how significant outcome improvement has been achieved, and several papers are cited that have used the "Big Data" contained in the Data Warehouse for novel observations that could not be made otherwise.

  15. Warehousing performance improvement using Frazelle Model and peer group benchmarking: A case study in retail warehouse in Yogyakarta and Central Java

    Directory of Open Access Journals (Sweden)

    Kusrini Elisa

    2018-01-01

    Full Text Available Warehouse performance management has an important role in improving logistics business activities. Good warehouse management can increase profit, delivery timeliness, quality, and customer service. This study assesses the performance of retail warehouses in supermarkets located in Central Java and Yogyakarta. Performance improvement is proposed based on warehouse measurement using the Frazelle model (2002), which measures five indicators, namely Financial, Productivity, Utility, Quality and Cycle time, across five business processes in warehousing, i.e. receiving, put-away, storage, order picking and shipping. In order to obtain more precise performance measures, the indicators are weighted using the Analytic Hierarchy Process (AHP) method. Then warehouse performance is measured and the final score is determined using the SNORM method. From this study, the final score of each warehouse is obtained, along with opportunities to improve warehouse performance using peer group benchmarking.
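    For readers unfamiliar with the scoring step, the sketch below combines SNORM-normalized indicators (each mapped onto a 0-100 scale between its worst and best values) with AHP weights. All weights, ranges, and actual values are invented placeholders, not the study's data.

    ```python
    def snorm(actual, s_min, s_max, larger_is_better=True):
        """Snorm (De Boer) normalization: map an indicator onto 0-100."""
        if larger_is_better:
            return 100.0 * (actual - s_min) / (s_max - s_min)
        return 100.0 * (s_max - actual) / (s_max - s_min)

    # name: (actual, min, max, larger_is_better, AHP weight) -- all hypothetical
    indicators = {
        "financial":    (0.82, 0.5, 1.0, True,  0.30),
        "productivity": (120,  80,  200, True,  0.25),
        "utility":      (0.70, 0.4, 0.9, True,  0.15),
        "quality":      (0.96, 0.9, 1.0, True,  0.20),
        "cycle_time":   (36,   12,  72,  False, 0.10),  # hours; lower is better
    }

    final_score = sum(w * snorm(a, lo, hi, big)
                      for a, lo, hi, big, w in indicators.values())
    print(f"warehouse final score: {final_score:.1f} / 100")
    ```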

  16. Solutions for improving data extraction from virtual data warehouses

    Directory of Open Access Journals (Sweden)

    Adela BARA

    2010-09-01

    Full Text Available The data warehousing project's team is always confronted with low performance in data extraction. In a Business Intelligence environment this problem can be critical, because data that arrive too late are no longer useful for decision making, so the project can be compromised. In this case there are several techniques that can be applied to reduce query execution time and to improve the performance of BI analyses and reports. Some of the techniques that can be applied to reduce the cost of execution and improve query performance in BI systems are presented in this paper.

  17. Automated population of an i2b2 clinical data warehouse from an openEHR-based data repository.

    Science.gov (United States)

    Haarbrandt, Birger; Tute, Erik; Marschollek, Michael

    2016-10-01

    Detailed Clinical Model (DCM) approaches have recently seen wider adoption. More specifically, openEHR-based application systems are now used in production in several countries, serving diverse fields of application such as health information exchange, clinical registries and electronic medical record systems. However, approaches to efficiently provide openEHR data to researchers for secondary use have not yet been investigated or established. We developed an approach to automatically load openEHR data instances into the open source clinical data warehouse i2b2. We evaluated query capabilities and the performance of this approach in the context of the Hanover Medical School Translational Research Framework (HaMSTR), an openEHR-based data repository. Automated creation of i2b2 ontologies from archetypes and templates, and the integration of openEHR data instances from 903 patients of a paediatric intensive care unit, has been achieved. In total, it took an average of ∼2527 s to create 2,311,624 facts from 141,917 XML documents. Using the imported data, we conducted sample queries to compare the performance with two openEHR systems and to investigate whether this representation of data is feasible to support cohort identification and record-level data extraction. We found the automated population of an i2b2 clinical data warehouse to be a feasible approach to make openEHR data instances available for secondary use. Such an approach can facilitate timely provision of clinical data to researchers. It complements analytics based on the Archetype Query Language by allowing querying on both legacy clinical data sources and openEHR data instances at the same time, and by providing an easy-to-use query interface. However, due to different levels of expressiveness in the data models, not all semantics could be preserved during the ETL process. Copyright © 2016 Elsevier Inc. All rights reserved.
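    The paper's own transformation is archetype-driven and far richer, but the core ETL idea (flattening openEHR XML instances into i2b2-style observation facts) can be sketched as follows. Element names, the concept-path mapping, and the sample document are hypothetical.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical mapping from an openEHR node to an i2b2 concept path.
    CONCEPT_MAP = {"heart_rate": "\\Vitals\\HeartRate\\"}

    def to_facts(xml_text, patient_num, encounter_num):
        """Walk an XML instance and emit rows shaped like i2b2's
        observation_fact table (sketch only)."""
        facts = []
        for elem in ET.fromstring(xml_text).iter():
            concept = CONCEPT_MAP.get(elem.tag)
            if concept is not None:
                facts.append({
                    "patient_num": patient_num,
                    "encounter_num": encounter_num,
                    "concept_cd": concept,
                    "nval_num": float(elem.attrib["magnitude"]),
                    "start_date": elem.attrib.get("time"),
                })
        return facts

    doc = '<composition><heart_rate magnitude="92" time="2016-03-01T10:00"/></composition>'
    print(to_facts(doc, patient_num=1, encounter_num=10))
    ```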

  18. Improving Warehouse Inventory Management Through Rfid, Barcoding and Robotics Technologies

    Science.gov (United States)

    2014-12-01

    analysis and feasibility study on the use of RFID technology in the DLA's distribution centers to improve inventory management and order processing. In ... efficiency in receipt, order processing, and distribution for the DLA. ... SUPPLY CHAIN MANAGEMENT IN THE DEPARTMENT OF DEFENSE ... of equipment and supplies. It experiences multiple challenges in inventory management and order processing, resulting in high associated costs. The

  19. Fire detection in warehouse facilities

    CERN Document Server

    Dinaburg, Joshua

    2013-01-01

    Automatic sprinkler systems are the primary fire protection system in warehouse and storage facilities. The effectiveness of this strategy has come into question due to the challenges presented by modern warehouse facilities, including increased storage heights and areas, automated storage and retrieval systems (ASRS), limitations on water supplies, and changes in firefighting strategies. The application of fire detection devices used to provide early warning and notification of incipient warehouse fire events is being considered as a component of modern warehouse fire protection. Fire Detection i

  20. Business intelligence and data warehouse programs in higher education institutions: current status and recommendations for improvement

    Directory of Open Access Journals (Sweden)

    Olga Marinova

    2016-11-01

    Full Text Available The purpose of this article is to explore the current situation and the main challenges in existing Business Intelligence (BI) and Data Warehouse (DW) curricula. On the basis of this research, certain recommendations for their improvement are made. At the same time, the paper gives concrete guidelines for the development of a clear and comprehensive graduate profile with knowledge, skills and social competence in the field of BI and DW. This is particularly beneficial for universities and other higher education institutions that seek to offer courses with high-quality content, adequate to the latest educational tendencies in the area concerned. The paper was written within the Erasmus+ KA2 project “Developing the innovative methodology of teaching Business Informatics” (DIMBI, 2015-1-PL01-KA203-0016636).

  1. Warehouse Logistics

    OpenAIRE

    Panibratetc, Anastasiia

    2015-01-01

    This research is a review of warehouse logistics using the example of Kannustalo Oy, located in Kannus, in the Western region of Finland. Kannustalo is an international company designing, manufacturing and assembling block and turn-key houses. The research subject is the logistics process in the warehouse system of an industrial company. In my work I discussed the theoretical aspects of logistics, logistic functions and processes. Later I considered the warehouse as a part of the logistics system and provided inf...

  2. Automated realtime data import for the i2b2 clinical data warehouse: introducing the HL7 ETL cell.

    Science.gov (United States)

    Majeed, Raphael W; Röhrig, Rainer

    2012-01-01

    Clinical data warehouses are used to consolidate all available clinical data from one or multiple organizations. They represent an important source for clinical research, quality management and controlling. Since its introduction, the data warehouse i2b2 has gathered a large user base in the research community. Yet, little work has been done on the process of importing clinical data into data warehouses using existing standards. In this article, we present a novel approach that utilizes the clinical integration server, commonly available in most hospitals, as the data source. As information is transmitted through the integration server, the standardized HL7 message is immediately parsed and inserted into the data warehouse. Evaluation of import speeds suggests the feasibility of the provided solution for real-time processing of HL7 messages. By using the presented approach of standardized data import, i2b2 can be used as a plug-and-play data warehouse, without the hurdle of customized imports for every clinical information system or electronic medical record. The provided solution is available for download at http://sourceforge.net/projects/histream/.
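    The exact parser in HiStream is not reproduced in the abstract, so the following is only a sketch of the central idea: HL7 v2 messages are pipe-delimited, so each segment passing through the integration server can be split and OBX (observation result) segments turned into warehouse-ready facts on the fly. The sample message and the fact shape are illustrative.

    ```python
    SAMPLE = "\r".join([
        "MSH|^~\\&|LAB|HOSP|I2B2|HOSP|201201011200||ORU^R01|42|P|2.5",
        "PID|1||12345",
        "OBX|1|NM|2345-7^GLUCOSE^LN||98|mg/dL|||||F",
    ])

    def parse_obx_facts(message):
        """Split an HL7 v2 message into segments and map OBX results to facts."""
        patient_id, facts = None, []
        for segment in message.split("\r"):
            fields = segment.split("|")
            if fields[0] == "PID":
                patient_id = fields[3]              # PID-3: patient identifier
            elif fields[0] == "OBX":
                code = fields[3].split("^")[0]      # OBX-3: observation code
                facts.append({"patient": patient_id,
                              "concept_cd": f"LOINC:{code}",
                              "value": fields[5],   # OBX-5: observation value
                              "units": fields[6]})  # OBX-6: units
        return facts

    print(parse_obx_facts(SAMPLE))
    ```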

  3. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, proceeding from different sources and having a variety of forms, both structured and unstructured, are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse one.

  4. Use of a data warehouse at an academic medical center for clinical pathology quality improvement, education, and research.

    Science.gov (United States)

    Krasowski, Matthew D; Schriever, Andy; Mathur, Gagan; Blau, John L; Stauffer, Stephanie L; Ford, Bradley A

    2015-01-01

    Pathology data contained within the electronic health record (EHR), and laboratory information system (LIS) of hospitals represents a potentially powerful resource to improve clinical care. However, existing reporting tools within commercial EHR and LIS software may not be able to efficiently and rapidly mine data for quality improvement and research applications. We present experience using a data warehouse produced collaboratively between an academic medical center and a private company. The data warehouse contains data from the EHR, LIS, admission/discharge/transfer system, and billing records and can be accessed using a self-service data access tool known as Starmaker. The Starmaker software allows users to use complex Boolean logic, include and exclude rules, unit conversion and reference scaling, and value aggregation using a straightforward visual interface. More complex queries can be achieved by users with experience with Structured Query Language. Queries can use biomedical ontologies such as Logical Observation Identifiers Names and Codes and Systematized Nomenclature of Medicine. We present examples of successful searches using Starmaker, falling mostly in the realm of microbiology and clinical chemistry/toxicology. The searches were ones that were either very difficult or basically infeasible using reporting tools within the EHR and LIS used in the medical center. One of the main strengths of Starmaker searches is rapid results, with typical searches covering 5 years taking only 1-2 min. A "Run Count" feature quickly outputs the number of cases meeting criteria, allowing for refinement of searches before downloading patient-identifiable data. The Starmaker tool is available to pathology residents and fellows, with some using this tool for quality improvement and scholarly projects. A data warehouse has significant potential for improving utilization of clinical pathology testing. Software that can access data warehouse using a straightforward visual

  5. Use of a data warehouse at an academic medical center for clinical pathology quality improvement, education, and research

    Directory of Open Access Journals (Sweden)

    Matthew D Krasowski

    2015-01-01

    Full Text Available Background: Pathology data contained within the electronic health record (EHR) and laboratory information system (LIS) of hospitals represents a potentially powerful resource to improve clinical care. However, existing reporting tools within commercial EHR and LIS software may not be able to efficiently and rapidly mine data for quality improvement and research applications. Materials and Methods: We present experience using a data warehouse produced collaboratively between an academic medical center and a private company. The data warehouse contains data from the EHR, LIS, admission/discharge/transfer system, and billing records and can be accessed using a self-service data access tool known as Starmaker. The Starmaker software allows users to use complex Boolean logic, include and exclude rules, unit conversion and reference scaling, and value aggregation using a straightforward visual interface. More complex queries can be achieved by users with experience with Structured Query Language. Queries can use biomedical ontologies such as Logical Observation Identifiers Names and Codes and Systematized Nomenclature of Medicine. Result: We present examples of successful searches using Starmaker, falling mostly in the realm of microbiology and clinical chemistry/toxicology. The searches were ones that were either very difficult or basically infeasible using reporting tools within the EHR and LIS used in the medical center. One of the main strengths of Starmaker searches is rapid results, with typical searches covering 5 years taking only 1-2 min. A "Run Count" feature quickly outputs the number of cases meeting criteria, allowing for refinement of searches before downloading patient-identifiable data. The Starmaker tool is available to pathology residents and fellows, with some using this tool for quality improvement and scholarly projects. Conclusion: A data warehouse has significant potential for improving utilization of clinical pathology testing
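    Starmaker itself is proprietary and its schema is not published, so the toy SQLite session below only illustrates the kind of Boolean, code-filtered, date-windowed query the two records above describe. Table and column names, codes, and values are invented.

    ```python
    import sqlite3
    from datetime import date, timedelta

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE lab_fact (
                     patient_id INTEGER, loinc_code TEXT,
                     value_num REAL, result_date TEXT)""")
    recent = (date.today() - timedelta(days=365)).isoformat()
    con.executemany("INSERT INTO lab_fact VALUES (?,?,?,?)", [
        (1, "2345-7", 210.0, recent),         # glucose, elevated
        (1, "6298-4", 5.1,   recent),         # potassium, normal
        (2, "2345-7", 90.0,  "2008-02-10"),   # old result, outside window
    ])

    # "Patients with glucose > 200 in the last 5 years"; include/exclude
    # rules and unit scaling would be layered on top in a real tool.
    rows = con.execute("""
        SELECT DISTINCT patient_id FROM lab_fact
        WHERE loinc_code = '2345-7' AND value_num > 200
          AND result_date >= date('now', '-5 years')
    """).fetchall()
    print(rows)   # -> [(1,)]
    ```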

  6. Medical Big Data Warehouse: Architecture and System Design, a Case Study: Improving Healthcare Resources Distribution.

    Science.gov (United States)

    Sebaa, Abderrazak; Chikh, Fatima; Nouicer, Amina; Tari, AbdelKamel

    2018-02-19

    The huge increase in medical devices and clinical applications generating enormous data has raised big issues in managing, processing, and mining this massive amount of data. Indeed, traditional data warehousing frameworks cannot be effective when managing the volume, variety, and velocity of current medical applications. As a result, several data warehouses face many issues with medical data, and many challenges need to be addressed. New solutions have emerged, and Hadoop is one of the best examples; it can be used to process these streams of medical data. However, without an efficient system design and architecture, the performance gains will not be significant or valuable for medical managers. In this paper, we provide a short review of the literature on research issues of traditional data warehouses and present some important Hadoop-based data warehouses. In addition, a Hadoop-based architecture and a conceptual data model for designing a medical Big Data warehouse are given. In our case study, we provide implementation details of a Big Data warehouse based on the proposed architecture and data model on the Apache Hadoop platform to ensure an optimal allocation of health resources.

  7. Demand Response Opportunities in Industrial Refrigerated Warehouses in California

    Energy Technology Data Exchange (ETDEWEB)

    Goli, Sasank; McKane, Aimee; Olsen, Daniel

    2011-06-14

    Industrial refrigerated warehouses that implemented energy efficiency measures and have centralized control systems can be excellent candidates for Automated Demand Response (Auto-DR) due to equipment synergies, and the receptivity of facility managers to strategies that control energy costs without disrupting facility operations. Auto-DR utilizes the OpenADR protocol for continuous and open communication signals over the internet, allowing facilities to automate their Demand Response (DR). Refrigerated warehouses were selected for research because: they have significant power demand, especially during utility peak periods; most processes are not sensitive to short-term (2-4 hours) power reductions and DR activities are often not disruptive to facility operations; the number of processes is limited and well understood; and past experience with some DR strategies successful in commercial buildings may apply to refrigerated warehouses. This paper presents an overview of the potential for load sheds and shifts from baseline electricity use in response to DR events, along with physical configurations and operating characteristics of refrigerated warehouses. Analysis of data from two case studies and nine facilities in Pacific Gas and Electric territory confirmed the DR abilities inherent in refrigerated warehouses but showed significant variation across facilities. Further, while the load from California's refrigerated warehouses in 2008 was 360 MW, with an estimated DR potential of 45-90 MW, the amount actually achieved was much less due to low participation. Efforts to overcome barriers to increased participation may include improved marketing and recruitment of potential DR sites, better alignment with and emphasis on the financial benefits of participation, and use of Auto-DR to increase consistency of participation.

  8. Evaluating a healthcare data warehouse for cancer diseases

    OpenAIRE

    Sheta, Dr. Osama E.; Eldeen, Ahmed Nour

    2013-01-01

    This paper presents the evaluation of the architecture of a healthcare data warehouse specific to cancer diseases. This data warehouse contains relevant cancer medical information and patient data. The data warehouse provides the source for all current and historical health data to help executive managers and doctors improve the decision-making process for cancer patients. An evaluation model based on Bill Inmon's definition of a data warehouse is proposed to evaluate the cancer data warehouse.

  9. Warehouse Sanitation Workshop Handbook.

    Science.gov (United States)

    Food and Drug Administration (DHHS/PHS), Washington, DC.

    This workshop handbook contains information and reference materials on proper food warehouse sanitation. The materials have been used at Food and Drug Administration (FDA) food warehouse sanitation workshops, and are selected by the FDA for use by food warehouse operators and for training warehouse sanitation employees. The handbook is divided…

  10. Conceptual Data Warehouse Structures

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    1998-01-01

    changing information needs. We show how the event-entity-relationship model (EVER) can be used for schema design and query formulation in data warehouses. Our work is based on a layered data warehouse architecture in which a global data warehouse is used for flexible long-term organization and storage of all warehouse data, whereas local data warehouses are used for efficient query formulation and answering. In order to support flexible modeling of global warehouses we use a flexible version of EVER for global schema design. In order to support efficient query formulation in local data warehouses we

  11. Improving treatment plan evaluation with automation

    Science.gov (United States)

    Covington, Elizabeth L.; Chen, Xiaoping; Younge, Kelly C.; Lee, Choonik; Matuszak, Martha M.; Kessler, Marc L.; Keranen, Wayne; Acosta, Eduardo; Dougherty, Ashley M.; Filpansick, Stephanie E.

    2016-01-01

    The goal of this work is to evaluate the effectiveness of Plan‐Checker Tool (PCT) which was created to improve first‐time plan quality, reduce patient delays, increase the efficiency of our electronic workflow, and standardize and automate the physics plan review in the treatment planning system (TPS). PCT uses an application programming interface to check and compare data from the TPS and treatment management system (TMS). PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user as part of a plan readiness check for treatment. Prior to and during PCT development, errors identified during the physics review and causes of patient treatment start delays were tracked to prioritize which checks should be automated. Nineteen of 33 checklist items were automated, with data extracted with PCT. There was a 60% reduction in the number of patient delays in the six months after PCT release. PCT was successfully implemented for use on all external beam treatment plans in our clinic. While the number of errors found during the physics check did not decrease, automation of checks increased visibility of errors during the physics check, which led to decreased patient delays. The methods used here can be applied to any TMS and TPS that allows queries of the database. PACS number(s): 87.55.‐x, 87.55.N‐, 87.55.Qr, 87.55.tm, 89.20.Bb PMID:27929478

  12. Energy Finance Data Warehouse Manual

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangkeun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chinthavali, Supriya [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shankar, Mallikarjun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zeng, Claire [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hendrickson, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-30

    The Office of Energy Policy and Systems Analysis's finance team (EPSA-50) requires a suite of automated applications that can extract specific data from a flexible data warehouse (where datasets characterizing energy-related finance, economics and markets are maintained and integrated), perform relevant operations, and creatively visualize them to provide a better understanding of which policy options affect various operators/sectors of the electricity system. In addition, the underlying data warehouse should be structured in the most effective and efficient way so that it can become increasingly valuable over time. This report describes the Energy Finance Data Warehouse (EFDW) framework that has been developed to meet the requirement defined above. We also dive specifically into the Sankey generator use-case scenario to explain the components of the EFDW framework and their roles. An Excel-based data warehouse was used in the creation of the energy finance Sankey diagram and other detailed energy finance visualizations to support energy policy analysis. The framework also captures the methodology, calculations and estimations analysts used, as well as the relevant sources, so newer analysts can build on work done previously.
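    The report's Sankey generator itself is not public; as a rough illustration of the end product, the sketch below draws a flow diagram from rows that could have been pulled from such a warehouse, using the Plotly library. The source/target labels and dollar values are invented.

    ```python
    import plotly.graph_objects as go

    # Hypothetical flow rows: (source, target, value in $B).
    flows = [
        ("Federal R&D", "Solar",   2.1),
        ("Federal R&D", "Nuclear", 1.4),
        ("Tax credits", "Solar",   3.0),
    ]
    labels = sorted({name for src, dst, _ in flows for name in (src, dst)})
    idx = {name: i for i, name in enumerate(labels)}

    fig = go.Figure(go.Sankey(
        node=dict(label=labels),
        link=dict(source=[idx[src] for src, _, _ in flows],
                  target=[idx[dst] for _, dst, _ in flows],
                  value=[val for _, _, val in flows]),
    ))
    fig.write_html("energy_finance_sankey.html")  # open in a browser to view
    ```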

  13. Improving the driver-automation interaction: an approach using automation uncertainty.

    Science.gov (United States)

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  14. Does Automated Feedback Improve Writing Quality?

    Science.gov (United States)

    Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N.

    2014-01-01

    The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…

  15. Envirofacts Data Warehouse

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Envirofacts Data Warehouse contains information from select EPA Environmental program office databases and provides access about environmental activities that...

  16. A Proposed Strategy for the U.S. to Develop and Maintain a Mainstream Capability Suite ("Warehouse") for Automated/Autonomous Rendezvous and Docking in Low Earth Orbit and Beyond

    Science.gov (United States)

    Krishnakumar, Kalmanje S.; Stillwater, Ryan A.; Babula, Maria; Moreau, Michael C.; Riedel, J. Ed; Mrozinski, Richard B.; Bradley, Arthur; Bryan, Thomas C.

    2012-01-01

    The ability of space assets to rendezvous and dock/capture/berth is a fundamental enabler for numerous classes of NASA's missions, and is therefore an essential capability for the future of NASA. Mission classes include: ISS crew rotation, crewed exploration beyond low-Earth orbit (LEO), on-orbit assembly, ISS cargo supply, crewed satellite servicing, robotic satellite servicing / debris mitigation, robotic sample return, and robotic small body (e.g., near-Earth object, NEO) proximity operations. For a variety of reasons to be described, NASA programs requiring Automated/Autonomous Rendezvous and Docking/Capture/Berthing (AR&D) capabilities are currently spending an order of magnitude more than necessary and taking twice as long as necessary to achieve their AR&D capability, "reinventing the wheel" for each program, and have fallen behind all of our foreign counterparts in AR&D technology (especially autonomy) in the process. To ensure future missions' reliability and crew safety (when applicable), and to achieve the noted cost and schedule savings by eliminating the costs of continually "reinventing the wheel", the NASA AR&D Community of Practice (CoP) recommends NASA develop an AR&D Warehouse, detailed herein, which does not exist today. The term "warehouse" is used herein to refer to a toolbox or capability suite that has pre-integrated selectable supply-chain hardware and reusable software components that are considered ready-to-fly, low-risk, reliable, versatile, scalable, cost-effective, architecture and destination independent, and that can be confidently utilized operationally on human spaceflight and robotic vehicles over a variety of mission classes and design reference missions, especially beyond LEO. The CoP also believes that it is imperative that NASA coordinate and integrate all current and proposed technology development activities into a cohesive cross-Agency strategy to produce and utilize this AR&D warehouse. An initial estimate indicates that if NASA

  17. Envirofacts Data Warehouse

    Science.gov (United States)

    The Envirofacts Data Warehouse contains information from select EPA Environmental program office databases and provides access about environmental activities that may affect air, water, and land anywhere in the United States. The Envirofacts Warehouse supports its own web enabled tools as well as a host of other EPA applications.

  18. Building a Data Warehouse.

    Science.gov (United States)

    Levine, Elliott

    2002-01-01

    Describes how to build a data warehouse, using the Schools Interoperability Framework (www.sifinfo.org), that supports data-driven decision making and complies with the Freedom of Information Act. Provides several suggestions for building and maintaining a data warehouse. (PKP)

  19. Intelligent environmental data warehouse

    International Nuclear Information System (INIS)

    Ekechukwu, B.

    1998-01-01

    Making quick and effective decisions in environmental management depends on multiple, complex parameters; a data warehouse is a powerful tool for the overall management of massive environmental information. Selecting the right data from a warehouse is an important consideration for end-users. This paper proposes an intelligent environmental data warehouse system. It consists of a data warehouse that feeds environmental researchers and managers with the environmental information needed for their research studies and decisions, in the form of geometric and attribute data for the study area, plus metadata for other sources of environmental information. In addition, the proposed intelligent search engine works according to a set of rules, which enables the system to be aware of the environmental data wanted by the end-user. The system development process passes through four stages: data preparation, warehouse development, intelligent engine development and internet platform development. (author)

  20. Does Automation Improve Stock Market Efficiency? Evidence from Ghana

    OpenAIRE

    Mensah, Justice T.; Pomaa-Berko, Maame; Adom, Philip Kofi

    2012-01-01

    As a burgeoning capital market in an emerging economy, automation of the stock market is regarded as a major step towards integrating the financial market as a conduit for economic growth. The automation of the Ghana Stock Exchange (GSE) in 2008 is expected among other things to improve the efficiency of the market. This paper therefore investigates the impact of the automation on the efficiency of the GSE within the framework of the weak-form Efficient Market Hypothesis (EMH) using daily mar...

  1. Does automation improve stock market efficiency in Ghana ...

    African Journals Online (AJOL)

    The automation of the Ghana Stock Exchange (GSE) in 2008, among other reforms, was expected to improve the efficiency of the market. The extent of this truism has, however, not been empirically established for the GSE. In this study, we attempt to assess the impact of the automation on the efficiency of the GSE within the ...

  2. Chronic Condition Data Warehouse

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Chronic Condition Data Warehouse (CCW) provides researchers with Medicare and Medicaid beneficiary, claims, and assessment data linked by beneficiary across...

  3. HRSA Data Warehouse

    Data.gov (United States)

    U.S. Department of Health & Human Services — The HRSA Data Warehouse is the go-to source for data, maps, reports, locators, and dashboards on HRSA's public health programs. This website provides a wide variety...

  4. Public Refrigerated Warehouses

    Data.gov (United States)

    Department of Homeland Security — The International Association of Refrigerated Warehouses (IARW) came into existence in 1891 when a number of conventional warehousemen took on the demands of storing...

  5. Decision method for optimal selection of warehouse material handling strategies by production companies

    Science.gov (United States)

    Dobos, P.; Tamás, P.; Illés, B.

    2016-11-01

    The adequate establishment and operation of warehouse logistics significantly determines a company's competitiveness, because it greatly affects the quality and the selling price of the goods that production companies produce. In order to implement and manage an adequate warehouse system, an adequate warehouse position, stock management model, warehouse technology, a motivated work force committed to process improvement, and a material handling strategy are necessary. In practice, companies have paid little attention to selecting the warehouse strategy properly, even though it has a major influence on production in the case of a material warehouse, and on smooth customer service in the case of a finished-goods warehouse, since a poor choice can entail huge material handling losses. Due to dynamically changing production structures, frequent reorganization of warehouse activities is needed, to which the majority of companies essentially do not react. This work presents a simulation test framework for selecting an eligible warehouse material handling strategy, as well as the decision method for the selection.

  6. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    AFRL-AFOSR-JP-TR-2017-0018. Improvement of Binary Analysis Components in Automated Malware Analysis Framework. Keiji Takeda, Keio University. Final report, 26 May 2015 to 25 Nov 2016. ... analyze malicious software (malware) with minimum human interaction. The system autonomously analyzes malware samples by analyzing malware binary program

  7. Building the Readiness Data Warehouse

    National Research Council Canada - National Science Library

    Tysor, Sue

    2000-01-01

    .... This is the role of the data warehouse. The data warehouse will deliver business intelligence based on operational data, decision support data and external data to all business units in the organization...

  8. A UHF RFID positioning system for use in warehouse navigation by employees with cognitive disability.

    Science.gov (United States)

    Gunther, Eric J M; Sliker, Levin J; Bodine, Cathy

    2017-11-01

    Unemployment among the almost 5 million working-age adults with cognitive disabilities in the USA is a costly problem in both tax dollars and quality of life. Job coaching is an effective tool to overcome this, but the cost of job coaching services grows with every new employee or change of employment role. There is a need for a cost-effective, automated alternative to job coaching that incurs a one-time cost and can be reused for multiple employees or roles. An effective automated job coach must be aware of its location and the location of destinations within the job site. This project presents a design and prototype of a cart-mounted indoor positioning and navigation system, with the necessary original software, using Ultra High Frequency Radio Frequency Identification (UHF RFID). The system presented in this project for use within a warehouse setting is one component of an automated job coach to assist in the job of order filler. The system demonstrated accuracy to within 0.3 m under the correct conditions, with strong potential to serve as the basis for an effective indoor navigation system to assist warehouse workers with disabilities. Implications for rehabilitation: An automated job coach could improve employability of and job retention for people with cognitive disabilities. An indoor navigation system using ultra high frequency radio frequency identification was proposed, with an average positioning accuracy of 0.3 m. The proposed system, in combination with a non-linear context-aware prompting system, could be used as an automated job coach for warehouse order fillers with cognitive disabilities.
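    The abstract does not spell out the positioning algorithm, so the following is only one plausible sketch: a weighted centroid over the known coordinates of the RFID tags currently in read range, weighted by received signal strength (RSSI). The tag layout and RSSI readings are invented.

    ```python
    # Known (x, y) coordinates of floor/shelf tags, in metres -- hypothetical.
    TAG_POSITIONS = {"tag1": (0.0, 0.0), "tag2": (3.0, 0.0), "tag3": (0.0, 3.0)}

    def estimate_position(reads):
        """reads: {tag_id: rssi_dbm}. Stronger (less negative) RSSI is
        treated as closer, so it gets more weight in the centroid."""
        weights = {t: 10 ** (rssi / 10.0) for t, rssi in reads.items()}  # to mW
        total = sum(weights.values())
        x = sum(w * TAG_POSITIONS[t][0] for t, w in weights.items()) / total
        y = sum(w * TAG_POSITIONS[t][1] for t, w in weights.items()) / total
        return x, y

    print(estimate_position({"tag1": -40.0, "tag2": -50.0, "tag3": -55.0}))
    ```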

  9. Contextualizing Data Warehouses with Documents

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Berlanga, Rafael; Aramburu, Maria Jose

    2008-01-01

    warehouse with a document warehouse, resulting in a contextualized warehouse. Thus, the user first selects an analysis context by supplying some keywords. Then, the analysis is performed on a novel type of OLAP cube, called an R-cube, which is materialized by retrieving and ranking the documents...

  10. DATA WAREHOUSES SECURITY IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Burtescu Emil

    2009-05-01

    Full Text Available Data warehouses were initially implemented and developed by big firms, which used them for working out managerial problems and for making decisions. Later on, because of economic trends and technological progress, the data warehou

  11. Configurable Web Warehouses construction through BPM Systems

    Directory of Open Access Journals (Sweden)

    Andrea Delgado

    2016-08-01

    Full Text Available The process of building Data Warehouses (DW) is well known, with well-defined stages, but at the same time it is mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW, which can be configured according to different domains, through the selection of web sources and the definition of data processing characteristics. A Business Process Management (BPM) System allows modeling and executing Business Processes (BPs), providing support for the automation of processes. To support the process of building flexible WW we propose two levels of BPs: a configuration process to support the selection of web sources and the definition of schemas and mappings, and a feeding process which takes the defined configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.

  12. Congestion-Aware Warehouse Flow Analysis and Optimization

    KAUST Repository

    AlHalawani, Sawsan

    2015-12-18

    Generating realistic configurations of urban models is a vital part of the modeling process, especially if these models are used for evaluation and analysis. In this work, we address the problem of assigning objects to their storage locations inside a warehouse which has a great impact on the quality of operations within a warehouse. Existing storage policies aim to improve the efficiency by minimizing travel time or by classifying the items based on some features. We go beyond existing methods as we analyze warehouse layout network in an attempt to understand the factors that affect traffic within the warehouse. We use simulated annealing based sampling to assign items to their storage locations while reducing traffic congestion and enhancing the speed of order picking processes. The proposed method enables a range of applications including efficient storage assignment, warehouse reliability evaluation and traffic congestion estimation.

  13. Congestion-Aware Warehouse Flow Analysis and Optimization

    KAUST Repository

    AlHalawani, Sawsan; Mitra, Niloy J.

    2015-01-01

    Generating realistic configurations of urban models is a vital part of the modeling process, especially if these models are used for evaluation and analysis. In this work, we address the problem of assigning objects to their storage locations inside a warehouse which has a great impact on the quality of operations within a warehouse. Existing storage policies aim to improve the efficiency by minimizing travel time or by classifying the items based on some features. We go beyond existing methods as we analyze warehouse layout network in an attempt to understand the factors that affect traffic within the warehouse. We use simulated annealing based sampling to assign items to their storage locations while reducing traffic congestion and enhancing the speed of order picking processes. The proposed method enables a range of applications including efficient storage assignment, warehouse reliability evaluation and traffic congestion estimation.
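    The two records above outline simulated-annealing-based sampling for storage assignment. As a hedged sketch, the loop below swaps two items' slots and accepts the swap with the standard annealing rule; for brevity the cost counts only pick-frequency-weighted travel distance, whereas the paper's cost also models aisle congestion.

    ```python
    import math
    import random

    random.seed(0)
    SLOTS = [(x, y) for x in range(5) for y in range(4)]           # slot coordinates
    FREQ = {f"item{i}": random.randint(1, 10) for i in range(20)}  # pick frequency
    DOCK = (0, 0)

    def cost(assign):
        """Frequency-weighted Manhattan distance from each slot to the dock."""
        return sum(FREQ[item] * (abs(sx - DOCK[0]) + abs(sy - DOCK[1]))
                   for item, (sx, sy) in assign.items())

    assign = dict(zip(FREQ, SLOTS))                 # arbitrary initial layout
    current, temp = cost(assign), 10.0
    while temp > 0.01:
        a, b = random.sample(list(assign), 2)
        assign[a], assign[b] = assign[b], assign[a]          # propose a swap
        candidate = cost(assign)
        worse = candidate > current
        if worse and random.random() >= math.exp((current - candidate) / temp):
            assign[a], assign[b] = assign[b], assign[a]      # reject: undo swap
        else:
            current = candidate                              # accept
        temp *= 0.999                                        # cool down
    print("optimized travel cost:", current)
    ```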

  14. WAREHOUSE PERFORMANCE MEASUREMENT - A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Crisan Emil

    2009-05-01

    Full Text Available Companies could gain a cost advantage through the logistics area of their business. Warehouse management is a possible source of logistics cost improvements that companies could use during this economic crisis. The goal of this article is to expose a few

  15. Improving a full-text search engine: the importance of negation detection and family history context to identify cases in a biomedical data warehouse.

    Science.gov (United States)

    Garcelon, Nicolas; Neuraz, Antoine; Benoit, Vincent; Salomon, Rémi; Burgun, Anita

    2017-05-01

    The repurposing of electronic health records (EHRs) can improve clinical and genetic research for rare diseases. However, significant information in rare disease EHRs is embedded in the narrative reports, which contain many negated clinical signs and family medical history. This paper presents a method to detect family history and negation in narrative reports and evaluates its impact on selecting populations from a clinical data warehouse (CDW). We developed a pipeline to process 1.6 million reports from multiple sources. This pipeline is part of the load process of the Necker Hospital CDW. We identified patients with "Lupus and diarrhea," "Crohn's and diabetes," and "NPHP1" from the CDW. The overall precision, recall, specificity, and F-measure were 0.85, 0.98, 0.93, and 0.91, respectively. The proposed method generates a highly accurate identification of cases from a CDW of rare disease EHRs. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
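    The paper's pipeline processes French reports and is considerably more complete, but the core idea (NegEx-style cue matching before a term, plus family-history cues) can be sketched with toy English stand-ins; the cue lists and the 60-character window are assumptions.

    ```python
    import re

    NEGATION_CUES = r"\b(no|denies|without|absence of)\b"
    FAMILY_CUES = r"\b(mother|father|brother|sister|family history)\b"

    def classify_mention(sentence, term):
        """Label a clinical term in a sentence as negated, family history,
        or affirmed (asserted about the patient)."""
        before = sentence.lower().split(term.lower())[0][-60:]  # text before term
        if re.search(NEGATION_CUES, before):
            return "negated"
        if re.search(FAMILY_CUES, sentence.lower()):
            return "family_history"
        return "affirmed"

    print(classify_mention("Patient denies diarrhea.", "diarrhea"))  # negated
    print(classify_mention("Mother has Crohn disease.", "Crohn"))    # family_history
    print(classify_mention("Assessment: lupus flare.", "lupus"))     # affirmed
    ```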

  16. Improving English Pronunciation: An Automated Instructional Approach

    Directory of Open Access Journals (Sweden)

    Sugata Mitra

    2003-01-01

    Full Text Available This paper describes an experiment in which groups of children attempted to improve their English pronunciation using English-language learning software, some English films, and a speech-to-text software engine. The experiment was designed to examine two hypotheses. The first is that speech-to-text software, trained in an appropriate voice, can act as an evaluator of accent and clarity of speech, as well as help learners acquire a standard way of speaking. The second is that groups of children can operate a computer and improve their pronunciation and clarity of speech on their own, with no intervention from teachers. The results of the experiment are positive and point to a possible new pedagogy.

  17. Improving CBIR Systems Using Automated Ranking

    Directory of Open Access Journals (Sweden)

    B. D. Reljin

    2012-11-01

    Full Text Available The most common way of searching images on the Internet and in private collections is based on measuring the similarity between a series of text words assigned to each image and the user's query terms. This method imposes strong constraints (the number of words available to describe the image, the time necessary to thoroughly describe the subjective experience of images, the level of detail in the picture, language barriers, etc.) and is therefore very inefficient. Modern research in this area is focused on content-based image retrieval (CBIR). In this way, all the described disadvantages are overcome and the quality of search results is improved. This paper presents a solution for CBIR systems in which the search procedure is enhanced using sophisticated extraction and ranking of the retrieved images. The search procedure is based on the extraction and preprocessing of a large number of low-level image features. Thus, when the user defines a query image, the proposed algorithm, based on artificial intelligence, shows the user a group of images which are most similar in content to the query image. The proposed algorithm is iterative, so the user can direct the search procedure towards an expected outcome and obtain a set of images that are more similar to the query one.

  18. The Data Warehouse Lifecycle Toolkit

    CERN Document Server

    Kimball, Ralph; Thornthwaite, Warren; Mundy, Joy; Becker, Bob

    2011-01-01

    A thorough update to the industry standard for designing, developing, and deploying data warehouse and business intelligence systems. The world of data warehousing has changed remarkably since the first edition of The Data Warehouse Lifecycle Toolkit was published in 1998. In that time, the data warehouse industry has reached full maturity and acceptance, hardware and software have made staggering advances, and the techniques promoted in the premiere edition of this book have been adopted by nearly all data warehouse vendors and practitioners. In addition, the term "business intelligence" emerge

  19. Health Claims Data Warehouse (HCDW)

    Data.gov (United States)

    Office of Personnel Management — The Health Claims Data Warehouse (HCDW) will receive and analyze health claims data to support management and administrative purposes. The Federal Employee Health...

  20. Quality assessment of digital annotated ECG data from clinical trials by the FDA ECG Warehouse.

    Science.gov (United States)

    Sarapa, Nenad

    2007-09-01

    The FDA mandates that digital electrocardiograms (ECGs) from 'thorough' QTc trials be submitted into the ECG Warehouse in Health Level 7 extensible markup language (XML) format with annotated onset and offset points of waveforms. The FDA did not disclose the exact Warehouse metrics and minimal acceptable quality standards. The author describes the Warehouse scoring algorithms and metrics used by the FDA, points out ways to improve FDA review, and suggests Warehouse benefits for pharmaceutical sponsors. The Warehouse ranks individual ECGs according to their score for each quality metric and produces histogram distributions with Warehouse-specific thresholds that identify ECGs of questionable quality. Automatic Warehouse algorithms assess the quality of QT annotation and the duration of manual QT measurement by the central ECG laboratory.
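
    The actual Warehouse metrics and thresholds are not public, as the abstract notes; the snippet below only illustrates the general ranking-plus-histogram-threshold idea, with an invented quality score and cut-off:

    ```python
    import numpy as np

    def flag_questionable(scores: np.ndarray, threshold_quantile: float = 0.05):
        """Flag ECGs whose quality score falls below a histogram-derived threshold.
        The score and the 5% cut-off are invented for illustration only."""
        threshold = np.quantile(scores, threshold_quantile)
        return np.flatnonzero(scores < threshold), threshold

    rng = np.random.default_rng(1)
    quality = rng.normal(0.8, 0.1, size=1000)   # one synthetic score per ECG
    bad_idx, cutoff = flag_questionable(quality)
    print(f"{bad_idx.size} ECGs below cutoff {cutoff:.3f}")
    ```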

  1. The Community Health Applied Research Network (CHARN) Data Warehouse: a Resource for Patient-Centered Outcomes Research and Quality Improvement in Underserved, Safety Net Populations.

    Science.gov (United States)

    Laws, Reesa; Gillespie, Suzanne; Puro, Jon; Van Rompaey, Stephan; Quach, Thu; Carroll, Joseph; Weir, Rosy Chang; Crawford, Phil; Grasso, Chris; Kaleba, Erin; McBurnie, Mary Ann

    2014-01-01

    The Community Health Applied Research Network (CHARN), funded by the Health Resources and Services Administration, is a research network comprising 18 Community Health Centers organized into four Research Nodes (each including an academic partner) and a data coordinating center. The network represents more than 500,000 diverse safety net patients across 11 states. The primary objective of this paper is to describe the development and implementation process of the CHARN data warehouse. The methods involved regulatory and governance development and approval, development of the content and structure of the warehouse, and processes for extracting the data locally, performing validation, and finally submitting data to the data coordinating center. Version 1 of the warehouse has been developed; for Version 2, tables have been added and the population and years of electronic health record (EHR) data have been expanded. It is feasible to create a national, centralized data warehouse with multiple Community Health Center partners using different EHR systems. It is essential to allow sufficient time: (1) to develop collaborative, trusting relationships among new partners with varied technology, backgrounds, expertise, and interests; (2) to complete institutional, business, and regulatory review processes; (3) to identify and address technical challenges associated with diverse data environments, practices, and resources; and (4) to provide continuing data quality assessments to ensure data accuracy.

  2. Improved compliance by BPM-driven workflow automation.

    Science.gov (United States)

    Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin

    2014-12-01

    Using the methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve and provide standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs of quality assurance. As an automation language, BPMN 2.0 can control every kind of activity or subprocess and is directed at complete, end-to-end workflows. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). With the BPM standard, laboratories thus also gain a common method for sharing process knowledge.

  3. Test Automation Process Improvement A case study of BroadSoft

    OpenAIRE

    Gummadi, Jalendar

    2016-01-01

    This master's thesis studies the improvement of the test automation process at BroadSoft Finland as a case study. A test automation project recently started at BroadSoft, but it is not properly integrated into the existing process. The project converts manual test cases into automated test cases. The aim of this thesis is to study the existing BroadSoft test process and different test automation frameworks. In this thesis, different test automation processes are studied ...

  4. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial.

    Science.gov (United States)

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-07-01

    Collecting trial data in a medical environment is at present mostly performed manually and is therefore time-consuming, prone to errors, and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases.

  5. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial

    International Nuclear Information System (INIS)

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-01-01

    Introduction: Collecting trial data in a medical environment is at present mostly performed manually and therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve the data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. Material and methods: In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. Results: The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Conclusions: Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases

  6. A Multidimensional Data Warehouse for Community Health Centers.

    Science.gov (United States)

    Kunjan, Kislaya; Toscos, Tammy; Turkcan, Ayten; Doebbeling, Brad N

    2015-01-01

    Community health centers (CHCs) play a pivotal role in healthcare delivery to vulnerable populations, but have not yet benefited from a data warehouse that can support improvements in clinical and financial outcomes across the practice. We have developed a multidimensional clinic data warehouse (CDW) by working with 7 CHCs across the state of Indiana and integrating their operational, financial and electronic patient records to support ongoing delivery of care. We describe in detail the rationale for the project, the data architecture employed, the content of the data warehouse, along with a description of the challenges experienced and strategies used in the development of this repository that may help other researchers, managers and leaders in health informatics. The resulting multidimensional data warehouse is highly practical and is designed to provide a foundation for wide-ranging healthcare data analytics over time and across the community health research enterprise.
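
    As a concrete, deliberately simplified illustration of the multidimensional modelling such a warehouse uses — all table and column names below are invented, not taken from the CHC warehouse — a star schema pairs one fact table with its dimensions:

    ```python
    import sqlite3

    # Minimal star schema: a fact table of encounters keyed to two dimensions.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dim_patient (patient_id INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_encounter (
        patient_id INTEGER REFERENCES dim_patient(patient_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        charge_usd REAL
    );
    """)
    conn.execute("INSERT INTO dim_patient VALUES (1, 'F', 1980)")
    conn.execute("INSERT INTO dim_date VALUES (10, 2015, 3)")
    conn.execute("INSERT INTO fact_encounter VALUES (1, 10, 125.0)")

    # Dimensional queries slice the facts by any dimension attribute.
    total = conn.execute("""
        SELECT d.year, SUM(f.charge_usd)
        FROM fact_encounter f JOIN dim_date d USING (date_id)
        GROUP BY d.year
    """).fetchall()
    print(total)  # [(2015, 125.0)]
    ```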

  7. Improving automated load flexibility of nuclear power plants with ALFC

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, Andreas [AREVA GmbH, Karlstein (Germany). Plant Control/Training; Klaus, Peter [E.ON NPP Isar 2, Essenbach (Germany). Plant Operation/Production Engineering

    2016-07-01

    In several German and Swiss nuclear power plants with pressurized water reactors (PWR), the control of the reactor power was and will be improved in order to support the energy transition, with its increasing share of volatile renewable energy in the grid, by flexible load operation according to the needs of the load dispatcher (power system stability). Especially regarding the mentioned German NPPs with a nominal electric power of approx. 1,500 MW, the general objective is the automation of the main grid-relevant operation modes. The new possibilities of digital I&C (such as TELEPERM® XS) enable the automation of these operating modes so that manual support is no longer necessary. These possibilities were and will be implemented by AREVA within the ALFC projects. Manifold algorithms that adapt to the reactor-physics variations during the nuclear load cycle enable precise control of the axial power density distribution and of the reactivity management in the reactor core. Finally, this is the basis for highly automated load flexibility while respecting and monitoring the operational limits of a PWR.

  8. Improving automated load flexibility of nuclear power plants with ALFC

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, Andreas [AREVA GmbH, Karlstein (Germany). Section Manager Training; Klaus, Peter [Preussenelektra NPP, Essenbach (Germany). Production Engineering

    2017-03-15

    In several German and Swiss nuclear power plants with pressurized water reactors (PWR), the control of the reactor power was and will be improved in order to support the energy transition, with its increasing share of volatile renewable energy in the grid, by flexible load operation according to the needs of the load dispatcher (power system stability). Especially regarding the mentioned German NPPs with a nominal electric power of approx. 1,500 MW, the general objectives are several automated grid-relevant operation modes. The new possibilities of digital I&C (such as TELEPERM® XS) enable the automation of these operating modes so that manual support is no longer necessary. These possibilities were and will be implemented by AREVA within the ALFC projects. Manifold algorithms that adapt to the reactor-physics variations during the nuclear load cycle enable precise control of the axial power density distribution and of the reactivity management in the reactor core. Finally, this is the basis for highly automated load flexibility while respecting and monitoring the operational limits of a PWR.

  9. Improving medical stores management through automation and effective communication.

    Science.gov (United States)

    Kumar, Ashok; Cariappa, M P; Marwaha, Vishal; Sharma, Mukti; Arora, Manu

    2016-01-01

    Medical stores management in hospitals is a tedious and time-consuming chore, with limited resources tasked for the purpose and poor penetration of information technology. The process of automation is slow-paced due to various inherent factors and is being challenged by increasing inventory loads and escalating budgets for the procurement of drugs. We carried out an in-depth case study at the medical stores of a tertiary care health care facility. An iterative six-step Quality Improvement (QI) process was implemented based on the Plan-Do-Study-Act (PDSA) cycle. The QI process was modified as required to fit the medical stores management model. The results were evaluated after six months. After the implementation of the QI process, 55 drugs in the medical store inventory that had expired since 2009 were replaced with fresh stock by the suppliers as a result of effective communication through upgraded database management. Various pending audit objections were dropped due to the streamlined documentation and processes. Inventory management improved drastically due to automation, with disposal orders being initiated four months prior to the expiry of drugs and correct demands being generated two months prior to the depletion of stocks. The monthly expense summary of drugs was now being completed within ten days of the closing month. Improving communication systems within the hospital, with vendor database management and outreach to clinicians, is important. Automation of inventory management needs to be simple and user-friendly, utilizing existing hardware. Physical monitoring of stores is indispensable, especially due to their scattered nature. Staff training and standardized documentation protocols are the other keystones of optimal medical store management.

  10. Toward best practice: leveraging the electronic patient record as a clinical data warehouse.

    Science.gov (United States)

    Ledbetter, C S; Morgan, M W

    2001-01-01

    Automating clinical and administrative processes via an electronic patient record (EPR) gives clinicians the point-of-care tools they need to deliver better patient care. However, to improve clinical practice as a whole and then evaluate it, healthcare must go beyond basic automation and convert EPR data into aggregated, multidimensional information. Unfortunately, few EPR systems have the established, powerful analytical clinical data warehouses (CDWs) required for this conversion. This article describes how an organization can support best practice by leveraging a CDW that is fully integrated into its EPR and clinical decision support (CDS) system. The article (1) discusses the requirements for comprehensive CDS, including on-line analytical processing (OLAP) of data at both transactional and aggregate levels, (2) suggests that the transactional data acquired by an on-line transaction processing (OLTP) EPR system must be remodeled to support retrospective, population-based, aggregate analysis of those data, and (3) concludes that this aggregate analysis is best provided by a separate CDW system.
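
    A toy sketch of that remodeling step (data and column names invented): transaction-level rows are pivoted into an aggregate, multidimensional view suitable for population-level analysis:

    ```python
    import pandas as pd

    # Transactional (OLTP) rows: one record per medication order.
    orders = pd.DataFrame({
        "unit":    ["ICU", "ICU", "Med", "Med", "ICU"],
        "drug":    ["vanc", "cefaz", "vanc", "vanc", "cefaz"],
        "quarter": ["Q1", "Q1", "Q1", "Q2", "Q2"],
        "doses":   [10, 4, 7, 12, 3],
    })

    # Remodelled for aggregate analysis: a small OLAP-style cube of dose
    # counts by unit x drug x quarter.
    cube = orders.pivot_table(index="unit", columns=["drug", "quarter"],
                              values="doses", aggfunc="sum", fill_value=0)
    print(cube)
    ```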

  11. Improved automated perimetry performance following exposure to Mozart.

    Science.gov (United States)

    Fiorelli, V Macedo Batista; Kasahara, N; Cohen, R; França, A Santucci; Della Paolera, M; Mandia, C; de Almeida, G Vicente

    2006-05-01

    To evaluate the performance on automated perimetry (AP) after listening to a Mozart sonata in normal subjects naive to AP. 60 naive normal subjects underwent AP (SITA 24-2). The study group (30 subjects) underwent AP after listening to Mozart's Sonata for Two Pianos in D Major and the control group (30 subjects) underwent AP without previous exposure to the music. The study group had significantly lower fixation loss, false positive, and false negative rates compared to controls. Listening to Mozart seems to improve AP performance in normal naive subjects.

  12. A study on building data warehouse of hospital information system.

    Science.gov (United States)

    Li, Ping; Wu, Tao; Chen, Mu; Zhou, Bin; Xu, Wei-guo

    2011-08-01

    Existing hospital information systems with simple statistical functions cannot meet current management needs. It is well known that hospital resources are distributed among hospitals under separate ownership, as in the case of the regional coordination of medical services. In this study, to integrate and make full use of medical data effectively, we propose a data warehouse modeling method for the hospital information system. The method can also be employed for a distributed-hospital medical service system. To ensure that hospital information supports the diverse needs of health care, the framework of the hospital information system has three layers: datacenter layer, system-function layer, and user-interface layer. This paper discusses the role of a data warehouse management system in handling hospital information, from the establishment of the data theme, to the design of a data model, to the establishment of a data warehouse. Online analytical processing tools support user-friendly multidimensional analysis from a number of different angles to extract the required data and information. Use of the data warehouse improves online analytical processing and mitigates deficiencies in the decision support system. The hospital information system based on a data warehouse effectively employs statistical analysis and data mining technology to handle massive quantities of historical data, and summarizes clinical and hospital information for decision making. This paper proposes the use of a data warehouse for a hospital information system, specifically a data warehouse for the theme of hospital information, covering the determination of granularity, modeling, and so on. The processing of patient information is given as an example that demonstrates the usefulness of this method for hospital information management. Data warehouse technology is an evolving technology, and more and more decision support information extracted by data mining and with decision-making technology is

  13. University Accreditation using Data Warehouse

    Science.gov (United States)

    Sinaga, A. S.; Girsang, A. S.

    2017-01-01

    Accreditation aims to assure the quality of the education provided by an institution. The institution needs comprehensive documents that provide accurate information before review by the assessors. Therefore, academic documents should be stored effectively to ease fulfilling the requirements of accreditation. However, the data are generally derived from various sources and types, and are unstructured and dispersed. This paper proposes the design of a data warehouse to integrate these various data in order to prepare good academic documents for accreditation in a university. The data warehouse is built using the nine steps introduced by Kimball. This method is applied to produce a data warehouse based on the accreditation assessment, focusing on the academic part. The data warehouse shows that it can analyse the data to prepare the accreditation assessment documents.

  14. Operational management system for warehouse logistics of metal trading companies

    Directory of Open Access Journals (Sweden)

    Khayrullin Rustam Zinnatullovich

    2014-07-01

    Logistics is an effective tool in business management. The metal trading business is part of the metal promotion chain from producer to consumer, designed to serve as a link connecting the interests of steel producers and end users. The specifics of warehouse trading must be taken into account. The specificity of warehouse metal trading consists primarily in the fact that purchases are made in large lots while sales are made in medium and small lots. Loading and unloading of railcars and trucks is performed by overhead cranes. Part of the purchased goods is shipped in relatively large lots without presale preparation. Another part of the goods undergoes presale preparation. Indoor and outdoor warehouses are used, with an address storage system. During prolonged storage the metal rusts. Some of the goods are subjected to final completion (cutting, welding, coloration) in service centers and small factories, usually located at the warehouse. The number of simultaneously shipped railcars and the size of the loading crew can reach several dozen. It is therefore necessary to supervise the loading workers, to coordinate and monitor the performance of loading and unloading operations, to analyse their work daily, and to evaluate the warehouse operations as a whole. There is also a need to manage and control the movement of railcars and trucks on the warehouse territory in order to reduce storage and transport costs and improve customer service. The ERP and WMS systems that are widely used do not fully cover the functions and processes of warehouse trading and do not effectively manage all logistics processes. In this paper, specialized software is proposed, intended for operational logistics management in warehouse trading of metal products. The basic functions and processes of warehouse metal trading are described. Effectiveness indices for logistics processes and key performance indicators for warehouse trading are proposed.

  15. Improving automated 3D reconstruction methods via vision metrology

    Science.gov (United States)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
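
    One simple instance of the metrological evaluation described — assessing a dense point cloud against a known geometric shape — is a least-squares plane fit with an RMS point-to-plane figure (synthetic data below; the actual VDI/VDE-based quality parameters are richer):

    ```python
    import numpy as np

    def plane_fit_rms(points: np.ndarray) -> float:
        """Fit a least-squares plane to a point-cloud patch and return the RMS
        of point-to-plane distances, a basic metrological quality figure."""
        centred = points - points.mean(axis=0)
        # Normal of the best-fit plane = singular vector of least variance.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        normal = vt[-1]
        dist = centred @ normal
        return float(np.sqrt((dist ** 2).mean()))

    rng = np.random.default_rng(2)
    xy = rng.uniform(-1, 1, (500, 2))
    z = 0.002 * rng.standard_normal(500)      # ~2 mm noise on a flat patch (metres)
    cloud = np.column_stack([xy, z])
    print(f"RMS deviation: {plane_fit_rms(cloud) * 1000:.2f} mm")
    ```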

  16. Improving Quality and Occupational Safety on Automated Casting Lines

    Directory of Open Access Journals (Sweden)

    Kukla S.

    2017-09-01

    The paper presents a practical example of improving quality and occupational safety on automated casting lines. Working conditions on a line of box moulding with a horizontal mould split were analysed due to the low degree of automation at the stage of core and filter installation as well as spheroidizing mortar dosing. A simulation analysis was carried out to examine the grounds for introducing an automatic mortar dispenser to the mould. For this research, a simulation model of the line was created in Arena, universal software for the modelling and simulation of manufacturing systems by Rockwell Software Inc. A simulation experiment was carried out on the model in order to determine the basic parameters of the working system. Organization and working conditions in other sections of the line were also analysed, paying particular attention to quality, ergonomics, and occupational safety. An ergonomics analysis was carried out on the manual core installation and filter installation workplaces, and changes to these workplaces were suggested in order to eliminate actions that are unnecessary and onerous for employees.

  17. Metadata to Support Data Warehouse Evolution

    Science.gov (United States)

    Solodovnikova, Darja

    The focus of this chapter is the metadata necessary to support data warehouse evolution. We present a data warehouse framework that is able to track the evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss a significant part of the framework: the metadata repository, which stores information about the data warehouse, its logical and physical schemata, and their versions. We propose a physical implementation of a multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.
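
    The chapter's repository design is not reproduced here; as a rough, invented sketch of the versioning idea — every schema modification appends a new version record rather than overwriting the old one — consider:

    ```python
    from datetime import datetime, timezone

    class SchemaRepository:
        """Toy metadata repository: each schema change appends a new version,
        so queries can be mapped onto the schema current at load time."""
        def __init__(self):
            self.versions = []

        def record_change(self, schema: dict, description: str):
            self.versions.append({
                "version": len(self.versions) + 1,
                "valid_from": datetime.now(timezone.utc),
                "schema": schema,
                "change": description,
            })

        def current(self) -> dict:
            return self.versions[-1]["schema"]

    repo = SchemaRepository()
    repo.record_change({"sales": ["date", "amount"]}, "initial schema")
    repo.record_change({"sales": ["date", "amount", "store_id"]}, "added store dimension")
    print(repo.current())
    ```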

  18. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Background: This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results: We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion: BioWarehouse embodies significant progress on the database integration problem for bioinformatics.

  19. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
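
    As a minimal, hypothetical illustration of the kind of cross-database SQL query described (the two-table schema below is far simpler than BioWarehouse's actual schema):

    ```python
    import sqlite3

    # Invented, simplified tables standing in for integrated source databases.
    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE enzyme  (ec_number TEXT PRIMARY KEY);
    CREATE TABLE protein (id INTEGER PRIMARY KEY, ec_number TEXT, sequence TEXT);
    """)
    db.executemany("INSERT INTO enzyme VALUES (?)",
                   [("1.1.1.1",), ("2.7.1.1",), ("4.2.1.22",)])
    db.execute("INSERT INTO protein VALUES (1, '1.1.1.1', 'MKT')")

    # Fraction of EC-numbered activities with no sequence in the warehouse.
    row = db.execute("""
        SELECT 1.0 * SUM(p.id IS NULL) / COUNT(*)
        FROM enzyme e LEFT JOIN protein p ON p.ec_number = e.ec_number
    """).fetchone()
    print(f"fraction without sequence: {row[0]:.2f}")   # 0.67 on this toy data
    ```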

  20. Security Data Warehouse Application

    Science.gov (United States)

    Vernon, Lynn R.; Hennan, Robert; Ortiz, Chris; Gonzalez, Steve; Roane, John

    2012-01-01

    The Security Data Warehouse (SDW) is used to aggregate and correlate all JSC IT security data. This includes IT asset inventory such as operating systems and patch levels, users, user logins, remote access dial-in and VPN, and vulnerability tracking and reporting. The correlation of this data allows for an integrated understanding of current security issues and systems by providing this data in a format that associates it to an individual host. The cornerstone of the SDW is its unique host-mapping algorithm that has undergone extensive field tests, and provides a high degree of accuracy. The algorithm comprises two parts. The first part employs fuzzy logic to derive a best-guess host assignment using incomplete sensor data. The second part is logic to identify and correct errors in the database, based on subsequent, more complete data. Host records are automatically split or merged, as appropriate. The process had to be refined and thoroughly tested before the SDW deployment was feasible. Complexity was increased by adding the dimension of time. The SDW correlates all data with its relationship to time. This lends support to forensic investigations, audits, and overall situational awareness. Another important feature of the SDW architecture is that all of the underlying complexities of the data model and host-mapping algorithm are encapsulated in an easy-to-use and understandable Perl language Application Programming Interface (API). This allows the SDW to be quickly augmented with additional sensors using minimal coding and testing. It also supports rapid generation of ad hoc reports and integration with other information systems.
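
    The host-mapping algorithm itself is not published in detail; purely as an invented sketch of its first (fuzzy, best-guess) part — scoring attribute agreement between two incomplete sensor records and merging only above a cutoff:

    ```python
    def host_match_score(rec_a: dict, rec_b: dict) -> float:
        """Score the likelihood that two sensor records describe the same host.
        Attributes and weights are invented for illustration."""
        weights = {"mac": 0.6, "ip": 0.25, "hostname": 0.15}
        score = 0.0
        for attr, weight in weights.items():
            a, b = rec_a.get(attr), rec_b.get(attr)
            if a is not None and a == b:
                score += weight     # credit only attributes both records report
        return score

    scan_record = {"ip": "10.0.0.5", "hostname": "ws42"}   # incomplete sensor data
    inventory_record = {"mac": "aa:bb:cc", "ip": "10.0.0.5", "hostname": "ws42"}
    print(host_match_score(scan_record, inventory_record))  # 0.4
    # Merge the records into one host entry only if the score exceeds a threshold.
    ```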

  1. Development of a Clinical Data Warehouse for Hospital Infection Control

    Science.gov (United States)

    Wisniewski, Mary F.; Kieszkowski, Piotr; Zagorski, Brandon M.; Trick, William E.; Sommers, Michael; Weinstein, Robert A.

    2003-01-01

    Existing data stored in a hospital's transactional servers have enormous potential to improve performance measurement and health care quality. Accessing, organizing, and using these data to support research and quality improvement projects are evolving challenges for hospital systems. The authors report development of a clinical data warehouse that they created by importing data from the information systems of three affiliated public hospitals. They describe their methodology; difficulties encountered; responses from administrators, computer specialists, and clinicians; and the steps taken to capture and store patient-level data. The authors provide examples of their use of the clinical data warehouse to monitor antimicrobial resistance, to measure antimicrobial use, to detect hospital-acquired bloodstream infections, to measure the cost of infections, and to detect antimicrobial prescribing errors. In addition, they estimate the amount of time and money saved and the increased precision achieved through the practical application of the data warehouse. PMID:12807807

  2. Development of a clinical data warehouse for hospital infection control.

    Science.gov (United States)

    Wisniewski, Mary F; Kieszkowski, Piotr; Zagorski, Brandon M; Trick, William E; Sommers, Michael; Weinstein, Robert A

    2003-01-01

    Existing data stored in a hospital's transactional servers have enormous potential to improve performance measurement and health care quality. Accessing, organizing, and using these data to support research and quality improvement projects are evolving challenges for hospital systems. The authors report development of a clinical data warehouse that they created by importing data from the information systems of three affiliated public hospitals. They describe their methodology; difficulties encountered; responses from administrators, computer specialists, and clinicians; and the steps taken to capture and store patient-level data. The authors provide examples of their use of the clinical data warehouse to monitor antimicrobial resistance, to measure antimicrobial use, to detect hospital-acquired bloodstream infections, to measure the cost of infections, and to detect antimicrobial prescribing errors. In addition, they estimate the amount of time and money saved and the increased precision achieved through the practical application of the data warehouse.

  3. Structuring warehouse management : Exploring the fit between warehouse characteristics and warehouse planning and control structure, and its effect on warehouse performance

    NARCIS (Netherlands)

    N. Faber (Nynke)

    2015-01-01

    This dissertation studies the management processes that plan, control, and optimize warehouse operations. The inventory in warehouses decouples supply from demand. As such, economies of scale can be achieved in production, purchasing, and transport. As warehouses become more and more

  4. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    Science.gov (United States)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.

  5. Electronic warehouse receipts registry as a step from paper to electronic warehouse receipts

    Directory of Open Access Journals (Sweden)

    Kovačević Vlado

    2016-01-01

    The aim of this paper is to determine the economic viability of introducing an electronic warehouse receipt registry as a step toward electronic warehouse receipts. Both forms of warehouse receipt, paper and electronic, exist in practice, but paper warehouse receipts are more widespread. In this paper, the dematerialization process is analyzed in two steps. The first step is the dematerialization of the warehouse receipt registry, with warehouse receipts still in paper form. The second step is the introduction of electronic warehouse receipts themselves. Dematerialization of warehouse receipts is more complex than that of financial securities, because of the individual characteristics of each warehouse receipt. As a consequence, electronic warehouse receipts are in place for only a handful of commodities, namely cotton and a few grains. Nevertheless, the movement towards electronic warehouse receipts, which began several decades ago with financial securities, is now taking hold in the agricultural sector. This paper analyzes the Serbian electronic registry, since Serbia is the first country in Europe with an electronic warehouse receipt registry, donated by FAO. The analysis shows the considerable impact of establishing an electronic warehouse receipt registry on enhancing the security of the public warehouse system and on advancing trade in warehouse receipts.

  6. Refrigerated Warehouse Demand Response Strategy Guide

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Doug [VaCom Technologies, San Luis Obispo, CA (United States); Castillo, Rafael [VaCom Technologies, San Luis Obispo, CA (United States); Larson, Kyle [VaCom Technologies, San Luis Obispo, CA (United States); Dobbs, Brian [VaCom Technologies, San Luis Obispo, CA (United States); Olsen, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-11-01

    This guide summarizes demand response measures that can be implemented in refrigerated warehouses. In an appendix, it also addresses related energy efficiency opportunities. Reducing overall grid demand during peak periods and reducing energy consumption has benefits for facility operators, grid operators, utility companies, and society. Statewide demand response potential for the refrigerated warehouse sector in California is estimated to be over 22.1 megawatts. Two categories of demand response strategies are described in this guide: load shifting and load shedding. Load shifting can be accomplished via pre-cooling, capacity limiting, and battery charger load management. Load shedding can be achieved by lighting reduction, demand defrost and defrost termination, infiltration reduction, and shutting down miscellaneous equipment. Estimation of the costs and benefits of demand response participation yields simple payback periods of 2-4 years. To improve demand response performance, it is suggested to install air curtains and another form of infiltration barrier, such as a roll-up door, for the passageways. Further modifications to increase the efficiency of the refrigeration unit are also analyzed. A larger condenser can maintain the minimum saturated condensing temperature (SCT) for more hours of the day. Lowering the SCT reduces the compressor lift, which results in an overall increase in refrigeration system capacity and energy efficiency. Another way of saving energy in refrigerated warehouses is eliminating the use of under-floor resistance heaters. A more energy-efficient alternative to resistance heaters is to utilize the heat rejected from the condenser through a heat exchanger. These energy efficiency measures improve efficiency either by reducing the required electric energy input for the refrigeration system, by helping to curtail the refrigeration load on the system, or by reducing both the load and the required energy input.

  7. Logistics support economy and efficiency through consolidation and automation

    Science.gov (United States)

    Savage, G. R.; Fontana, C. J.; Custer, J. D.

    1985-01-01

    An integrated logistics support system, which would provide routine access to space and be cost-competitive as an operational space transportation system, was planned and implemented to support the NSTS program launch-on-time goal of 95 percent. A decision was made to centralize the Shuttle logistics functions in a modern facility that would provide office and training space and an efficient warehouse area. In this warehouse, the emphasis is on automation of the storage and retrieval function, while utilizing state-of-the-art warehousing and inventory management technology. This consolidation, together with the automation capabilities being provided, will allow for more effective utilization of personnel and improved responsiveness. In addition, this facility will be the prime support for the fully integrated logistics support of the operations era NSTS and reduce the program's management, procurement, transportation, and supply costs in the operations era.

  8. Development of global data warehouse for beam diagnostics at SSRF

    International Nuclear Information System (INIS)

    Lai Longwei; Leng Yongbin; Yan Yingbing; Chen Zhichu

    2015-01-01

    The beam diagnostic system at the Shanghai Synchrotron Radiation Facility (SSRF) is adequate for daily operation and machine study. Without an effective event-detection mechanism, however, it is difficult to dump and analyze abnormal phenomena such as global orbit disturbances, BPM malfunctions, and DCCT noise. A global beam diagnostic data warehouse was built in order to monitor the status of the accelerator and the beam instruments. The data warehouse was designed as a soft IOC hosted on an independent server. When abnormal phenomena occur, it is triggered and stores the relevant data for further analysis. The results show that the data warehouse can effectively detect abnormal phenomena of the machine and the beam diagnostic system, and can be used to calculate confidence indicators for the beam instruments. It provides an efficient tool for the improvement of the beam diagnostic system and the accelerator. (authors)
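
    The published abstract gives no implementation; a generic sketch of the trigger-and-dump idea — a rolling buffer of readings frozen when a value crosses an invented threshold — could look like:

    ```python
    from collections import deque

    class TriggeredRecorder:
        """Keep a rolling buffer of beam readings and dump it when a reading
        crosses an abnormality threshold (generic sketch, not the SSRF code)."""
        def __init__(self, size: int = 1000, threshold: float = 3.0):
            self.buffer = deque(maxlen=size)
            self.threshold = threshold
            self.dumps = []

        def push(self, reading: float):
            self.buffer.append(reading)
            if abs(reading) > self.threshold:          # abnormal event detected
                self.dumps.append(list(self.buffer))   # freeze history for analysis

    rec = TriggeredRecorder(size=5, threshold=2.5)
    for x in [0.1, 0.3, -0.2, 3.1, 0.2]:
        rec.push(x)
    print(len(rec.dumps), rec.dumps[0])
    ```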

  9. Does laboratory automation for the preanalytical phase improve data quality?

    Science.gov (United States)

    Lima-Oliveira, Gabriel; Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Montagnana, Martina; Brocco, Giorgio; Voi, Monica; Picheth, Geraldo; Guidi, Gian Cesare

    2013-10-01

    Our aim was to evaluate whether automation for the preanalytical phase improves data quality. Blood from 100 volunteers was collected into two vacuum tubes. One sample from each volunteer was respectively assigned to (G1) traditional processing, starting with centrifugation at 1200 g for 10 min, and (G2) the MODULAR PRE-ANALYTICALS EVO-MPA system. The routine clinical chemistry tests were performed in duplicate on the same instrument Cobas 6000 module. G1 samples were uncapped manually and immediately placed into the instrument. G2 samples were directly fed from the MPA system to the instrument without further staff intervention. At the end, (1) the G1 samples were stored for 6 h at 4 °C as prescribed in our accredited laboratory and (2) the G2 samples were stored for 6 h in the MPA output buffer. Results from G1 and G2, before and after storage, were compared. Significant increases were observed in G1 compared with G2 samples as follows: (1) before storage for alkaline phosphatase (ALP), lactate dehydrogenase (LDH), phosphate (P), magnesium (MG), iron (FE), and hemolysis index and (2) after storage for total cholesterol (COL), triglycerides (TG), total protein (TP), albumin (ALB), blood urea nitrogen (BUN), creatinine (CRE), uric acid (UA), ALP, pancreatic amylase, aspartate aminotransferase (AST), alanine aminotransferase (ALT), γ-glutamyltransferase (GGT), LDH, creatine kinase (CK), calcium (CA), FE, sodium (NA), potassium (K), and hemolysis index. Moreover, significant increases were observed in (3) G1-after versus G1-before storage samples for COL, high-density lipoprotein cholesterol, TG, TP, ALB, BUN, CRE, UA, AST, ALT, GGT, LDH, P, CA, MG, FE, NA, K, and hemolysis index and (4) G2-after versus G2-before storage only for BUN, AST, LDH, P, and CA. In conclusion, our results show that the MPA system improves the quality of laboratory testing.

  10. Automation of P-3 Simulations to Improve Operator Workload

    Science.gov (United States)

    2012-09-01

    …this thesis and because they each have a unique approach to solving the problem of entity behavior automation. A. DISCOVERY MACHINE: The United States…from the operators and can be automated in JSAF using the mental simulation approach. Two trips were conducted to visit the Naval Warfare

  11. Combining prior day contours to improve automated prostate segmentation

    International Nuclear Information System (INIS)

    Godley, Andrew; Sheplan Olsen, Lawrence J.; Stephans, Kevin; Zhao Anzi

    2013-01-01

    Purpose: To improve the accuracy of automatically segmented prostate, rectum, and bladder contours required for online adaptive therapy. The contouring accuracy on the current image guidance [image guided radiation therapy (IGRT)] scan is improved by combining contours from earlier IGRT scans via the simultaneous truth and performance level estimation (STAPLE) algorithm. Methods: Six IGRT prostate patients treated with daily kilovoltage (kV) cone-beam CT (CBCT) had their original plan CT and nine CBCTs contoured by the same physician. Three types of automated contours were produced for analysis. (1) Plan: By deformably registering the plan CT to each CBCT and then using the resulting deformation field to morph the plan contours to match the CBCT anatomy. (2) Previous: The contour set drawn by the physician on the previous day's CBCT is similarly deformed to match the current CBCT anatomy. (3) STAPLE: The contours drawn by the physician on each prior CBCT and the plan CT are deformed to match the CBCT anatomy to produce multiple contour sets. These sets are combined using the STAPLE algorithm into one optimal set. Results: Compared to Plan and Previous, STAPLE improved the average Dice's coefficient (DC) against the original physician-drawn CBCT contours as follows (Plan, Previous, and STAPLE, respectively): Bladder: 0.81 ± 0.13, 0.91 ± 0.06, and 0.92 ± 0.06; Prostate: 0.75 ± 0.08, 0.82 ± 0.05, and 0.84 ± 0.05; and Rectum: 0.79 ± 0.06, 0.81 ± 0.06, and 0.85 ± 0.04. The STAPLE results are within intraobserver consistency, determined by the physician blindly recontouring a subset of CBCTs. Comparing plans recalculated using the physician and STAPLE contours showed an average disagreement of less than 1% for prostate D98 and mean dose, and 5% and 3% for bladder and rectum mean dose, respectively. One scan takes an average of 19 s to contour. Using five scans plus STAPLE takes less than 110 s on a 288-core graphics processing unit. Conclusions: Combining the plan and all prior days via
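
    For reference, Dice's coefficient between two binary masks is DC = 2|A∩B| / (|A| + |B|); a minimal numpy version (the toy masks here are invented):

    ```python
    import numpy as np

    def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice's coefficient DC = 2|A∩B| / (|A|+|B|) for binary masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    auto = np.zeros((4, 4), dtype=int); auto[1:3, 1:3] = 1      # automated contour
    manual = np.zeros((4, 4), dtype=int); manual[1:4, 1:3] = 1  # physician contour
    print(round(dice_coefficient(auto, manual), 2))  # 0.8
    ```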

  12. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries called WATCHMAN, which is particularly well suited for data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
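
    The paper's admission and replacement algorithms are built around that profit metric; a simplified reading (the exact formula below is a guess based on the abstract's description, not the paper's definition) might rank cached sets like this:

    ```python
    def profit(entry: dict) -> float:
        """WATCHMAN-style profit of a cached retrieved set: sets referenced
        often, expensive to recompute, and small are the most valuable."""
        return entry["ref_rate"] * entry["exec_cost"] / entry["size"]

    cache = [
        {"query": "q1", "ref_rate": 0.8, "exec_cost": 40.0, "size": 200},
        {"query": "q2", "ref_rate": 0.1, "exec_cost": 90.0, "size": 900},
        {"query": "q3", "ref_rate": 0.5, "exec_cost": 10.0, "size": 50},
    ]
    # On overflow, evict the set with the lowest profit (here: q2).
    victim = min(cache, key=profit)
    print(victim["query"])
    ```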

  13. Expected Improvements in Work Truck Efficiency Through Connectivity and Automation

    Energy Technology Data Exchange (ETDEWEB)

    Walkowicz, Kevin A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-03-12

    This presentation focuses on the potential impact of connected and automated technologies on commercial vehicle operations. It includes topics such as the U.S. Department of Energy's Energy Efficient Mobility Systems (EEMS) program and the Systems and Modeling for Accelerated Research in Transportation (SMART) Mobility Initiative. It also describes National Renewable Energy Laboratory (NREL) research findings pertaining to the potential energy impacts of connectivity and automation and stresses the need for integration and optimization to take advantage of the benefits offered by these transformative technologies while mitigating the potential negative consequences.

  14. A Million Cancer Genome Warehouse

    Science.gov (United States)

    2012-11-20

    …of a national program for Cancer Information Donors, the American Society for Clinical Oncology (ASCO) has proposed a rapid learning system for…or Scala and Spark; "scrum" organization of small programming teams; calculating "velocity" to predict time to develop new features; and Agile…

  15. Improvement of Computer Software Quality through Software Automated Tools.

    Science.gov (United States)

    1986-08-30

    …information that are returned from the tools to the human user, and the forms in which these outputs are presented. STAGE OF DEVELOPMENT: What… AUTOMATED SOFTWARE TOOL MONITORING SYSTEM, APPENDIX 2, INTRODUCTION: This document and the Automated Software Tool Monitoring Program (Appendix 1) are…Output features provide links from the tool to both the human user and the target machine (where applicable). They describe the types

  16. Improving Automated Lexical and Discourse Analysis of Online Chat Dialog

    Science.gov (United States)

    2007-09-01

    “chatbots”. Chatbots are automated user software, independent of the chat room system, that assist human participants, provide entertainment to the chat…both the chat room system and chatbots as well as information provided by the system and chatbots were often preceded by either the token “.” or…personal chatbots. Finally, we also classified chatbot responses as system dialog acts. The Yes/No Question chat dialog act is simply a question that

  17. Creation of Warehouse Models for Different Layout Designs

    OpenAIRE

    Köhler, Mirko; Lukić, Ivica; Nenadić, Krešimir

    2014-01-01

    The warehouse is one of the most important components in the logistics of a supply chain network. The efficiency of warehouse operations is influenced by many different factors. One of the key factors is the rack layout configuration. A warehouse with a good rack layout may significantly reduce the cost of warehouse servicing. The objective of this paper is to give a scheme for building warehouse models with one-block and two-block layouts for future research in warehouse optimization. An algorithm ...

  18. Improving patient safety via automated laboratory-based adverse event grading.

    Science.gov (United States)

    Niland, Joyce C; Stiller, Tracey; Neat, Jennifer; Londrc, Adina; Johnson, Dina; Pannoni, Susan

    2012-01-01

    The identification and grading of adverse events (AEs) during the conduct of clinical trials is a labor-intensive and error-prone process. This paper describes and evaluates a software tool developed by City of Hope to automate complex algorithms to assess laboratory results and identify and grade AEs. We compared AEs identified by the automated system with those previously assessed manually, to evaluate missed/misgraded AEs. We also conducted a prospective paired time assessment of automated versus manual AE assessment. We found a substantial improvement in accuracy/completeness with the automated grading tool, which identified an additional 17% of severe grade 3-4 AEs that had been missed/misgraded manually. The automated system also provided an average time saving of 5.5 min per treatment course. With 400 ongoing treatment trials at City of Hope and an average of 1800 laboratory results requiring assessment per study, the implications of these findings for patient safety are enormous.
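
    The tool's actual algorithms are not reproduced in the abstract; a sketch of the general idea — mapping a laboratory value onto an AE grade by thresholds — is below. The cut points are illustrative only; real grading follows the CTCAE tables and protocol-specific reference ranges.

    ```python
    def grade_neutropenia(anc: float) -> int:
        """Map an absolute neutrophil count (cells/µL) to an AE grade.
        Thresholds are invented for illustration, not clinical use."""
        if anc >= 1500: return 0   # within normal range
        if anc >= 1000: return 1
        if anc >= 750:  return 2   # hypothetical cut points
        if anc >= 500:  return 3
        return 4

    for result in (1800, 900, 400):
        print(result, "-> grade", grade_neutropenia(result))
    ```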

  19. Automation of cDNA Synthesis and Labelling Improves Reproducibility

    Directory of Open Access Journals (Sweden)

    Daniel Klevebring

    2009-01-01

    Background. Several technologies, such as in-depth sequencing and microarrays, enable large-scale interrogation of genomes and transcriptomes. In this study, we assess reproducibility and throughput by moving all laboratory procedures to a robotic workstation capable of handling superparamagnetic beads. Here, we describe a fully automated procedure for cDNA synthesis and labelling for microarrays, where the purification steps prior to and after labelling are based on precipitation of DNA on carboxylic acid-coated paramagnetic beads. Results. The fully automated procedure allows samples arrayed on a microtiter plate to be processed in parallel without manual intervention, ensuring high reproducibility. We compare our results to a manual sample preparation procedure and, in addition, use a comprehensive reference dataset to show that the protocol described performs better than similar manual procedures. Conclusions. We demonstrate, in an automated gene expression microarray experiment, a reduced variance between replicates, resulting in an increase in the statistical power to detect differentially expressed genes, thus allowing smaller differences between samples to be identified. This protocol can, with minor modifications, be used to create cDNA libraries for other applications such as in-depth analysis using next-generation sequencing technologies.

  20. Improving Usefulness of Automated Driving by Lowering Primary Task Interference through HMI Design

    Directory of Open Access Journals (Sweden)

    Frederik Naujoks

    2017-01-01

    Full Text Available During conditionally automated driving (CAD), driving time can be used for non-driving-related tasks (NDRTs). To increase the safety and comfort of an automated ride, upcoming automated manoeuvres such as lane changes or speed adaptations may be communicated to the driver. However, as the driver’s primary task consists of performing NDRTs, they might prefer to be informed in a nondistracting way. In this paper, the potential of using speech output to improve human-automation interaction is explored. A sample of 17 participants completed different driving situations involving communication between the automation and the driver in a motion-based driving simulator. The Human-Machine Interface (HMI) of the automated driving system consisted of a visual-auditory HMI with either generic auditory feedback (i.e., standard information tones) or additional speech output. The drivers were asked to perform a common NDRT during the drive. Compared to generic auditory output, communicating upcoming automated manoeuvres additionally by speech led to a decrease in self-reported visual workload and decreased monitoring of the visual HMI. However, interruptions of the NDRT were not affected by additional speech output. Participants clearly favoured the HMI with additional speech-based output, demonstrating the potential of speech to enhance the usefulness and acceptance of automated vehicles.

  1. Worldwide Warehouse: A Customer Perspective

    Science.gov (United States)

    1994-09-01

    Management Office (PMO) and the customers (returnees and buyers) will be developed or adapted from existing software programs. The hardware could be... customer requirements and desires is the first aspect to be approached. Sections 4.7 to 4.11 were dedicated to investigating those relationships and...

  2. Information Architecture: The Data Warehouse Foundation.

    Science.gov (United States)

    Thomas, Charles R.

    1997-01-01

    Colleges and universities are initiating data warehouse projects to provide integrated information for planning and reporting purposes. A survey of 40 institutions with active data warehouse projects reveals the kinds of tools, contents, data cycles, and access currently used. Essential elements of an integrated information architecture are…

  3. Integrating Data Warehouses with Web Data

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Berlanga, Rafael; Aramburu, Maria Jose

    This paper surveys the most relevant research on combining Data Warehouse (DW) and Web data. It studies the XML technologies that are currently being used to integrate, store, query and retrieve web data, and their application to data warehouses. The paper addresses the problem of integrating...

  4. Promotion bureau warehouse system design. Case study in University of AA

    Science.gov (United States)

    Parwati, N.; Qibtiyah, M.

    2017-12-01

    The warehouse is one of the important parts of an industry. With a good warehousing system, an industry can improve the effectiveness of its performance, so that company profits can continue to increase, whereas a poorly organized warehouse system risks decreasing the effectiveness of the industry itself. In this research, the object was the warehousing system in the promotion bureau of University AA. To improve the effectiveness of the warehousing system, the warehouse layout was designed by specifying categories of goods based on the flow of goods in and out of the warehouse, using the ABC analysis method. In addition, an information system was designed to assist in controlling the warehouse and to support demand from every bureau and department in the university.
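
    The ABC step itself is easy to sketch. The snippet below, a minimal sketch assuming item throughput (flow of goods in and out) is already tallied, classifies items by their cumulative share of total movement; the 80%/95% cut-offs and the sample figures are conventional illustrative values, not data from the study.

```python
def abc_classify(throughput, cutoffs=(0.80, 0.95)):
    """Classify items A/B/C by cumulative share of total throughput."""
    total = sum(throughput.values())
    ranked = sorted(throughput.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for item, flow in ranked:
        cumulative += flow / total
        classes[item] = ("A" if cumulative <= cutoffs[0]
                         else "B" if cumulative <= cutoffs[1] else "C")
    return classes

# Monthly in/out movements per item (hypothetical figures).
moves = {"banners": 950, "flyers": 730, "pens": 210, "mugs": 60, "stands": 25}
print(abc_classify(moves))  # A-items belong nearest the warehouse exit
```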

  5. DICOM Data Warehouse: Part 2.

    Science.gov (United States)

    Langer, Steve G

    2016-06-01

    In 2010, the DICOM Data Warehouse (DDW) was launched as a data warehouse for DICOM meta-data. Its chief design goals were to have a flexible database schema that enabled it to index standard patient and study information and modality-specific tags (public and private), and to create a framework to derive computable information (derived tags) from the former items. Furthermore, it was to map the above information to an internally standard lexicon that enables a non-DICOM-savvy programmer to write standard SQL queries and retrieve the equivalent data from a cohort of scanners, regardless of which tag that data element was found in over the changing epochs of DICOM and the ensuing migration of elements from private to public tags. After 5 years, the original design has scaled astonishingly well. Very little has changed in the database schema. The knowledge base is now fluent in over 90 device types. Also, additional stored procedures have been written to compute data that are derivable from standard or mapped tags. Finally, an early concern, that the system would not be able to address the variability of DICOM-SR objects, has been addressed. As of this writing, the system is indexing 300 MR, 600 CT, and 2000 other (XA, DR, CR, MG) imaging studies per day. The only remaining issue to be solved is the case of tags that were not prospectively indexed; and indeed, this final challenge may lead to a noSQL, big data approach in a subsequent version.
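
    The abstract describes the schema only at a high level, but the tag-to-lexicon mapping idea can be shown with a toy relational example. The table layout, the private tag, and the query below are invented for illustration; only (0018,0087), the public Magnetic Field Strength element, is a real DICOM tag.

```python
import sqlite3

# Toy version of the DDW idea: raw DICOM tags from different scanner
# types are mapped to one internal lexicon term, so a query names the
# concept rather than the tag. Table and column names are hypothetical.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE tag_map(device TEXT, dicom_tag TEXT, lexicon_term TEXT);
CREATE TABLE meta(study_id TEXT, device TEXT, dicom_tag TEXT, value TEXT);
""")
db.executemany("INSERT INTO tag_map VALUES (?,?,?)", [
    ("vendorA_MR", "(0018,0087)", "field_strength"),   # public tag
    ("vendorB_MR", "(0019,10B0)", "field_strength"),   # invented private tag
])
db.executemany("INSERT INTO meta VALUES (?,?,?,?)", [
    ("S1", "vendorA_MR", "(0018,0087)", "1.5"),
    ("S2", "vendorB_MR", "(0019,10B0)", "3.0"),
])
# A non-DICOM-savvy SQL query: one concept across both scanner types.
rows = db.execute("""
    SELECT m.study_id, m.value
    FROM meta m JOIN tag_map t
      ON m.device = t.device AND m.dicom_tag = t.dicom_tag
    WHERE t.lexicon_term = 'field_strength'
""").fetchall()
print(rows)  # [('S1', '1.5'), ('S2', '3.0')]
```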

  6. How smart is your BEOL? productivity improvement through intelligent automation

    Science.gov (United States)

    Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony

    2017-07-01

    The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps, which are inspection, disposition, photomask repair and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data and make decisions based on the results. No matter how experienced operators are and how well the systems perform, there is one aspect that always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly harmless slip-ups to mistakes with serious and direct economic impact, including mask rejects, customer returns and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore, the mask shop BEOL cannot run in the most efficient manner, as unnecessary time and money are spent on processes that remain labor-intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur at each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow customers to use tailored solutions. To ...

  7. Ontology-Based Big Dimension Modeling in Data Warehouse Schema Design

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem

    2013-01-01

    During data warehouse schema design, designers often encounter the problem of how to model big dimensions that typically contain a large number of attributes and records. Investigating effective approaches for modeling big dimensions is necessary in order to achieve better query performance, with respect...... partitioning, vertical partitioning and their hybrid. We formalize the design methods and propose an algorithm that describes the modeling process from an OWL ontology to a data warehouse schema. In addition, this paper also presents an effective ontology-based tool to automate the modeling process. The tool...... can automatically generate the data warehouse schema from the ontology describing the terms and business semantics of the big dimension. In case of any change in the requirements, we only need to modify the ontology and re-generate the schema using the tool. This paper also evaluates the proposed...
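
    As a rough illustration of what such a tool generates, the sketch below derives DDL for a vertically partitioned big dimension from a plain attribute description standing in for the OWL ontology. The dimension name, attribute lists, and partitioning choice are invented; the paper's algorithm works on actual OWL constructs.

```python
# Toy generator: split a big 'customer' dimension vertically, putting
# frequently changing, low-cardinality attributes into a mini-dimension.
# The attribute lists stand in for what the OWL ontology would declare.
stable_attrs   = ["name", "address", "city", "country"]
volatile_attrs = ["age_band", "income_band", "credit_band"]

def make_schema(dim, stable, volatile):
    main = (f"CREATE TABLE {dim}_dim (\n  {dim}_key INTEGER PRIMARY KEY,\n"
            + ",\n".join(f"  {a} TEXT" for a in stable) + "\n);")
    mini = (f"CREATE TABLE {dim}_mini_dim (\n  {dim}_mini_key INTEGER PRIMARY KEY,\n"
            + ",\n".join(f"  {a} TEXT" for a in volatile) + "\n);")
    return main, mini

for ddl in make_schema("customer", stable_attrs, volatile_attrs):
    print(ddl, "\n")
```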

  8. Model Data Warehouse dan Business Intelligence untuk Meningkatkan Penjualan pada PT. S

    Directory of Open Access Journals (Sweden)

    Rudy Rudy

    2011-06-01

    Full Text Available Today a lot of companies use information systems in every business activity. Every transaction is stored electronically in the transaction database. The transactional database does little to assist executives in making strategic decisions to improve the company's competitiveness. The objective of this research is to analyze the operational database system and the information needed by management in order to design a data warehouse model which fits the executive information needs of PT. S. The research method uses Ralph Kimball's nine-step data warehouse design methodology. The result is a data warehouse featuring business intelligence applications that display information from historical data in tables, graphs, pivot tables, and dashboards, with several points of view for management. This research concludes that a data warehouse which combines multiple transaction databases with a business intelligence application can help executives understand the reports in order to accelerate decision-making processes.

  9. SU-D-BRD-03: Improving Plan Quality with Automation of Treatment Plan Checks

    International Nuclear Information System (INIS)

    Covington, E; Younge, K; Chen, X; Lee, C; Matuszak, M; Kessler, M; Acosta, E; Orow, A; Filpansick, S; Moran, J; Keranen, W

    2015-01-01

    Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and to standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and also patient delays were tracked in order to prioritize which checks should be automated. The most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated with data extracted with the PCT. These include checks for prescription, reference point and machine scheduling errors which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827

  10. SU-D-BRD-03: Improving Plan Quality with Automation of Treatment Plan Checks

    Energy Technology Data Exchange (ETDEWEB)

    Covington, E; Younge, K; Chen, X; Lee, C; Matuszak, M; Kessler, M; Acosta, E; Orow, A; Filpansick, S; Moran, J [University of Michigan Hospital and Health System, Ann Arbor, MI (United States); Keranen, W [Varian Medical Systems, Palo Alto, CA (United States)

    2015-06-15

    Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and to standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and also patient delays were tracked in order to prioritize which checks should be automated. The most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated with data extracted with the PCT. These include checks for prescription, reference point and machine scheduling errors which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827.
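
    PCT itself is built on the proprietary Eclipse Scripting API, so the snippet below only sketches the comparison pattern behind checks such as the prescribed dose check, in framework-neutral Python with invented field names.

```python
# Framework-neutral sketch of automated plan checks: flag any
# disagreement between TPS and TMS values. Field names are invented;
# the real PCT reads these through the Eclipse Scripting API.
CHECKS = [
    ("prescribed_dose_cGy", "Prescribed dose differs between TPS and TMS"),
    ("fractions",           "Fraction count differs between TPS and TMS"),
    ("machine",             "Scheduled machine differs between TPS and TMS"),
]

def run_plan_checks(tps_plan: dict, tms_plan: dict) -> list[str]:
    flags = []
    for field, message in CHECKS:
        if tps_plan.get(field) != tms_plan.get(field):
            flags.append(f"{message}: "
                         f"{tps_plan.get(field)!r} vs {tms_plan.get(field)!r}")
    return flags

tps = {"prescribed_dose_cGy": 6000, "fractions": 30, "machine": "TB1"}
tms = {"prescribed_dose_cGy": 6000, "fractions": 30, "machine": "TB2"}
for flag in run_plan_checks(tps, tms):
    print("FLAG:", flag)   # would be written into the uploaded PDF report
```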

  11. Web-enabled Data Warehouse and Data Webhouse

    Directory of Open Access Journals (Sweden)

    Cerasela PIRVU

    2008-01-01

    Full Text Available In this paper, our objectives are to understand what a data warehouse means, examine the reasons for building one, appreciate the implications of the convergence of Web technologies and those of the data warehouse, and examine the steps for building a Web-enabled data warehouse. The web revolution has propelled the data warehouse out onto the main stage, because in many situations the data warehouse must be the engine that controls or analyzes the web experience. In order to step up to this new responsibility, the data warehouse must adjust; its nature needs to be somewhat different. As a result, our data warehouses are becoming data webhouses. The data warehouse is becoming the infrastructure that supports customer relationship management (CRM), and it is being asked to make the customer clickstream available for analysis. This rebirth of data warehousing architecture is called the data webhouse.

  12. Subcritical calculation of the nuclear material warehouse

    International Nuclear Information System (INIS)

    Garcia M, T.; Mazon R, R.

    2009-01-01

    In this work, the subcritical calculation of the nuclear material warehouse of the TRIGA Mark III reactor labyrinth in the Mexico Nuclear Center is presented. During the adaptation of the nuclear warehouse (vault I), the fuel was temporarily moved to the warehouse (vault II), and the subcritical calculation was also carried out for this temporary arrangement. The code used to calculate the effective multiplication factor was the Monte Carlo N-Particle eXtended (MCNPX) particle transport code, developed by Los Alamos National Laboratory. (Author)

  13. Building a Data Warehouse step by step

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouses have been developed to answer the increasing demands for quality information from the top managers and economic analysts of organizations. Their importance in today's business environment is unanimously recognized, as they are the foundation for developing business intelligence systems. Data warehouses offer support for the decision-making process, allowing complex analyses which cannot be properly achieved from operational systems. This paper presents the ways in which a data warehouse may be developed and the stages of building it.

  14. INNOVATIVE DEVELOPMENT OF WAREHOUSE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Judit OLÁH

    2017-12-01

    Full Text Available The smooth operation of stocking and the warehouse play a very important role in all manufacturing companies; therefore ongoing monitoring and application of new techniques is essential to increase efficiency. The aim of our research is twofold: the utilization of the pallet shuttle racking system, and the introduction of a development opportunity by the merging of storage and order picking operations in the pallet shuttle system. It can be concluded that it is beneficial for the company to purchase two mobile cars in order to increase the utilization of the pallet shuttle racking system from 60% to 72% and that of the storage from 74% to 76%. We established that after the merging of the storage and order picking activities within the pallet shuttle system, the forklift driver can also complete the selection activities immediately after storage. By merging the two operations and saving time the number of forklift drivers can be reduced from 4 to 3 per shift.

  15. Análisis de rendimiento académico estudiantil usando data warehouse y redes neuronales / Analysis of students' academic performance using data warehouse and neural networks

    Directory of Open Access Journals (Sweden)

    Carolina Zambrano Matamala

    2011-12-01

    Full Text Available Organizations have ever more information because their systems produce a large number of daily operations that are stored in transactional databases. In order to analyze this historical information, an interesting alternative is to implement a data warehouse. On the other hand, data warehouses are not able to perform predictive analysis by themselves, but machine learning techniques can be used to classify, group, and predict based on historical information in order to improve the quality of the analysis. This paper describes a data warehouse architecture for analyzing students' academic performance. The data warehouse is used as the input of a neural network architecture in order to analyze historical information and trends over time. The results show the viability of using a data warehouse for academic performance analysis and the feasibility of predicting the number of courses passed by students using only their own historical information.
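
    As a rough sketch of the prediction step, the snippet below trains a small neural network on a synthetic stand-in for a warehouse extract; scikit-learn's MLPRegressor and the three-term feature layout are assumptions, not the architecture used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for a data warehouse extract: per student, courses
# passed in the three previous terms; target is next term's count.
rng = np.random.default_rng(0)
history = rng.integers(0, 7, size=(200, 3)).astype(float)
passed_next = history.mean(axis=1) + rng.normal(0, 0.5, 200)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(history[:150], passed_next[:150])
print("held-out R^2:", round(model.score(history[150:], passed_next[150:]), 2))
```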

  16. Evaluation of an improved technique for automated center lumen line definition in cardiovascular image data

    International Nuclear Information System (INIS)

    Gratama van Andel, Hugo A.F.; Meijering, Erik; Vrooman, Henri A.; Stokking, Rik; Lugt, Aad van der; Monye, Cecile de

    2006-01-01

    The aim of the study was to evaluate a new method for automated definition of a center lumen line in vessels in cardiovascular image data. This method, called VAMPIRE, is based on improved detection of vessel-like structures. A multiobserver evaluation study was conducted involving 40 tracings in clinical CTA data of carotid arteries to compare VAMPIRE with an established technique. This comparison showed that VAMPIRE yields considerably more successful tracings and improved handling of stenosis, calcifications, multiple vessels, and nearby bone structures. We conclude that VAMPIRE is highly suitable for automated definition of center lumen lines in vessels in cardiovascular image data. (orig.)

  17. Improving reticle defect disposition via fully automated lithography simulation

    Science.gov (United States)

    Mann, Raunak; Goodman, Eliot; Lao, Keith; Ha, Steven; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan

    2016-03-01

    Most advanced wafer fabs have embraced complex pattern decoration, which creates numerous challenges during in-fab reticle qualification. These optical proximity correction (OPC) techniques create assist features that tend to be very close in size and shape to the main patterns, as seen in Figure 1. A small defect on an assist feature will most likely have little or no impact on the fidelity of the wafer image, whereas the same defect on a main feature could significantly decrease device functionality. In order to properly disposition these defects, reticle inspection technicians need an efficient method that automatically separates main from assist features and predicts the resulting defect impact on the wafer image; this is the role of the ADAS defect simulation system [1]. Up until now, use of ADAS simulation was limited to engineers due to the complexity of the settings that need to be manually entered in order to create an accurate result. A single error in entering one of these values can cause erroneous results; therefore, full automation is necessary. In this study, we propose a new method where all needed simulation parameters are automatically loaded into ADAS. This is accomplished in two parts. First, we have created a scanner parameter database that is automatically identified from mask product and level names. Second, we automatically determine the appropriate simulation printability threshold by using a new reference image (provided by the inspection tool) that contains a known measured value of the reticle critical dimension (CD). This new method automatically loads the correct scanner conditions, sets the appropriate simulation threshold, and automatically measures the percentage of CD change caused by the defect. This streamlines qualification and reduces the number of reticles being put on hold waiting for engineer review. We also present data showing the consistency and reliability of the new method, along with the impact on the efficiency of in ...

  18. Konsolidasi Data Warehouse untuk Aplikasi Business Intelligence

    Directory of Open Access Journals (Sweden)

    Rudy Rudy

    2012-12-01

    Full Text Available As business competition is getting stronger, corporate leaders need complete data as a basis for determining future business strategies. This also holds for the management of company "A", a pharmaceutical company which has three distribution companies. Each distribution company already has a data warehouse to generate its own reports. For business operations and corporate strategy, the chairman of PT "A" requires an integrated report, so that analysis of the data owned by the three distribution companies can be done in a full report to answer the problems faced by the management. Thus, data warehouse consolidation can be used as a solution for company "A". The methodology starts with an analysis of the information needs to be displayed in the business intelligence application, followed by data warehouse consolidation, ETL (extract, transform and load), data warehousing, OLAP and dashboards. Using data warehouse consolidation, information access by the management of company "A" can be done in a single presentation, which can display data comparisons between the three distribution companies.

  19. Protecting privacy in a clinical data warehouse.

    Science.gov (United States)

    Kong, Guilan; Xiao, Zhichun

    2015-06-01

    Peking University has several prestigious teaching hospitals in China. To make secondary use of massive medical data for research purposes, construction of a clinical data warehouse is imperative at Peking University. However, a big concern in clinical data warehouse construction is how to protect patient privacy. In this project, we propose to use a combination of symmetric block ciphers, asymmetric ciphers, and cryptographic hashing algorithms to protect patient privacy information. The novelty of our privacy protection approach lies in message-level data encryption, the key caching system, and the cryptographic key management system. The proposed privacy protection approach is scalable to clinical data warehouse construction with any size of medical data. With the composite privacy protection approach, the clinical data warehouse can be secure enough to keep the confidential data from leaking to the outside world. © The Author(s) 2014.
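
    The paper does not disclose its exact construction, but the three primitive classes it names combine naturally into a hybrid scheme. The sketch below, using Python's cryptography package and hashlib, shows one plausible arrangement: a hash for pseudonymous linkage, a symmetric cipher for message-level payloads, and an asymmetric cipher to wrap the symmetric key for the key management system.

```python
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# 1) Cryptographic hash: stable pseudonymous patient key for linkage.
patient_id = b"PKU-000123"                       # hypothetical identifier
pseudonym = hashlib.sha256(patient_id).hexdigest()

# 2) Symmetric cipher: message-level encryption of the payload.
sym_key = Fernet.generate_key()
token = Fernet(sym_key).encrypt(b"diagnosis=I10; name=REDACTED")

# 3) Asymmetric cipher: wrap the symmetric key so that only the
#    warehouse's private key can unwrap it.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_key = private_key.public_key().encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(pseudonym[:16], len(token), len(wrapped_key))
```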

  20. Data Warehouse Architecture for Army Installations

    National Research Council Canada - National Science Library

    Reddy, Prameela

    1999-01-01

    .... A data warehouse is a single store of information to answer complex queries from management using cross-functional data to perform advanced data analysis methods and to compare with historical data...

  1. Designing a Data Warehouse for Cyber Crimes

    Directory of Open Access Journals (Sweden)

    Il-Yeol Song

    2006-09-01

    Full Text Available One of the greatest challenges facing modern society is the rising tide of cyber crimes. These crimes, since they rarely fit the model of conventional crimes, are difficult to investigate, hard to analyze, and difficult to prosecute. Collecting data in a unified framework is a mandatory step that will assist the investigator in sorting through the mountains of data. In this paper, we explore designing a dimensional model for a data warehouse that can be used in analyzing cyber crime data. We also present some interesting queries and the types of cyber crime analyses that can be performed based on the data warehouse. We discuss several ways of utilizing the data warehouse using OLAP and data mining technologies. We finally discuss legal issues and data population issues for the data warehouse.

  2. Data Warehouse Discovery Framework: The Foundation

    Science.gov (United States)

    Apanowicz, Cas

    The cost of building an Enterprise Data Warehouse Environment usually runs into millions of dollars and takes years to complete. The cost, as big as it is, is not the primary problem for a given corporation. The risk that all the money allocated for planning, design and implementation of the Data Warehouse and Business Intelligence Environment may not bring the expected result far outweighs the cost of the entire effort [2,10]. The combination of the two factors above is the main reason that Data Warehouse/Business Intelligence is often the single most expensive and most risky IT endeavor for companies [13]. That situation was the author's main inspiration behind the founding of Infobright Corp and, later on, the concept of the Data Warehouse Discovery Framework.

  3. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse, based on a unified reference model for XML warehouses and featuring XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  4. Improving automated disturbance maps using snow-covered landsat time series stacks

    Science.gov (United States)

    Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen

    2012-01-01

    Snow-covered winter Landsat time series stacks are used to develop a nonforest mask to enhance automated disturbance maps produced by the Vegetation Change Tracker (VCT). This method exploits the enhanced spectral separability between forested and nonforested areas that occurs with sufficient snow cover. This method resulted in significant improvements in Vegetation...

  5. Improved production operating efficiencies through automation: Wascana Energy's SCADA system implementation in southeast Saskatchewan

    Energy Technology Data Exchange (ETDEWEB)

    Knudsen, R; Foord, T; Bartle, A

    1996-12-31

    Supervisory control and data acquisition (SCADA) systems covering Wascana Energy's whole southeast Saskatchewan operating area were implemented in 1994-95. The benefits of this automation were described. Operations practices were reviewed and a brief description of the system was provided. Main features of the system described included data storage/retrieval, data display, alarm group organization, alarm call out monitoring, dynagraph display, and the Microsoft SQL Server computer. Automation was found to significantly change the operator's traditional role and altered operation practices in general. SCADA systems were found to improve operating efficiencies and production performance significantly, when properly implemented and utilized. 6 refs., 3 figs.

  6. Improving Automation Routines for Automatic Heating Load Detection in Buildings

    Directory of Open Access Journals (Sweden)

    Stephen Timlin

    2012-11-01

    Full Text Available Energy managers use weather compensation data and heating system cut-off routines to reduce heating energy consumption in buildings and improve user comfort. These routines are traditionally based on the calculation of an estimated building load inferred from the external dry bulb temperature at any point in time. While this method does reduce heating energy consumption and accidental overheating, it can be inaccurate under some weather conditions and therefore has limited effectiveness. There remains considerable scope to improve on the accuracy and relevance of the traditional method by expanding the calculations to include a larger range of environmental metrics. It is proposed that the commonly used weather compensation and automatic shut-off routines could be improved notably, at little additional cost, by the inclusion of additional weather metrics. This paper examines the theoretical relationship between various external metrics and building heating loads. Results of the application of an advanced routine to a recently constructed building are examined, and estimates are made of the potential savings that can be achieved through the use of the proposed routines.
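
    A compensation curve extended with one additional metric is straightforward to sketch. In the snippet below, the slope construction, the wind-speed correction, and all coefficients are illustrative assumptions, not the routine proposed in the paper.

```python
def flow_setpoint(outdoor_c, wind_ms=0.0, design_out=-5.0, design_flow=70.0,
                  room_c=20.0, wind_gain=0.4):
    """Heating flow-temperature setpoint from outdoor temperature,
    with a simple wind-speed correction (illustrative coefficients)."""
    if outdoor_c >= room_c:          # automatic cut-off: no heating load
        return None
    slope = (design_flow - room_c) / (room_c - design_out)
    return room_c + slope * (room_c - outdoor_c) + wind_gain * wind_ms

print(flow_setpoint(0.0))             # dry-bulb only: 60.0 C
print(flow_setpoint(0.0, wind_ms=8))  # same day, windy: higher setpoint
```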

  7. [Peranesthesic Anaphylactic Shocks: Contribution of a Clinical Data Warehouse].

    Science.gov (United States)

    Osmont, Marie-Noëlle; Campillo-Gimenez, Boris; Metayer, Lucie; Jantzem, Hélène; Rochefort-Morel, Cécile; Cuggia, Marc; Polard, Elisabeth

    2015-10-16

    The aim was to evaluate the performance of the collection of cases of anaphylactic shock during anesthesia by the Regional Pharmacovigilance Center of Rennes, and the contribution of a query in the biomedical data warehouse of the French University Hospital of Rennes, in 2009. Different sources were evaluated: the French pharmacovigilance database (including spontaneous reports and reports from a query in the database of the programme de médicalisation des systèmes d'information [PMSI]), records of patients seen in allergo-anesthesia (the source considered as comprehensive as possible), and a query in the data warehouse. Analysis of the allergo-anesthesia records detected all cases identified by the other methods, as well as two additional cases (nine cases in total). The query in the data warehouse enabled detection of seven of the nine cases. Querying full-text reports and structured data extracted from the hospital information system improves the detection of anaphylaxis during anesthesia and facilitates access to data. © 2015 Société Française de Pharmacologie et de Thérapeutique.

  8. Improved automated perimetry performance in elderly subjects after listening to Mozart

    Directory of Open Access Journals (Sweden)

    Junia Cabral Marques

    2009-01-01

    Full Text Available PURPOSE: To evaluate the performance of automated perimetry of elderly subjects naïve to AP after listening to a Mozart sonata. INTRODUCTION: Automated perimetry (AP) is a psychophysical test used to assess visual fields in patients with neurological disorders and glaucoma. In a previous study, Fiorelli et al. showed that young subjects who listened to a Mozart sonata prior to undergoing AP performed better in terms of reliability than those who did not listen to the sonata. METHODS: Fifty-two AP-naïve, normal subjects underwent automated perimetry (SITA 24-2). The study group (25 subjects) underwent AP after listening to Mozart's Sonata for Two Pianos in D Major, and the control group (27 subjects) underwent automated perimetry without prior exposure to the music. RESULTS: The study group had significantly lower false negative rates and a lower visual field reliability score than the controls (P=0.04 and P=0.04, respectively). The test time was shorter for the study group (P=0.03). DISCUSSION: This study shows that elderly subjects, when exposed to the Mozart sonata immediately before AP testing, have lower false negative rates and lower visual field reliability scores when compared with an age- and gender-matched control group. Our results differ from those of Fiorelli et al. who found lower false positive rates and less fixation loss in addition to lower false negative rates. CONCLUSION: Listening to a Mozart sonata seems to improve automated perimetry reliability in elderly subjects.

  9. Improved automated perimetry performance in elderly subjects after listening to Mozart.

    Science.gov (United States)

    Marques, Junia Cabral; Vanessa, Adriana Chaves Oliveira; Fiorelli, Macedo Batista; Kasahara, Niro

    2009-01-01

    To evaluate the performance of automated perimetry of elderly subjects naïve to AP after listening to a Mozart sonata. Automated perimetry (AP) is a psychophysical test used to assess visual fields in patients with neurological disorders and glaucoma. In a previous study, Fiorelli et al. showed that young subjects who listened to a Mozart sonata prior to undergoing AP performed better in terms of reliability than those who did not listen to the sonata. Fifty-two AP-naïve, normal subjects underwent Automated perimetry (SITA 24-2). The study group (25 subjects) underwent AP after listening to Mozart's Sonata for Two Pianos in D Major, and the control group (27 subjects) underwent Automated perimetry without prior exposure to the music. The study group had significantly lower false negative rates and a lower visual field reliability score than the controls (P=0.04 and P=0.04, respectively). The test time was shorter for the study group (P=0.03). This study shows that elderly subjects, when exposed to the Mozart sonata immediately before AP testing, have lower false negative rates and lower visual field reliability scores when compared with an age- and gender-matched control group. Our results differ from those of Fiorelli et al. who found lower false positive rates and less fixation loss in addition to lower false negative rates. Listening to a Mozart sonata seems to improve automated perimetry reliability in elderly subjects.

  10. Work prioritization by using data warehouse solution; Priorizacao de obras usando solucao de data warehouse

    Energy Technology Data Exchange (ETDEWEB)

    Grupelli Junior, Fernando Antonio; Azoni, Edivar Garcia [Companhia Paranaense de Energia (COPEL), Curitiba, PR (Brazil)

    2000-07-01

    This work proposes the use of data warehouse technology to help gather adequate and reliable information and to allow the calculation of cost-benefit ratios for works on the primary distribution network. The paper also suggests a better integration and utilization of the possibilities of a data warehouse and its future integration with a geoprocessing system.

  11. A methodology to determine the level of automation to improve the production process and reduce the ergonomics index

    Science.gov (United States)

    Chan-Amaya, Alejandro; Anaya-Pérez, María Elena; Benítez-Baltazar, Víctor Hugo

    2017-08-01

    Companies are constantly looking for productivity improvements to increase their competitiveness. The use of automation technologies is a tool that has been proven effective in achieving this. There are companies that are not familiar with the process of acquiring automation technologies and therefore abstain from investment, missing the opportunity to take advantage of it. The present document proposes a methodology to determine the level of automation appropriate for the production process, in order to improve production while taking the ergonomics factor into consideration.

  12. Developing a standardized healthcare cost data warehouse.

    Science.gov (United States)

    Visscher, Sue L; Naessens, James M; Yawn, Barbara P; Reinalda, Megan S; Anderson, Stephanie S; Borah, Bijan J

    2017-06-12

    Research addressing value in healthcare requires a measure of cost. While there are many sources and types of cost data, each has strengths and weaknesses. Many researchers appear to create study-specific cost datasets, but the explanations of their costing methodologies are not always clear, causing their results to be difficult to interpret. Our solution, described in this paper, was to use widely accepted costing methodologies to create a service-level, standardized healthcare cost data warehouse from an institutional perspective that includes all professional and hospital-billed services for our patients. The warehouse is based on a National Institutes of Health-funded research infrastructure containing the linked health records and medical care administrative data of two healthcare providers and their affiliated hospitals. Since all patients are identified in the data warehouse, their costs can be linked to other systems and databases, such as electronic health records, tumor registries, and disease or treatment registries. We describe the two institutions' administrative source data; the reference files, which include Medicare fee schedules and cost reports; the process of creating standardized costs; and the warehouse structure. The costing algorithm can create inflation-adjusted standardized costs at the service-line level for defined study cohorts on request. The resulting standardized costs contained in the data warehouse can be used to create detailed, bottom-up analyses of professional and facility costs of procedures, medical conditions, and patient care cycles without revealing business-sensitive information. After its creation, a standardized cost data warehouse is relatively easy to maintain and can be expanded to include data from other providers. Individual investigators who may not have sufficient knowledge about administrative data do not have to try to create their own standardized costs on a project-by-project basis because our data ...
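
    The costing algorithm is not printed in the abstract, but the service-level standardization it describes reduces to billed units times a reference price, inflation-adjusted to a base year. The CPT codes below are real, while every price and inflation factor is invented for illustration.

```python
# Standardized cost = units x reference price (e.g., a Medicare fee
# schedule), inflated to a common base year. All numbers are invented.
FEE_SCHEDULE = {"99213": 74.50, "80053": 10.60}      # CPT -> reference price
CPI_TO_2016  = {2014: 1.03, 2015: 1.01, 2016: 1.00}  # inflation factors

def standardized_cost(claims):
    total = 0.0
    for cpt, units, year in claims:
        total += units * FEE_SCHEDULE[cpt] * CPI_TO_2016[year]
    return round(total, 2)

claims = [("99213", 2, 2014), ("80053", 3, 2015)]    # (code, units, year)
print(standardized_cost(claims))
```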

  13. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  14. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    Science.gov (United States)

    Xiao, Xiaoxu

    2018-04-01

    In view of the various orders of FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise; the warehouse logistics are thus optimized and the external processing speed of orders is improved. In addition, the relevant intelligent algorithms for optimizing the stocking-route problem are analyzed. The ant colony algorithm and the genetic algorithm, which have good applicability, are studied in depth. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to prove the effectiveness of the parameter optimization.
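
    A condensed sketch of the hybrid is shown below: a small ant colony solves a toy picking-route problem, and a simple genetic algorithm searches over the colony's (alpha, beta, rho) parameters. The instance size, operator choices, and parameter ranges are illustrative, not the paper's settings.

```python
import random

random.seed(1)
N = 8                                             # picking locations
pts = [(random.random(), random.random()) for _ in range(N)]
dist = [[((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5 or 1e-9 for b in pts]
        for a in pts]

def aco(alpha, beta, rho, ants=10, iters=30):
    """Ant colony for a closed picking tour; returns best tour length."""
    tau = [[1.0] * N for _ in range(N)]
    best, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            tour, seen = [0], {0}
            while len(tour) < N:
                i = tour[-1]
                cand = [j for j in range(N) if j not in seen]
                w = [tau[i][j] ** alpha * (1 / dist[i][j]) ** beta for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt); seen.add(nxt)
            length = sum(dist[tour[k]][tour[(k + 1) % N]] for k in range(N))
            if length < best_len:
                best, best_len = tour, length
        for i in range(N):                        # evaporation
            for j in range(N):
                tau[i][j] *= 1 - rho
        for k in range(N):                        # reinforce global best
            a, b = best[k], best[(k + 1) % N]
            tau[a][b] += 1.0 / best_len
    return best_len

LO, HI = (0.5, 1.0, 0.05), (3.0, 6.0, 0.95)       # (alpha, beta, rho) bounds

def ga_tune(pop=6, gens=5):
    """Genetic algorithm over ACO parameters: select, average, mutate."""
    params = [tuple(random.uniform(l, h) for l, h in zip(LO, HI))
              for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(params, key=lambda p: aco(*p))[: pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            p, q = random.sample(parents, 2)
            children.append(tuple(
                min(max((x + y) / 2 + random.gauss(0, 0.05), l), h)
                for x, y, l, h in zip(p, q, LO, HI)))
        params = parents + children
    return min(params, key=lambda p: aco(*p))

print("tuned (alpha, beta, rho):", ga_tune())
```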

  15. Establishment of the Integrated Plant Data Warehouse

    International Nuclear Information System (INIS)

    Oota, Yoshimi; Yoshinaga, Toshiaki

    1999-01-01

    This paper presents 'The Establishment of the Integrated Plant Data Warehouse and Verification Tests on Inter-corporate Electronic Commerce based on the Data Warehouse (PDWH)', one of the 'Shared Infrastructure for the Electronic Commerce Consolidation Project', promoted by the Ministry of International Trade and Industry (MITI) through the Information-Technology Promotion Agency (IPA), Japan. A study group called Japan Plant EC (PlantEC) was organized to perform relevant activities. One of the main activities of PlantEC involves the construction of the Integrated (including manufacturers, engineering companies, plant construction companies, and machinery and parts manufacturers, etc.) Data Warehouse, which is an essential part of the infrastructure necessary for a system to share information on the industrial life cycle, ranging from planning/designing to operation/maintenance. Another activity is the utilization of this warehouse for the purpose of conducting verification tests to prove its usefulness. Through these verification tests, PlantEC will endeavor to establish a warehouse with standardized data which can be used for the infrastructure of EC in the process plant industry. (author)

  16. Establishment of the Integrated Plant Data Warehouse

    Energy Technology Data Exchange (ETDEWEB)

    Oota, Yoshimi; Yoshinaga, Toshiaki [Hitachi Works, Hitachi Ltd., hitachi, Ibaraki (Japan)

    1999-07-01

    This paper presents 'The Establishment of the Integrated Plant Data Warehouse and Verification Tests on Inter-corporate Electronic Commerce based on the Data Warehouse (PDWH)', one of the 'Shared Infrastructure for the Electronic Commerce Consolidation Project', promoted by the Ministry of International Trade and Industry (MITI) through the Information-Technology Promotion Agency (IPA), Japan. A study group called Japan Plant EC (PlantEC) was organized to perform relevant activities. One of the main activities of PlantEC involves the construction of the Integrated (including manufacturers, engineering companies, plant construction companies, and machinery and parts manufacturers, etc.) Data Warehouse, which is an essential part of the infrastructure necessary for a system to share information on the industrial life cycle, ranging from planning/designing to operation/maintenance. Another activity is the utilization of this warehouse for the purpose of conducting verification tests to prove its usefulness. Through these verification tests, PlantEC will endeavor to establish a warehouse with standardized data which can be used for the infrastructure of EC in the process plant industry. (author)

  17. An Advanced Pre-Processing Pipeline to Improve Automated Photogrammetric Reconstructions of Architectural Scenes

    Directory of Open Access Journals (Sweden)

    Marco Gaiani

    2016-02-01

    Full Text Available Automated image-based 3D reconstruction methods are increasingly flooding our 3D modeling applications. Fully automated solutions give the impression that from a sample of randomly acquired images we can derive quite impressive visual 3D models. Although the level of automation is reaching very high standards, image quality is a fundamental pre-requisite for producing successful and photo-realistic 3D products, in particular when dealing with large datasets of images. This article presents an efficient pipeline based on color enhancement, image denoising, color-to-gray conversion and image content enrichment. The pipeline stems from an analysis of various state-of-the-art algorithms and aims to adjust the most promising methods, giving solutions to typical failure causes. The assessment shows how effective image pre-processing, which considers the entire image dataset, can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in the case of poor-texture scenarios.
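
    The article details its own algorithms for each stage (color enhancement, denoising, color-to-gray conversion, content enrichment); as a rough stand-in, the snippet below chains common OpenCV building blocks covering three of those stages. The file name and parameter values are assumptions.

```python
import cv2

# Sketch of a pre-processing chain using common OpenCV operations as
# stand-ins for the paper's exact algorithms.
img = cv2.imread("facade_0001.jpg")              # hypothetical input image

# 1) Denoise while keeping edges usable for feature extraction.
den = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

# 2) Color-to-gray conversion (most feature detectors work on gray).
gray = cv2.cvtColor(den, cv2.COLOR_BGR2GRAY)

# 3) Local contrast enhancement to enrich texture content.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

cv2.imwrite("facade_0001_pre.png", enhanced)     # feed this to orientation/SfM
```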

  18. Automated monitoring: a potential solution for achieving sustainable improvement in hand hygiene practices.

    Science.gov (United States)

    Levchenko, Alexander I; Boscart, Veronique M; Fernie, Geoff R

    2014-08-01

    Adequate hand hygiene is often considered as the most effective method of reducing the rates of hospital-acquired infections, which are one of the major causes of increased cost, morbidity, and mortality in healthcare. Electronic monitoring technologies provide a promising direction for achieving sustainable hand hygiene improvement by introducing the elements of automated feedback and creating the possibility to automatically collect individual hand hygiene performance data. The results of the multiphase testing of an automated hand hygiene reminding and monitoring system installed in a complex continuing care setting are presented. The study included a baseline Phase 1, with the system performing automated data collection only, a preintervention Phase 2 with hand hygiene status indicator enabled, two intervention Phases 3 and 4 with the system generating hand hygiene reminding signals and periodic performance feedback sessions provided, and a postintervention Phase 5 with only hand hygiene status indicator enabled and no feedback sessions provided. A significant increase in hand hygiene performance observed during the first intervention Phase 3 was sustained over the second intervention Phase 4, with the postintervention phase also indicating higher hand hygiene activity rates compared with the preintervention and baseline phases. The overall trends observed during the multiphase testing, the factors affecting acceptability of the automated hand hygiene monitoring system, and various strategies of technology deployment are discussed.

  19. An automated hand hygiene training system improves hand hygiene technique but not compliance.

    Science.gov (United States)

    Kwok, Yen Lee Angela; Callard, Michelle; McLaws, Mary-Louise

    2015-08-01

    The hand hygiene technique that the World Health Organization recommends for cleansing hands with soap and water or alcohol-based handrub consists of 7 poses. We used an automated training system to improve clinicians' hand hygiene technique and test whether this affected hospitalwide hand hygiene compliance. Seven hundred eighty-nine medical and nursing staff volunteered to participate in a self-directed training session using the automated training system. The proportion of successful first attempts was reported for each of the 7 poses. Hand hygiene compliance was collected according to the national requirement and rates for 2011-2014 were used to determine the effect of the training system on compliance. The highest pass rate was for pose 1 (palm to palm) at 77% (606 out of 789), whereas pose 6 (clean thumbs) had the lowest pass rate at 27% (216 out of 789). One hundred volunteers provided feedback to 8 items related to satisfaction with the automated training system and most (86%) expressed a high degree of satisfaction and all reported that this method was time-efficient. There was no significant change in compliance rates after the introduction of the automated training system. Observed compliance during the posttraining period declined but increased to 82% in response to other strategies. Technology for training clinicians in the 7 poses played an important education role but did not affect compliance rates. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  20. Harvesting Information from a Library Data Warehouse

    Directory of Open Access Journals (Sweden)

    Siew-Phek T. Su

    2017-09-01

    Full Text Available Data warehousing technology has been defined by John Ladley as "a set of methods, techniques, and tools that are leveraged together and used to produce a vehicle that delivers data to end users on an integrated platform." (1) This concept has been applied increasingly by industries worldwide to develop data warehouses for decision support and knowledge discovery. In the academic sector, several universities have developed data warehouses containing the universities' financial, payroll, personnel, budget, and student data. (2) These data warehouses across all industries and academia have met with varying degrees of success. Data warehousing technology and its related issues have been widely discussed and published. (3) Little has been done, however, on the application of this cutting-edge technology in the library environment using library data.

  1. Data warehouse til elbilers opladning og elpriser

    DEFF Research Database (Denmark)

    Andersen, Ove; Krogh, Benjamin Bjerre; Torp, Kristian

    This report presents how GPS and CAN bus measurements from the charging of electric vehicles are cleaned of typical errors and stored in a data warehouse. In the data warehouse, the GPS and CAN bus measurements are integrated with prices from the North European electricity spot market Nord Pool Spot. This integration makes it possible... measurements on the charging of electric vehicles are, together with the prices from the electricity spot market, loaded into a data warehouse, which is fully implemented. The logical data model for this data warehouse is presented in detail. The handling of the GPS and CAN bus measurements is generic and can be extended to new data sources...

  2. Order Picking Process in Warehouse: Case Study of Dairy Industry in Croatia

    Directory of Open Access Journals (Sweden)

    Josip Habazin

    2017-02-01

    Full Text Available The proper functioning of warehouse processes is fundamental for operational improvement and overall improvement of the logistic supply chain, and order picking is considered one of the most important of these processes. Order picking in warehouses relies heavily on human work, and the main goal is to reduce the process time to the very minimum. There are several different order picking methods; the most common ones being developed nowadays depend significantly on the type of goods, the warehouse equipment, etc., and those that stand out are scanning and pick-by-voice. This paper provides information regarding the dairy industry in the Republic of Croatia, with an analysis of the order picking process in the observed company. The overall research highlighted the problem and resulted in proposed solutions.

  3. Clinical Data Warehouse: An Effective Tool to Create Intelligence in Disease Management.

    Science.gov (United States)

    Karami, Mahtab; Rahimi, Azin; Shahmirzadi, Ali Hosseini

    Clinical business intelligence tools such as the clinical data warehouse enable health care organizations to objectively assess disease management programs that affect patients' quality of life and public well-being. The purpose of these programs is to reduce disease occurrence, improve patient care, and decrease health care costs. Applying a clinical data warehouse can therefore be effective in generating useful information about aspects of patient care to facilitate budgeting, planning, research, process improvement, external reporting, benchmarking, and trend analysis, and to enable the decisions needed to prevent the progression or appearance of illness while maintaining the health of the population. The aim of this review article is to describe the benefits of clinical data warehouse applications in creating intelligence for disease management programs.

  4. Development of a medical informatics data warehouse.

    Science.gov (United States)

    Wu, Cai

    2006-01-01

    This project built a medical informatics data warehouse (MedInfo DDW) in an Oracle database to analyze medical information collected through the Baylor Family Medicine Clinic (FCM) Logician application. The MedInfo DDW used a star schema with a dimensional model and the FCM database as the operational data store (ODS); data from on-line transaction processing (OLTP) were extracted and transferred to a knowledge-based data warehouse through SQL*Loader, and patient information was analyzed using on-line analytic processing (OLAP) in Crystal Reports.
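
    A minimal star-schema sketch in the spirit of this description: one fact table joined to patient and date dimensions, queried with an OLAP-style rollup. The table layout and names are illustrative, not the actual FCM schema.

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_patient(patient_key INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
CREATE TABLE dim_date(date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_encounter(
  patient_key INTEGER REFERENCES dim_patient,
  date_key    INTEGER REFERENCES dim_date,
  diagnosis   TEXT,
  charge      REAL);
""")
db.execute("INSERT INTO dim_patient VALUES (1, 'F', 1980)")
db.execute("INSERT INTO dim_date VALUES (20060301, 2006, 3)")
db.execute("INSERT INTO fact_encounter VALUES (1, 20060301, 'I10', 120.0)")

# OLAP-style rollup: total charges by year and sex.
print(db.execute("""
  SELECT d.year, p.sex, SUM(f.charge)
  FROM fact_encounter f
  JOIN dim_date d ON f.date_key = d.date_key
  JOIN dim_patient p ON f.patient_key = p.patient_key
  GROUP BY d.year, p.sex
""").fetchall())
```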

  5. What Academia Can Gain from Building a Data Warehouse.

    Science.gov (United States)

    Wierschem, David; McMillen, Jeremy; McBroom, Randy

    2003-01-01

    Describes how, when used effectively, data warehouses can be a significant component of strategic decision making on campus. Discusses what a data warehouse is and what its informational contents may include, environmental drivers and obstacles, and strategies to justify developing a data warehouse for an academic institution. (EV)

  6. 7 CFR 735.302 - Paper warehouse receipts.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Paper warehouse receipts. 735.302 Section 735.302... § 735.302 Paper warehouse receipts. Paper warehouse receipts must be issued as follows: (a) On distinctive paper specified by DACO; (b) Printed by a printer authorized by DACO; and (c) Issued, identified...

  7. Nigerian Concept Of Bonded Warehouses And Dry Ports | Ndikom ...

    African Journals Online (AJOL)

    The bonded warehouse in Nigeria is a strategic expansion of ordinary warehouses that are usually developed in the ports and related cities, strictly meant for safe-keeping of cargoes for owners before final take-over by consignees after payment of some customs duties. A bonded warehouse and ICDs are seen as ...

  8. 27 CFR 24.141 - Bonded wine warehouse.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Bonded wine warehouse. 24..., DEPARTMENT OF THE TREASURY LIQUORS WINE Establishment and Operations Permanent Discontinuance of Operations § 24.141 Bonded wine warehouse. Where all operations at a bonded wine warehouse are to be permanently...

  9. Improved automated lumen contour detection by novel multifrequency processing algorithm with current intravascular ultrasound system.

    Science.gov (United States)

    Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro

    2013-02-01

    The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The implementation of the method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™) referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. The new automated lumen measurements showed better agreement with manual lumen area tracings than those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated improved systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of differences: 2.21 vs. 4.02 mm²) compared with the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus offer a more reliable quantitative evaluation of lumen geometry. Copyright © 2011 Wiley Periodicals, Inc.
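
    The agreement statistics quoted above (correlation, mean difference, standard deviation of differences) can be reproduced for any pair of measurement series; the sketch below does so with NumPy on invented lumen areas, not the study data.

```python
# Agreement between automated and manual lumen areas: Pearson correlation,
# mean difference (systematic error) and SD of differences (random
# variability). The values are made up for illustration.
import numpy as np

auto_mm2   = np.array([5.1, 6.8, 4.2, 7.9, 5.5])   # automated tracings (mm^2)
manual_mm2 = np.array([5.0, 7.0, 4.5, 7.6, 5.9])   # manual tracings (mm^2)

diff = auto_mm2 - manual_mm2
r = np.corrcoef(auto_mm2, manual_mm2)[0, 1]
print(f"correlation coefficient: {r:.3f}")
print(f"mean difference (bias):  {diff.mean():+.2f} mm^2")
print(f"SD of differences:       {diff.std(ddof=1):.2f} mm^2")
```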

  10. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    Science.gov (United States)

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-11-06

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.

  11. Design Criteria in Revitalizing Old Warehouse District on the Kalimas Riverbank Area of Surabaya City

    Directory of Open Access Journals (Sweden)

    Endang Titi Sunarti Darjosanjoto

    2015-09-01

    Full Text Available Neglected warehouse buildings along the Kalimas River have created a poor urban façade in terms of visual quality. The city government is nevertheless planning to encourage tourism activities that take advantage of the Kalimas River and its surrounding environment. Without a good plan consistent with the local identity of the old city of Surabaya, the area's value as a tourist attraction will be reduced. In reference to this issue, design criteria need to be compiled for revitalizing the old warehouse district, which is expected to revive the identity of this district and to be able to support the city's tourism. This study was conducted by recording field observations, and the data were analyzed using the character appraisal method, presented in the form of street picture data divided into determined segments. The results show that there are five components, namely place attachment, sustainable urban design, green open space design, ecological riverfront design, and activity support, that should be considered in the revitalization of the warehouse district. Those components are divided into two parts: buildings and open space at the riverbank. There are 13 design criteria for buildings at the riverbank and 14 design criteria for open space at the riverbank. These design criteria can enrich the warehouse district's revitalization by improving the visual quality of the urban environment. Keywords: design criteria; warehouse district; riverbank; Surabaya; revitalization.

  12. Design of data warehouse in teaching state based on OLAP and data mining

    Science.gov (United States)

    Zhou, Lijuan; Wu, Minhua; Li, Shuang

    2009-04-01

    Data warehouse and data mining technology is one of the hot topics of information technology research. At present, data warehouse and data mining technology has been widely applied in commerce, the financial industry, enterprise production and marketing, but its application in the educational field is comparatively rare. Over the years, teaching and management in colleges and universities have accumulated large amounts of data that cannot yet be used effectively. In light of the social needs of university development and the current status of data management, establishing a data warehouse of the university teaching state, making better use of existing data and, on that basis, performing higher-level processing, namely data mining, are particularly important. In this paper, starting from decision-making needs, we design the data warehouse structure for the university teaching state, create a data warehouse model through structural design and data extraction, loading and conversion, and finally apply an association rule mining algorithm for data mining, obtaining effective results applied in practice. The analysis and mining yield much valuable information, which can be used to guide teaching management, thereby improving teaching quality, promoting investment in teaching and enhancing the teaching infrastructure in universities, while also providing detailed, multi-dimensional information for university assessment and higher-education research.
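
    The association-rule step can be illustrated with a toy pass over course-enrolment "transactions" that computes support and confidence for item pairs; the data and thresholds below are invented, and a real system would run Apriori or FP-growth over the warehouse's fact tables.

```python
# Toy association-rule pass: support and confidence for item pairs in
# course-enrolment "transactions". Data and thresholds are invented.
from itertools import combinations
from collections import Counter

transactions = [
    {"databases", "data_mining", "statistics"},
    {"databases", "data_mining"},
    {"statistics", "algebra"},
    {"databases", "data_mining", "algebra"},
]
n = len(transactions)
item_counts, pair_counts = Counter(), Counter()
for t in transactions:
    item_counts.update(t)
    pair_counts.update(combinations(sorted(t), 2))

for (a, b), c in pair_counts.items():
    support = c / n
    confidence = c / item_counts[a]           # rule a -> b
    if support >= 0.5 and confidence >= 0.8:
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```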

  13. Improvements in the automated radioimmunoassay for cAMP or cGMP

    International Nuclear Information System (INIS)

    Brooker, G.

    1988-01-01

    The work of others in developing antibodies and the original radioimmunoassay for cyclic nucleotides provides the basis for these sensitive assays. The acetylation radioimmunoassay for cyclic nucleotides has enabled the measurement of cyclic AMP and cyclic GMP in very small biological samples, because accurate determinations can be made in samples containing less than 1 fmol of cyclic AMP or cyclic GMP. The Gamma-Flo automated radioimmunoassay system has been adapted to these assays such that cyclic nucleotides can be measured automatically at a rate of about 60 samples/hr. The Gamma-Flo instrument provides high-precision assays and eliminates human intervention in all steps of the radioimmunoassay. The automated assay has been in continuous operation in our laboratory over the last 10 years, and this chapter summarizes the methodology and delineates the improvements that have occurred over that time frame. Details for the preparation of the radioligands also apply to the manual acetylated radioimmunoassay for cyclic nucleotides.

  14. Improvement of Automated Identification of the Heart Wall in Echocardiography by Suppressing Clutter Component

    Science.gov (United States)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2013-07-01

    To facilitate analysis and eliminate operator dependence in estimating myocardial function in echocardiography, we previously developed a method for the automated identification of the heart wall. However, misclassified regions remain because the magnitude-squared coherence (MSC) function of echo signals, one of the features used in the previous method, is sensitive to clutter components such as multiple reflections and off-axis echoes from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we propose a method to suppress the effect of clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to the echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using our proposed method in the longitudinal axis view of the heart.

  15. Application of XML in real-time data warehouse

    Science.gov (United States)

    Zhao, Yanhong; Wang, Beizhan; Liu, Lizhao; Ye, Su

    2009-07-01

    At present, XML is one of the most widely used technologies for describing and exchanging data, and the need for real-time data has made the real-time data warehouse a popular area of data warehouse research. What can be gained by applying XML technology to real-time data warehouse research? XML technology solves many technical problems that cannot be addressed in a traditional real-time data warehouse and enables the integration of the OLAP (on-line analytical processing) and OLTP (on-line transaction processing) environments, so that the real-time data warehouse can truly be called "real time".

  16. Warehouse receipts functioning to reduce market risk

    Directory of Open Access Journals (Sweden)

    Jovičić Daliborka

    2014-01-01

    Full Text Available Cereal production is exposed to market risk to a great extent owing to the elasticity of demand. Grain prices move cyclically and decline significantly in harvest periods, when supply is abundant relative to demand. The very specificity of agricultural production means that farmers are often forced to sell their products under unfavorable conditions in order to resume production. The public warehouses system allows agricultural producers who were previously unable to use bank loans to finance the continuation of their production to acquire the necessary funds efficiently, with warehouse receipts serving as collateral. Based on the results obtained by applying statistical methods (variance and standard deviation as a measure of market risk, under the assumption that warehouse receipt prices will approximately follow the overall consumer price index), it can be concluded that warehouse receipts trading will have a significant impact on risk reduction in cereal production. Positive effects can be manifested through the stabilization of prices, the reduction of cyclic movements in the production of basic grains and, in the final stage, the country's food security.
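
    The risk measure named above is the ordinary variance and standard deviation of a price series; the sketch below computes both for an invented monthly grain price series using Python's standard library.

```python
# Market risk of a grain price series measured as variance and standard
# deviation. The monthly prices are invented for illustration.
import statistics

wheat_price = [19.5, 17.2, 16.8, 18.1, 20.4, 21.0, 21.8, 22.5]  # per kg, monthly

mean = statistics.mean(wheat_price)
var  = statistics.variance(wheat_price)   # sample variance
sd   = statistics.stdev(wheat_price)      # sample standard deviation
print(f"mean={mean:.2f}  variance={var:.2f}  sd={sd:.2f}")
print(f"coefficient of variation={sd / mean:.1%}")   # relative risk measure
```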

  17. IR and OLAP in XML document warehouses

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Pedersen, Torben Bach; Berlanga, Rafael

    2005-01-01

    In this paper we propose to combine IR and OLAP (On-Line Analytical Processing) technologies to exploit a warehouse of text-rich XML documents. In the system we plan to develop, a multidimensional implementation of a relevance modeling document model will be used for interactively querying...

  18. Information Support of Processes in Warehouse Logistics

    Directory of Open Access Journals (Sweden)

    Gordei Kirill

    2013-11-01

    Full Text Available In the context of globalization and worldwide economic relations, the role of information support for business processes is growing across various branches and fields of activity, and warehouse activity is no exception. Such information support is realized in warehouse logistic systems. In relation to a territorial administrative entity, the warehouse logistic system takes the form of a complex social and economic structure that controls the economic flows covering the intermediary, trade and transport organizations and the enterprises of other branches and spheres. The spatial movement of inventory items places new demands on the participants in merchandising. Warehousing, in the sense of storage, is one of the operations belonging to logistic activity and to the organization of a material flow; treating warehousing as the "management of the spatial movement of stocks" is therefore justified. Warehousing understood in this way sheds its perception as the mere holding of stocks, an expensive business; this aspiration is reflected in logistic systems working on principles such as "just in time" and "lean production". The role of warehouses as places of storage is thus being transformed into an understanding of warehousing as an innovative logistic system.

  19. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques for predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools cover equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper presents the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper is the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, are also presented.

  20. Warehouse operations planning model for Bausch & Lomb

    NARCIS (Netherlands)

    Atilgan, Ceren

    2009-01-01

    Operations planning is a major part of the Sales & Operations Planning (S&OP) process. It provides an overview of the operations capacity requirements by considering the supply and demand plan. However, Bausch & Lomb does not have a structured operations planning process for their warehouse

  1. ¿Why Data warehouse & Business Intelligence at Universidad Simon Bolivar?

    Directory of Open Access Journals (Sweden)

    Kamagate Azoumana

    2013-01-01

    Full Text Available The data warehouse is supposed to provide storage, functionality and responsiveness to queries beyond the capabilities of today's transaction databases; it is also built to improve the data access performance of databases.

  2. Managing dual warehouses with an incentive policy for deteriorating items

    Science.gov (United States)

    Yu, Jonas C. P.; Wang, Kung-Jeng; Lin, Yu-Siang

    2016-02-01

    Distributors in a supply chain usually limit their own warehouse to a finite capacity for cost reduction, and excess stock is held in a rented warehouse. In this study, we examine inventory control for deteriorating items in a two-warehouse setting. Assuming that the rented warehouse offers an incentive that allows the rental fee to decrease over time, the objective of this study is to maximise the joint profit of the manufacturer and the distributor. An optimisation procedure is developed to derive the optimal joint economic lot size policy. Several criteria are identified to select the most appropriate warehouse configuration and inventory policy on the basis of the storage duration of materials in the rented warehouse. A sensitivity analysis is performed to examine the robustness of the model. The proposed model enables a manufacturer with a channel distributor to coordinate the use of alternative warehouses and to maximise the joint profit of the manufacturer and the distributor.
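
    A heavily simplified sketch of the optimisation: choose a lot size Q that maximises joint profit when stock above the own-warehouse capacity is held in a rented warehouse whose fee declines with storage duration. The profit function and every parameter below are invented assumptions, not the paper's model.

```python
# Invented single-variable model: profit as a function of lot size Q when
# stock above own capacity W rents space whose fee falls with cycle length.
from scipy.optimize import minimize_scalar

p, c = 12.0, 7.0        # unit selling price / unit cost
D = 1000.0              # annual demand (units/year)
K = 250.0               # setup cost per order
h_own = 0.6             # holding cost per unit per year, own warehouse
r0, decay = 1.2, 0.3    # initial rent rate and its decline per year stored
theta = 0.05            # deterioration rate
W = 150.0               # own-warehouse capacity (units)

def joint_profit(Q: float) -> float:
    cycle = Q / D                              # cycle length (years)
    rent = max(r0 - decay * cycle, 0.0)        # incentive: cheaper for longer storage
    holding = h_own * min(Q, W) / 2.0 + rent * max(Q - W, 0.0) / 2.0
    ordering = K * D / Q                       # setup cost per year
    deterioration = theta * c * Q / 2.0
    return (p - c) * D - ordering - holding - deterioration

res = minimize_scalar(lambda Q: -joint_profit(Q), bounds=(1.0, 2000.0), method="bounded")
print(f"optimal lot size ~ {res.x:.0f} units, joint profit ~ {joint_profit(res.x):.0f}")
```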

  3. Developing Access Control Model of Web OLAP over Trusted and Collaborative Data Warehouses

    Science.gov (United States)

    Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon

    This paper proposes the design and development of a Role-Based Access Control (RBAC) model for Single Sign-On (SSO) Web-OLAP queries spanning multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on the dimension privileges specified in an attribute certificate (AC) and on user identification. In particular, the mapping of attributes between DW user authentication and dimensional access privileges is illustrated. In our approach, we apply a multi-agent system to automate flexible and effective management of user authentication, role delegation and system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs), which incorporates the OLAP features and authentication and authorization enforcement in a multi-user, multi-data-warehouse environment.
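
    A toy sketch in the spirit of the dimension-level authorization described above: a role carries a set of dimension privileges (as an attribute certificate would), and a query is allowed only if every dimension it touches is granted. The role names and dimensions are invented.

```python
# Toy dimension-level RBAC check. Roles and dimension grants are invented.
ROLE_PRIVS = {
    "analyst": {"time", "product"},
    "manager": {"time", "product", "region", "cost_center"},
}

def authorize(role: str, query_dims: set[str]) -> bool:
    """Allow the query only if every dimension it touches is granted."""
    return query_dims <= ROLE_PRIVS.get(role, set())

print(authorize("analyst", {"time", "product"}))       # True
print(authorize("analyst", {"time", "cost_center"}))   # False: no grant
```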

  4. Finding patients using similarity measures in a rare diseases-oriented clinical data warehouse: Dr. Warehouse and the needle in the needle stack.

    Science.gov (United States)

    Garcelon, Nicolas; Neuraz, Antoine; Benoit, Vincent; Salomon, Rémi; Kracker, Sven; Suarez, Felipe; Bahi-Buisson, Nadia; Hadj-Rabia, Smail; Fischer, Alain; Munnich, Arnold; Burgun, Anita

    2017-09-01

    In the context of rare diseases, it may be helpful to detect patients with similar medical histories, diagnoses and outcomes from a large number of cases with automated methods. To reduce the time needed to find new cases, we developed a method to find patients similar to a given index case by leveraging data from electronic health records. We used the clinical data warehouse of a children's academic hospital in Paris, France (Necker-Enfants Malades), containing about 400,000 patients. Our model was based on a vector space model (VSM) to compute the similarity distance between an index patient and all the patients of the data warehouse. The dimensions of the VSM were built upon Unified Medical Language System concepts extracted from clinical narratives stored in the clinical data warehouse. The VSM was enhanced using three parameters: a pertinence score (the TF-IDF of the concepts), the polarity of each concept (negated/not negated) and the minimum number of concepts in common. We evaluated this model by displaying the most similar patients for five different rare diseases: Lowe Syndrome (LOWE), Dystrophic Epidermolysis Bullosa (DEB), Activated PI3K delta Syndrome (APDS), Rett Syndrome (RETT) and Dowling-Meara (EBS-DM), represented in the clinical data warehouse by 18, 103, 21, 84 and 7 patients, respectively. The percentages of index patients returning at least one true positive similar patient among the top 30 similar patients were 94% for LOWE, 97% for DEB, 86% for APDS, 71% for EBS-DM and 99% for RETT. The mean proportion of patients with the exact same genetic disease among the 30 returned patients was 51%. This tool offers new perspectives in a translational context to identify patients for genetic research. Moreover, when new molecular bases are discovered, our strategy will help to identify additional eligible patients for genetic screening. Copyright © 2017. Published by Elsevier Inc.
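
    A minimal sketch of the vector space model described above, assuming patients are represented as bags of UMLS-style concept codes, with negated concepts kept distinct by a hypothetical NEG_ prefix, weighted by TF-IDF and ranked by cosine similarity; the codes and records are invented, and this is not Dr. Warehouse's actual implementation.

```python
# TF-IDF vector space similarity over concept codes. NEG_ marks negated
# concepts so opposite polarities do not match. All data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patients = {
    "index": "C0022658 C0035078 NEG_C0011849",
    "pat_A": "C0022658 C0035078",
    "pat_B": "C0011849 C0020538",
}
ids = list(patients)
vec = TfidfVectorizer(token_pattern=r"\S+")     # keep whole codes as tokens
X = vec.fit_transform(patients[p] for p in ids)

sims = cosine_similarity(X[0], X[1:]).ravel()   # index patient vs the rest
for pid, s in sorted(zip(ids[1:], sims), key=lambda t: -t[1]):
    print(f"{pid}: similarity={s:.3f}")
```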

  5. Medical ADP Systems: Automated Medical Records Hold Promise to Improve Patient Care

    Science.gov (United States)

    1991-01-01

    The report discusses the potential benefits that automation could bring to the quality of patient care and the factors that impede its adoption. Health care organizations have automated many information systems, but no organization has fully automated one of the most critical types of information, the patient medical record. GAO's objectives in this study were to identify the (1) benefits of automating patient records and (2) factors

  6. A clinician friendly data warehouse oriented toward narrative reports: Dr. Warehouse.

    Science.gov (United States)

    Garcelon, Nicolas; Neuraz, Antoine; Salomon, Rémi; Faour, Hassan; Benoit, Vincent; Delapalme, Arthur; Munnich, Arnold; Burgun, Anita; Rance, Bastien

    2018-04-01

    Clinical data warehouses are often oriented toward integration and exploration of coded data. However narrative reports are of crucial importance for translational research. This paper describes Dr. Warehouse®, an open source data warehouse oriented toward clinical narrative reports and designed to support clinicians' day-to-day use. Dr. Warehouse relies on an original database model to focus on documents in addition to facts. Besides classical querying functionalities, the system provides an advanced search engine and Graphical User Interfaces adapted to the exploration of text. Dr. Warehouse is dedicated to translational research with cohort recruitment capabilities, high throughput phenotyping and patient centric views (including similarity metrics among patients). These features leverage Natural Language Processing based on the extraction of UMLS® concepts, as well as negation and family history detection. A survey conducted after 6 months of use at the Necker Children's Hospital shows a high rate of satisfaction among the users (96.6%). During this period, 122 users performed 2837 queries, accessed 4,267 patients' records and included 36,632 patients in 131 cohorts. The source code is available at this github link https://github.com/imagine-bdd/DRWH. A demonstration based on PubMed abstracts is available at https://imagine-plateforme-bdd.fr/dwh_pubmed/. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Improved automated analysis of radon (222Rn) and thoron (220Rn) in natural waters.

    Science.gov (United States)

    Dimova, Natasha; Burnett, William C; Lane-Smith, Derek

    2009-11-15

    Natural radon (222Rn) and thoron (220Rn) can be used as tracers of various chemical and physical processes in the environment. We present here results from an extended series of laboratory experiments intended to improve the automated analysis of 222Rn and 220Rn in water using a modified RAD AQUA (Durridge Inc.) system. Previous experience with similar equipment showed that it takes about 30-40 min for the system to equilibrate to increases in radon-in-water concentration, and even longer for the response to return to baseline after a sharp spike. While the original water/gas exchanger setup was built only for radon-in-water measurement, our goal here is to provide an automated system capable of high resolution and good sensitivity for both radon- and thoron-in-water detection. We found that faster water flow rates substantially improved the response for both isotopes, while thoron is detected most efficiently at an airflow rate of 3 L/min. Our results show that the optimum conditions for the fastest response and sensitivity for both isotopes are water flow rates up to 17 L/min and an airflow rate of 3 L/min through the detector. Applications for such measurements include prospecting for naturally occurring radioactive material (NORM) in pipelines and locating points of groundwater/surface water interaction.

  8. Clinical Use of an Enterprise Data Warehouse

    Science.gov (United States)

    Evans, R. Scott; Lloyd, James F.; Pierce, Lee A.

    2012-01-01

    The enormous amount of data being collected by electronic medical records (EMR) has found additional value when integrated and stored in data warehouses. The enterprise data warehouse (EDW) allows all data from an organization with numerous inpatient and outpatient facilities to be integrated and analyzed. We have found the EDW at Intermountain Healthcare to not only be an essential tool for management and strategic decision making, but also for patient specific clinical decision support. This paper presents the structure and two case studies of a framework that has provided us the ability to create a number of decision support applications that are dependent on the integration of previous enterprise-wide data in addition to a patient’s current information in the EMR. PMID:23304288

  9. Warehouse site selection in an international environment

    Directory of Open Access Journals (Sweden)

    Sebastjan ŠKERLIČ

    2013-01-01

    Full Text Available The changed conditions in the automotive industry, as the market and production move from west to east at both the global and the European level, require constant adjustment from Slovenian companies. The companies strive to remain close to their customers and suppliers, as only by maintaining a high-quality and streamlined supply chain can their long-term existence within the demanding automotive industry be guaranteed. Choosing the right location for a warehouse in an international environment is therefore one of the most important strategic decisions, taking into account a number of interrelated factors such as transport networks, transport infrastructure, trade flows and total cost. This paper aims to explore the important aspects of selecting a location for a warehouse and to identify potential international strategic locations that could have a significant impact on the future operations of Slovenian companies in the global automotive industry.

  10. A framework for information warehouse development processes

    OpenAIRE

    Holten, Roland

    1999-01-01

    Since the terms Data Warehouse and On-Line Analytical Processing were proposed by Inmon and by Codd, Codd and Salley, respectively, the traditional ideas of creating information systems in support of management's decisions have again become interesting in theory and practice. Today information warehousing is a strategic market for any database systems vendor. Nevertheless, the theoretical discussion of this topic goes back to the early years of the 20th century as far as management science and accounting the...

  11. Warehouse Order-Picking Process. Review

    Directory of Open Access Journals (Sweden)

    E. V. Korobkov

    2015-01-01

    Full Text Available This article describes the basic warehousing activities, namely movement, information storage and transfer, as well as the connections between typical warehouse operations (reception, transfer, assignment of storage positions and put-away, order picking, hoarding and sorting, cross-docking, shipping). It presents a classification of warehouse order-picking systems in terms of the share of manual labour, as well as the external (marketing channels, structure of consumer demand, structure and inventory level of supplier replenishment, total production demand, economic situation) and internal (level of mechanization, information accessibility, warehouse dimensionality, method of dispatch for shipping, zoning, batching, storage assignment method, routing method) factors affecting the complexity of system design. Basic optimization considerations are described, and the literature on the following sub-problems of planning and control of order-picking processes is reviewed. The layout design problem is considered at two levels: external (the facility layout problem) and internal (the aisle configuration problem). For the problem of distributing goods or stock keeping units, the following methods are emphasized: random, nearest open storage position, and dedicated (COI-based, frequency-based) distribution, as well as class-based and family-grouped (complementarity- and contact-based) distribution. The batching problem can be solved by two main methods, proximity order batching (seed and savings algorithms) and time-window order batching. There are two strategies for the zoning problem, progressive and synchronized, as well as a special case of zoning, the bucket brigades method. The hoarding/sorting problem is briefly reviewed. The order-picking routing problem will be thoroughly described in the next article of the cycle "Warehouse order-picking process".
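
    As a toy illustration of the proximity (seed) batching family mentioned above, the sketch below starts a batch from a seed order and greedily adds the order sharing the most aisles until a picker capacity is reached; the orders, the seed rule and the capacity are all invented.

```python
# Toy seed batching: grow each batch from a seed order by adding the order
# with the largest aisle overlap until capacity. All inputs are invented.
def seed_batching(orders: dict[str, set[int]], capacity: int) -> list[list[str]]:
    remaining = dict(orders)
    batches = []
    while remaining:
        seed, seed_aisles = remaining.popitem()      # arbitrary seed rule
        aisles = set(seed_aisles)
        batch = [seed]
        while len(batch) < capacity and remaining:
            best = max(remaining, key=lambda o: len(remaining[o] & aisles))
            aisles |= remaining.pop(best)
            batch.append(best)
        batches.append(batch)
    return batches

orders = {"o1": {1, 2}, "o2": {2, 3}, "o3": {7, 8}, "o4": {1, 3}, "o5": {8, 9}}
print(seed_batching(orders, capacity=2))
```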

  12. Warehouses information system design and development

    Science.gov (United States)

    Darajatun, R. A.; Sukanta

    2017-12-01

    The handling of materials and goods is fundamental for companies to ensure the smooth running of their warehouses, and efficiency and organization within every aspect of the business are essential in order to gain a competitive advantage. The purpose of this research is the design and development of a Kanban-based inventory storage and delivery system. The application aims to make inventory stock checks more efficient and effective. Users can easily record finished goods from the production department, the warehouse, customers and suppliers. The master data is designed to be as complete as possible so that the application can be used for a variety of warehouse logistics processes. The author uses the Java programming language to develop the web application, while the database used is MySQL. The system development methodology used is the Waterfall methodology, which has several stages: analysis, system design, implementation, integration, and operation and maintenance. In collecting data the author uses observation, interviews, and a literature review.

  13. Importance of public warehouse system for financing agribusiness sector

    Directory of Open Access Journals (Sweden)

    Zakić Vladimir

    2014-01-01

    Full Text Available The aim of this study was to determine the economic viability of using warehouse receipts for the storage of wheat and corn, based on an analysis of trends in product prices, storage costs in public warehouses and the interest rates of loans against warehouse receipts. Agricultural producers are urged to sell grain at harvest time, when the prices of agricultural products are usually lowest, mostly because of their need for financing. Instead of selling their products, farmers can store them in public warehouses and obtain short-term financing by borrowing against warehouse receipts, usually at the lowest interest rate. In the following months, farmers can sell the products at a higher price and repay the short-term loan. This study showed that the strategy of using public warehouses and postponing the sale of grain until after harvest is profitable for agricultural producers.

  14. Minimizing Warehouse Space through Inventory Reduction at Reckitt Benckiser

    OpenAIRE

    KILINC, IZGI SELEN

    2009-01-01

    This dissertation reports on a ten-week internship at Reckitt Benckiser's pharmaceutical plant for the Warehouse Stock Reduction Project. Due to foreseeable growth at the factory, there is increasing pressure to make better use of the existing warehouse space by reducing the existing stock level by 50%. This study therefore aims to identify opportunities to reduce the physical stock of raw and packaging materials held in the warehouse and to free space for additional manufacturing resources. The analysis demo...

  15. The impact of e-commerce on warehouse operations

    Directory of Open Access Journals (Sweden)

    Wiktor Żuchowski

    2016-03-01

    Full Text Available Background: We often encounter opinions concerning the unusual nature of warehouses used for e-commerce, most often spread by providers of modern technological equipment and by designers of such solutions. Of course, in the case of newly built facilities it is advisable to consider innovative technologies, especially for order picking. However, in many cases the differences between "standard" warehouses, serving for example the vehicle spare parts market, and warehouses ready to handle retail orders placed electronically (defined here as e-commerce) are negligible. The scale of the differences between existing "standard" warehouses and those adapted to handle e-commerce depends on the industry and on the structure of the customers served. Methods: On the basis of experience and of examples from enterprises, two cases of the impact of a hypothetical e-commerce implementation on warehouse organization and technology have been analysed. Results: The introduction of e-commerce into a warehouse entails corresponding changes to the orders handled. Warehouses serving the retail market are in principle prepared to process electronic orders; in this case, the introduction of direct electronic sales is justified and feasible with relatively little effort. Conclusions: It cannot be said with certainty that the introduction of e-commerce in the warehouse is a revolution for its employees and managers; it depends on the markets in which the company operates and on the customers served by the warehouse prior to the introduction of e-commerce.

  16. Safety motion increase of trains by improvement diagnostics process devices of railway automation

    Directory of Open Access Journals (Sweden)

    B.M.Bondarenko

    2012-12-01

    Full Text Available The combined use of nondestructive testing methods for the automated diagnostics of first-reliability-class electromagnetic relays used in railway automation is proposed. Methods for determining their mechanical parameters are presented, which make it possible to exclude the human factor from inspection, increase the reliability of railway automation devices and improve the safety of railway traffic.

  17. Improved protein hydrogen/deuterium exchange mass spectrometry platform with fully automated data processing.

    Science.gov (United States)

    Zhang, Zhongqi; Zhang, Aming; Xiao, Gang

    2012-06-05

    Protein hydrogen/deuterium exchange (HDX) followed by protease digestion and mass spectrometric (MS) analysis is accepted as a standard method for studying protein conformation and conformational dynamics. In this article, an improved HDX MS platform with fully automated data processing is described. The platform significantly reduces systematic and random errors in the measurement by introducing two types of corrections in HDX data analysis. First, a mixture of short peptides with fast HDX rates is introduced as internal standards to adjust the variations in the extent of back exchange from run to run. Second, a designed unique peptide (PPPI) with slow intrinsic HDX rate is employed as another internal standard to reflect the possible differences in protein intrinsic HDX rates when protein conformations at different solution conditions are compared. HDX data processing is achieved with a comprehensive HDX model to simulate the deuterium labeling and back exchange process. The HDX model is implemented into the in-house developed software MassAnalyzer and enables fully unattended analysis of the entire protein HDX MS data set starting from ion detection and peptide identification to final processed HDX output, typically within 1 day. The final output of the automated data processing is a set (or the average) of the most possible protection factors for each backbone amide hydrogen. The utility of the HDX MS platform is demonstrated by exploring the conformational transition of a monoclonal antibody by increasing concentrations of guanidine.
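
    A minimal sketch of the internal-standard idea described above: a fully deuterated standard peptide estimates the fraction of label lost to back exchange in a run, and the measured sample uptake is rescaled by that fraction. The numbers and the simple linear correction are assumptions for illustration, not the platform's actual model.

```python
# Back-exchange correction via a fully deuterated internal standard.
# Values and the linear rescaling are illustrative assumptions.
def back_exchange_fraction(measured_std: float, theoretical_std: float) -> float:
    """Fraction of deuterium lost by the fully exchanged standard peptide."""
    return 1.0 - measured_std / theoretical_std

def correct_uptake(measured: float, be: float) -> float:
    """Rescale a measured deuterium uptake for back exchange."""
    return measured / (1.0 - be)

be = back_exchange_fraction(measured_std=7.1, theoretical_std=10.0)  # ~29% lost
print(f"back exchange: {be:.0%}")
print(f"corrected uptake: {correct_uptake(3.2, be):.2f} Da")
```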

  18. Improvement of Vivarium Biodecontamination through Data-acquisition Systems and Automation.

    Science.gov (United States)

    Devan, Shakthi Rk; Vasu, Suresh; Mallikarjuna, Yogesha; Ponraj, Ramkumar; Kamath, Gireesh; Poosala, Suresh

    2018-03-01

    Biodecontamination is important for eliminating pathogens at research animal facilities, thereby preventing contamination within barrier systems. We enhanced our facility's standard biodecontamination method, replacing the traditional foggers, and the new system was used effectively after bypass ducts were created in the HVAC units so that individual rooms could be isolated. The entire system was controlled by in-house-developed supervisory control and data-acquisition software that supported multiple decontamination cycles run by equipment with different decontamination capacities, operated in parallel, and using different agents, including H2O2 vapor and ClO2 gas. The process was validated according to facility mapping, and effectiveness was assessed by using biologic (Geobacillus stearothermophilus) and chemical indicator strips, which were positioned before decontamination, and by sampling contact plates after the completion of each cycle. The biologic indicators showed a 6-log reduction in microbial counts after successful decontamination cycles for both agents, and the process was found to be compatible with clean-room panels and with materials commonly used in a vivarium, such as racks, cages, trolleys, cage-changing stations, biosafety cabinets, refrigerators and other equipment in both procedure and animal rooms. In conclusion, the automated process enabled users to perform effective decontamination through multiple cycles with real-time documentation and provided additional capability to deal with potential outbreaks. Integrating the automation software improved the quality-control systems in our vivarium.

  19. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    International Nuclear Information System (INIS)

    Gleisberg, Tanju

    2008-01-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence in theoretical predictions. Part I of this thesis concentrates on calculations of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels in the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment for complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving both precision and efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real-correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  20. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    Energy Technology Data Exchange (ETDEWEB)

    Gleisberg, Tanju

    2008-07-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence in theoretical predictions. Part I of this thesis concentrates on calculations of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels in the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment for complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving both precision and efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real-correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  1. A Multi-Agent Approach for the Extract-Transform-Load Process Support in Data Warehouses

    Directory of Open Access Journals (Sweden)

    Daniel Betancur-Calderón

    2012-06-01

    Full Text Available In order to provide an adequate solution in terms of robustness and automation for the Extract-Transform-Load (ETL) process in data warehouses, this article presents a multi-agent model that gathers the strengths of other approaches such as wrappers and ad hoc solutions. The model considers the heterogeneity and availability of the data sources as well as their distributed nature. For its validation, an experiment was performed using simulated and real data, which demonstrated not only its technical feasibility but also its effectiveness in terms of the percentage of processed data and the time needed to accomplish it.
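
    As a minimal sketch of the multi-agent ETL idea, assuming one extract, one transform and one load agent cooperating through queues, the code below cleans a couple of invented source rows and loads them into an in-memory "warehouse"; it illustrates the architecture, not the paper's implementation.

```python
# Three cooperating ETL "agents" passing work through queues.
# Source data and cleaning rules are invented for illustration.
import queue
import threading

raw_q, clean_q = queue.Queue(), queue.Queue()
SENTINEL = None

def extract_agent(rows):
    for row in rows:                      # stand-in for reading a source DB
        raw_q.put(row)
    raw_q.put(SENTINEL)

def transform_agent():
    while (row := raw_q.get()) is not SENTINEL:
        name, amount = row
        clean_q.put((name.strip().upper(), float(amount)))  # simple cleaning
    clean_q.put(SENTINEL)

def load_agent(warehouse):
    while (row := clean_q.get()) is not SENTINEL:
        warehouse.append(row)             # stand-in for an INSERT

warehouse = []
source = [(" acme ", "10.5"), ("globex", "3")]
threads = [threading.Thread(target=extract_agent, args=(source,)),
           threading.Thread(target=transform_agent),
           threading.Thread(target=load_agent, args=(warehouse,))]
for t in threads: t.start()
for t in threads: t.join()
print(warehouse)   # [('ACME', 10.5), ('GLOBEX', 3.0)]
```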

  2. Control Performance Management in Industrial Automation Assessment, Diagnosis and Improvement of Control Loop Performance

    CERN Document Server

    Jelali, Mohieddine

    2013-01-01

    Control Performance Management in Industrial Automation provides a coherent and self-contained treatment of a group of methods and applications of burgeoning importance for detecting and solving problems with the control loops that are vital in maintaining product quality, operational safety, and efficiency of material and energy consumption in the process industries. The monograph deals with all aspects of control performance management (CPM), from controller assessment (minimum-variance-control-based and advanced methods), to the detection and diagnosis of control loop problems (process non-linearities, oscillations, actuator faults), to the improvement of control performance (maintenance, re-design of loop components, automatic controller re-tuning). It provides a contribution towards the development and application of completely self-contained and automatic methodologies in the field. Moreover, within this work, many CPM tools have been developed that go far beyond available CPM packages. Control Perform...

  3. Automated linear regression tools improve RSSI WSN localization in multipath indoor environment

    Directory of Open Access Journals (Sweden)

    Laermans Eric

    2011-01-01

    Full Text Available Received signal strength indication (RSSI)-based localization is emerging in wireless sensor networks (WSNs). Localization algorithms need to account for the physical and hardware limitations of RSSI measurements in order to give more accurate results in dynamic real-life indoor environments. In this study, we use the Interdisciplinary Institute for Broadband Technology real-life test bed and present an automated method to optimize and calibrate the experimental data before offering them to a positioning engine. In a localization preprocessing step, we introduce a new method to provide bounds for the range, thereby further improving the accuracy of our simple and fast 2D localization algorithm based on corrected distance circles. A maximum likelihood algorithm with a mean square error cost function has a higher median position error than our algorithm. Our experiments further show that the complete proposed algorithm eliminates outliers and avoids any manual calibration procedure.
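
    A minimal sketch of the automated linear-regression calibration this record alludes to: fitting a log-distance path-loss model to anchor measurements by least squares and inverting it to turn RSSI into range. The calibration points and the fitted parameters are invented.

```python
# Fit RSSI = A - 10*n*log10(d) by least squares, then invert for ranging.
# Calibration data are invented for illustration.
import numpy as np

dist = np.array([1.0, 2.0, 4.0, 8.0, 16.0])          # metres (known anchors)
rssi = np.array([-40.0, -48.5, -55.0, -63.0, -71.5]) # dBm (measured)

slope, A = np.polyfit(np.log10(dist), rssi, 1)       # linear regression
n = -slope / 10.0                                    # path-loss exponent
print(f"path-loss exponent n = {n:.2f}, reference power A = {A:.1f} dBm")

def rssi_to_distance(r: float) -> float:
    """Invert the fitted model to estimate range from a new RSSI sample."""
    return 10 ** ((A - r) / (10 * n))

print(f"estimated distance at -60 dBm: {rssi_to_distance(-60.0):.1f} m")
```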

  4. Means to improve underground coal mine safety by automated control of methane drainage systems

    Directory of Open Access Journals (Sweden)

    Babut Gabriel Bujor

    2017-01-01

    Full Text Available Based on a critical analysis of the current management of methane drainage system operation in the Jiu Valley collieries, the paper assesses the basic elements required to develop an automated monitoring and control system for these installations. The results obtained from studies and research also allowed the formulation of proposals to modify the manual control procedures of methane drainage system operation, in order to align them with the legislative requirements of countries with a well-developed mining industry. Putting these proposals into practice could have immediate and beneficial effects on the efficiency of the methane drainage process, while also leading to an improved working environment and, implicitly, to a higher level of occupational safety and health in the Jiu Valley collieries.

  5. Developing and Integrating Advanced Movement Features Improves Automated Classification of Ciliate Species.

    Science.gov (United States)

    Soleymani, Ali; Pennekamp, Frank; Petchey, Owen L; Weibel, Robert

    2015-01-01

    Recent advances in tracking technologies such as GPS or video tracking systems describe the movement paths of individuals in unprecedented detail and are increasingly used in different fields, including ecology. However, extracting information from raw movement data requires advanced analysis techniques, for instance to infer behaviors expressed during a certain period of the recorded trajectory, or gender or species identity in cases where data are obtained from remote tracking. In this paper, we address how different movement features affect the ability to automatically classify species identity, using a dataset of unicellular microbes (i.e., ciliates). Previously, morphological attributes and simple movement metrics, such as speed, were used for classifying ciliate species. Here, we demonstrate that adding advanced movement features, in particular features based on the discrete wavelet transform, to morphological features can improve classification. These results may have practical applications in the automated monitoring of wastewater facilities as well as the environmental monitoring of aquatic systems.
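
    As a minimal sketch of wavelet-based movement features, the code below decomposes a synthetic speed time series with a discrete wavelet transform and summarises it as energy per decomposition level; it assumes the PyWavelets package, and the choices of wavelet ("db4") and depth are illustrative.

```python
# Energy per wavelet decomposition level of a synthetic speed series,
# usable as classifier features. Assumes the PyWavelets package (pywt).
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 256)
speed = np.abs(np.sin(t) + 0.3 * rng.standard_normal(t.size))  # synthetic speeds

coeffs = pywt.wavedec(speed, "db4", level=4)        # approximation + 4 detail levels
features = [float(np.sum(c ** 2)) for c in coeffs]  # energy per level
print([round(f, 2) for f in features])              # feature vector for one track
```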

  6. Harnessing the Power of Scientific Data Warehouses

    Directory of Open Access Journals (Sweden)

    Kevin Deeb

    2005-04-01

    Full Text Available Data warehousing architecture should generally protect the confidentiality of data before publication, provide sufficient granularity to enable scientists to manipulate data in various ways, support robust metadata services, and define standardized spatial components. Data can then be transformed into information that is readily available in a common format that is easily accessible, fast, and bridges the islands of dispersed information. The benefits of the warehouse can be further enhanced by adding a spatial component, so that the data can be brought to life: overlapping layers of information in a format that is easily grasped by management, enabling them to tease out trends in their areas of expertise.

  7. Improved cancer detection in automated breast ultrasound by radiologists using Computer Aided Detection

    Energy Technology Data Exchange (ETDEWEB)

    Zelst, J.C.M. van, E-mail: Jan.vanZelst@radboudumc.nl [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Tan, T.; Platel, B. [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Jong, M. de [Jeroen Bosch Medical Centre, Department of Radiology, ‘s-Hertogenbosch (Netherlands); Steenbakkers, A. [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Mourits, M. [Jeroen Bosch Medical Centre, Department of Radiology, ‘s-Hertogenbosch (Netherlands); Grivegnee, A. [Jules Bordet Institute, Department of Radiology, Brussels (Belgium); Borelli, C. [Catholic University of the Sacred Heart, Department of Radiological Sciences, Rome (Italy); Karssemeijer, N.; Mann, R.M. [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands)

    2017-04-15

    Objective: To investigate the effect of dedicated Computer Aided Detection (CAD) software for automated breast ultrasound (ABUS) on the performance of radiologists screening for breast cancer. Methods: 90 ABUS views of 90 patients were randomly selected from a multi-institutional archive of cases collected between 2010 and 2013. This dataset included normal cases (n = 40) with >1 year of follow up, benign (n = 30) lesions that were either biopsied or remained stable, and malignant lesions (n = 20). Six readers evaluated all cases with and without CAD in two sessions. CAD-software included conventional CAD-marks and an intelligent minimum intensity projection of the breast tissue. Readers reported using a likelihood-of-malignancy scale from 0 to 100. Alternative free-response ROC analysis was used to measure the performance. Results: Without CAD, the average area-under-the-curve (AUC) of the readers was 0.77 and significantly improved with CAD to 0.84 (p = 0.001). Sensitivity of all readers improved (range 5.2–10.6%) by using CAD but specificity decreased in four out of six readers (range 1.4–5.7%). No significant difference was observed in the AUC between experienced radiologists and residents both with and without CAD. Conclusions: Dedicated CAD-software for ABUS has the potential to improve the cancer detection rates of radiologists screening for breast cancer.

  8. Improved cancer detection in automated breast ultrasound by radiologists using Computer Aided Detection

    International Nuclear Information System (INIS)

    Zelst, J.C.M. van; Tan, T.; Platel, B.; Jong, M. de; Steenbakkers, A.; Mourits, M.; Grivegnee, A.; Borelli, C.; Karssemeijer, N.; Mann, R.M.

    2017-01-01

    Objective: To investigate the effect of dedicated Computer Aided Detection (CAD) software for automated breast ultrasound (ABUS) on the performance of radiologists screening for breast cancer. Methods: 90 ABUS views of 90 patients were randomly selected from a multi-institutional archive of cases collected between 2010 and 2013. This dataset included normal cases (n = 40) with >1 year of follow up, benign (n = 30) lesions that were either biopsied or remained stable, and malignant lesions (n = 20). Six readers evaluated all cases with and without CAD in two sessions. CAD-software included conventional CAD-marks and an intelligent minimum intensity projection of the breast tissue. Readers reported using a likelihood-of-malignancy scale from 0 to 100. Alternative free-response ROC analysis was used to measure the performance. Results: Without CAD, the average area-under-the-curve (AUC) of the readers was 0.77 and significantly improved with CAD to 0.84 (p = 0.001). Sensitivity of all readers improved (range 5.2–10.6%) by using CAD but specificity decreased in four out of six readers (range 1.4–5.7%). No significant difference was observed in the AUC between experienced radiologists and residents both with and without CAD. Conclusions: Dedicated CAD-software for ABUS has the potential to improve the cancer detection rates of radiologists screening for breast cancer.

  9. Design and Control of Warehouse Order Picking: a literature review

    NARCIS (Netherlands)

    M.B.M. de Koster (René); T. Le-Duc (Tho); K.J. Roodbergen (Kees-Jan)

    2006-01-01

    Order picking has long been identified as the most labour-intensive and costly activity for almost every warehouse; the cost of order picking is estimated to be as much as 55% of the total warehouse operating expense. Any underperformance in order picking can lead to unsatisfactory

  10. Building a data warehouse with examples in SQL server

    CERN Document Server

    Rainardi, Vincent

    2008-01-01

    ""Building a Data Warehouse: With Examples in SQL Server"" describes how to build a data warehouse completely from scratch and shows practical examples on how to do it. Author Rainardi also describes some practical issues that developers are likely to encounter in their first data warehousing project, along with solutions and advice.

  11. 27 CFR 46.236 - Articles in a warehouse.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 2 2010-04-01 2010-04-01 false Articles in a warehouse... Tubes Held for Sale on April 1, 2009 Filing Requirements § 46.236 Articles in a warehouse. (a) Articles... articles will be offered for sale. (b) Articles offered for sale at several locations must be reported on a...

  12. Multidimensi Pada Data Warehouse Dengan Menggunakan Rumus Kombinasi

    OpenAIRE

    Hendric, Spits Warnars Harco Leslie

    2006-01-01

    Multidimensionality in a data warehouse is compulsory and is the most important element of information delivery; without multidimensionality a data warehouse is incomplete. Multidimensionality gives the ability to analyze business measurements in many different ways and is synonymous with on-line analytical processing (OLAP).

  13. Optimal time policy for deteriorating items of two-warehouse

    Indian Academy of Sciences (India)

    ... goods in which the first is a rented warehouse and the second is the firm's own warehouse, with goods deteriorating at two different rates. The aim of this study is to determine the optimal order quantity that maximizes the profit of the projected model. Finally, some numerical examples and a sensitivity analysis of the parameters are given to validate ...

  14. 27 CFR 28.286 - Receipt in customs bonded warehouse.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Receipt in customs bonded... in Customs Bonded Warehouse § 28.286 Receipt in customs bonded warehouse. On receipt of the distilled spirits or wine and the related TTB Form 5100.11 or 5110.30 as the case may be, the customs officer in...

  15. 19 CFR 19.1 - Classes of customs warehouses.

    Science.gov (United States)

    2010-04-01

    19 CFR 19.1 (Customs Duties; 2010-04-01), § 19.1 Classes of customs warehouses, under U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury: Customs Warehouses, Container Stations and Control of Merchandise Therein.

  16. Identifying and Prioritizing Cleaner Production Strategies in Raw Materials’ Warehouse of Yazdbaf Textile Company in 2015

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ghaneian

    2017-03-01

    Introduction: Cleaner production in the textile industry is achieved by reducing water and chemical consumption, saving energy, reducing air pollution and solid waste, and reducing toxicity and noise pollution through many solutions. The purpose of the present research was to apply Strengths, Weaknesses, Opportunities, Threats (SWOT) and Quantitative Strategic Planning Matrix (QSPM) techniques to identify and prioritize cleaner production strategies in the raw materials warehouse of the Yazdbaf Textile Company. Materials and Methods: In this research, internal and external factors affecting cleaner production were identified from information gathered through field visits and interviews with industry managers and supervisors of the raw materials warehouse. To form the matrices of internal and external factors, 17 important internal factors and 7 important external factors were identified and selected. A QSPM matrix was then formed to determine the attractiveness and priority of the selected strategies, using the results of the internal factor, external factor, and SWOT matrices. Results: According to the results, the total score of the raw materials warehouse in the Internal Factor Evaluation (IFE) matrix is 2.90, which indicates a good situation of the warehouse with respect to internal factors. However, the total score in the External Factor Evaluation (EFE) matrix is 2.14, indicating a relatively weak situation with respect to external factors. Conclusion: Based on the results, continuing, monitoring, and improving the general plan of quality control (QC) of raw materials and the laboratory, as well as placing more emphasis on quality indexes according to their importance in the production processes, were selected as the most important strategies.

  17. Handling Imprecision in Qualitative Data Warehouse: Urban Building Sites Annoyance Analysis Use Case

    Science.gov (United States)

    Amanzougarene, F.; Chachoua, M.; Zeitouni, K.

    2013-05-01

    Data warehouse means a decision-support database allowing integration, organization, historization, and management of data from heterogeneous sources, with the aim of exploiting them for decision-making. Data warehouses are essentially based on a multidimensional model. This model organizes data into facts (subjects of analysis) and dimensions (axes of analysis). In classical data warehouses, facts are composed of numerical measures and the dimensions that characterize them. Dimensions are organized into hierarchical levels of detail. Based on the navigation and aggregation mechanisms offered by OLAP (On-Line Analytical Processing) tools, facts can be analyzed at the desired level of detail. In real-world applications, facts are not always numerical and can be of a qualitative nature. In addition, a human expert or a learned model such as a decision tree sometimes provides a qualitative evaluation of a phenomenon based on its different parameters, i.e., dimensions. Conventional data warehouses are thus not adapted to qualitative reasoning and lack the ability to deal with qualitative data. In previous work, we proposed an original approach to qualitative data warehouse modeling that permits integrating qualitative measures. Based on the computing-with-words methodology, we extended the classical multidimensional data model to allow the aggregation and analysis of qualitative data in an OLAP environment. We implemented this model in a Spatial Decision Support System to help managers of public spaces reduce annoyances and improve the quality of life of citizens. In this paper, we focus our study on the representation and management of imprecision in the annoyance analysis process. The main objective of this process is to determine the least harmful scenario of urban building sites, particularly in dense urban environments.
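
    To make the idea of aggregating qualitative measures more tangible, here is a toy sketch that aggregates ordinal linguistic labels (such as annoyance levels) by taking the median position on an ordered scale. The scale and the median operator are illustrative assumptions only; the paper's computing-with-words model is more elaborate.

    ```python
    # Toy aggregation of a qualitative measure, in the spirit of the
    # computing-with-words approach described above. The label scale and
    # the median-based operator are illustrative assumptions, not the
    # authors' actual model.
    ANNOYANCE = ["none", "low", "moderate", "high", "very high"]  # ordered scale

    def aggregate_labels(labels):
        """Aggregate ordinal labels by taking the median position on the scale."""
        ranks = sorted(ANNOYANCE.index(lbl) for lbl in labels)
        return ANNOYANCE[ranks[len(ranks) // 2]]

    # Facts for one cell of the cube (e.g., one building site, one month).
    observations = ["low", "moderate", "high", "moderate", "very high"]
    print(aggregate_labels(observations))  # -> "moderate"
    ```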

  18. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning.

    Science.gov (United States)

    Abràmoff, Michael David; Lou, Yiyue; Erginay, Ali; Clarida, Warren; Amelon, Ryan; Folk, James C; Niemeijer, Meindert

    2016-10-01

    To compare the performance of a deep-learning enhanced algorithm for automated detection of diabetic retinopathy (DR) with the previously published performance of that algorithm, the Iowa Detection Program (IDP), without deep learning components, on the same publicly available set of fundus images and the previously reported consensus reference standard set by three US Board certified retinal specialists. We used the previously reported consensus reference standard of referable DR (rDR), defined as International Clinical Classification of Diabetic Retinopathy moderate, severe nonproliferative (NPDR), proliferative DR, and/or macular edema (ME). Neither the Messidor-2 images nor the three retinal specialists setting the Messidor-2 reference standard were used for training IDx-DR version X2.1. Sensitivity, specificity, negative predictive value, area under the curve (AUC), and their confidence intervals (CIs) were calculated. Sensitivity was 96.8% (95% CI: 93.3%-98.8%), specificity was 87.0% (95% CI: 84.2%-89.4%), with 6/874 false negatives, resulting in a negative predictive value of 99.0% (95% CI: 97.8%-99.6%). No cases of severe NPDR, PDR, or ME were missed. The AUC was 0.980 (95% CI: 0.968-0.992). Sensitivity was not statistically different from the published IDP sensitivity, which had a CI of 94.4% to 99.3%, but specificity was significantly better than the published IDP specificity CI of 55.7% to 63.0%. A deep-learning enhanced algorithm for the automated detection of DR achieves significantly better performance than a previously reported, otherwise essentially identical, algorithm that does not employ deep learning. Deep learning enhanced algorithms have the potential to improve the efficiency of DR screening, and thereby to prevent visual loss and blindness from this devastating disease.
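
    As a worked illustration of how the reported screening metrics relate to confusion counts, the sketch below computes sensitivity, specificity, and negative predictive value. The counts are hypothetical and chosen only to land near the reported percentages; they are not the study's data.

    ```python
    # How the reported screening metrics are derived from confusion counts.
    # The counts below are hypothetical, not the study's actual data.
    tp, fn = 180, 6      # referable DR cases detected / missed
    tn, fp = 868, 130    # non-referable cases correctly / incorrectly flagged

    sensitivity = tp / (tp + fn)          # P(test positive | disease)
    specificity = tn / (tn + fp)          # P(test negative | no disease)
    npv = tn / (tn + fn)                  # P(no disease | test negative)

    print(f"sensitivity = {sensitivity:.1%}")
    print(f"specificity = {specificity:.1%}")
    print(f"NPV         = {npv:.1%}")
    ```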

  19. [Analysis and Design of a Data Warehouse at PT Gajah Tunggal Prakarsa]

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2010-12-01

    The purpose of this study was to analyze the database support for decision making, identify needs, and design a data warehouse. With the support of a data warehouse, company leaders can make decisions more quickly and precisely. The research methodology includes analysis of current systems, library research, and design of a data warehouse using a star schema. The result of this research is the availability of a data warehouse that can generate information quickly and precisely, thus helping the company in making decisions. The conclusion of this research is that the application of a data warehouse can be a medium to aid the related parties at PT Gajah Tunggal Prakarsa in decision making.
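
    To illustrate the star schema the study refers to, here is a minimal sketch with one fact table and three dimension tables, using SQLite. All table and column names are invented for the example.

    ```python
    # A minimal star schema of the kind the study describes: one fact table
    # referencing several dimension tables. Names are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dim_time     (time_id INTEGER PRIMARY KEY, year INT, month INT);
    CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);

    CREATE TABLE fact_sales (
        time_id     INTEGER REFERENCES dim_time(time_id),
        product_id  INTEGER REFERENCES dim_product(product_id),
        customer_id INTEGER REFERENCES dim_customer(customer_id),
        quantity    INTEGER,
        revenue     REAL
    );
    """)

    # A typical decision-support query: revenue per region per year.
    query = """
    SELECT c.region, t.year, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_id = c.customer_id
    JOIN dim_time t     ON f.time_id = t.time_id
    GROUP BY c.region, t.year;
    """
    print(conn.execute(query).fetchall())   # empty until the tables are loaded
    ```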

  20. Automatic generation of warehouse mediators using an ontology engine

    Energy Technology Data Exchange (ETDEWEB)

    Critchlow, T., LLNL

    1998-04-01

    Data warehouses created for dynamic scientific environments, such as genetics, face significant challenges to their long-term feasibility. One of the most significant of these is the high frequency of schema evolution resulting from both technological advances and scientific insight. Failure to quickly incorporate these modifications will quickly render the warehouse obsolete, yet each evolution requires significant effort to ensure the changes are correctly propagated. DataFoundry utilizes a mediated warehouse architecture with an ontology infrastructure to reduce the maintenance requirements of a warehouse. Among other things, the ontology is used as an information source for automatically generating mediators, the methods that transfer data between the data sources and the warehouse. The identification, definition and representation of the metadata required to perform this task is a primary contribution of this work.
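
    The following sketch illustrates the general idea of generating mediators from a declarative mapping, with a plain Python dictionary standing in for the ontology. The mapping and field names are hypothetical; DataFoundry's actual ontology and mediator generation are far richer.

    ```python
    # Sketch of the mediator-generation idea: a declarative source-to-warehouse
    # mapping (standing in for the ontology) is used to generate the function
    # that transforms source records. Fields and mapping are hypothetical.
    def make_mediator(mapping):
        """Build a mediator that renames/transforms source fields per `mapping`.

        mapping: dict of warehouse_field -> (source_field, transform)
        """
        def mediator(source_record):
            return {wh_field: transform(source_record[src_field])
                    for wh_field, (src_field, transform) in mapping.items()}
        return mediator

    # When the source schema evolves, only the mapping is regenerated,
    # not hand-written mediator code.
    gene_mapping = {
        "gene_symbol": ("symbol", str.upper),
        "length_bp":   ("seq_len", int),
    }
    mediate = make_mediator(gene_mapping)
    print(mediate({"symbol": "brca1", "seq_len": "81189"}))
    ```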

  1. Automated and connected vehicle (AV/CV) test bed to improve transit, bicycle, and pedestrian safety : concept of operations plan.

    Science.gov (United States)

    2017-02-01

    This document presents the Concept of Operations (ConOps) Plan for the Automated and Connected Vehicle (AV/CV) Test Bed to Improve Transit, Bicycle, and Pedestrian Safety. As illustrated in Figure 1, the plan presents the overarching vision and goals...

  2. Protection of warehouses and plants under capacity constraint

    International Nuclear Information System (INIS)

    Bricha, Naji; Nourelfath, Mustapha

    2015-01-01

    While warehouses may be subjected to less protection effort than plants, their unavailability may have a substantial impact on supply chain performance. This paper presents a method for the protection of plants and warehouses against intentional attacks in the context of the capacitated plant and warehouse location and capacity acquisition problem. A non-cooperative two-period game is developed to find the equilibrium solution and the optimal defender strategy under capacity constraints. The defender invests in the first period to minimize the expected damage, and the attacker moves in the second period to maximize the expected damage. Extra capacity of neighboring functional plants and warehouses is used after attacks to satisfy all customers' demand and to avoid backorders. The contest success function is used to evaluate the success probability of an attack on plants and warehouses. A numerical example is presented to illustrate an application of the model. The defender strategy obtained by our model is compared to the case where warehouses are subjected to less protection effort than the plants. This comparison allows us to measure how much better our method is, and illustrates the effect of direct investments in protection and of indirect protection by warehouse extra capacity in reducing the expected damage. - Highlights: • Protection of warehouses and plants against intentional attacks. • Capacitated plant and warehouse location and capacity acquisition problem. • A non-cooperative two-period game between the defender and the attacker. • A method to evaluate the utilities and determine the optimal defender strategy. • Using warehouse extra-capacities to reduce the expected damage
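
    For readers unfamiliar with the contest success function mentioned in the record, the sketch below shows the standard ratio form often used in defender-attacker models; whether the paper uses exactly this form and these parameters is an assumption here.

    ```python
    # Standard ratio-form contest success function often used in such
    # defender-attacker models: probability that an attack succeeds given
    # attacker effort t, defender investment T, and contest intensity m.
    # The numbers below are arbitrary; the paper's full model adds
    # capacities, locations, and a two-period game on top of this primitive.
    def attack_success_prob(t, T, m=1.0):
        if t == 0 and T == 0:
            return 0.5            # conventional tie-break when neither side invests
        return t**m / (t**m + T**m)

    # Expected damage for one warehouse: success probability times the loss,
    # where the loss would come from unmet demand after using extra capacity.
    t, T, loss = 2.0, 3.0, 100.0
    print(attack_success_prob(t, T) * loss)   # -> 40.0
    ```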

  3. The Data Warehouse: Keeping It Simple. MIT Shares Valuable Lessons Learned from a Successful Data Warehouse Implementation.

    Science.gov (United States)

    Thorne, Scott

    2000-01-01

    Explains why the data warehouse is important to the Massachusetts Institute of Technology community, describing its basic functions and technical design points; sharing some non-technical aspects of the school's data warehouse implementation that have proved to be important; examining the importance of proper training in a successful warehouse…

  4. Storage of hazardous substances in bonded warehouses

    International Nuclear Information System (INIS)

    Villalobos Artavia, Beatriz

    2008-01-01

    A variety of special regulations exist in Costa Rica for the registration and transport of hazardous substances; these set the requirements for entry into the country and for the security of transport units. However, the regulations mention no specific rules for storing hazardous substances. Bonded warehouses (tax deposits) have been the initial place where substances entering the country are stored. Basic rules regulating the storage of hazardous substances have been created through the analysis of national and international regulations and laws governing hazardous substances. The regulatory coverage that currently exists was established through field research in bonded warehouses in the metropolitan area. The storage and security measures used by the personnel handling the substances were identified and compared with the reality of how hazardous substances are handled in bonded warehouses. On this basis, a rule base for the storage of hazardous substances in bonded warehouses can be drawn up, protecting the safety of the environment in which they are handled and preventing accidents and the resulting damage. The rules specify the characteristics of warehouses storing hazardous substances, such as safety standards, labeling standards, infrastructure features, common storage, and the transitional measures that all bonded warehouses must satisfy in order to store hazardous substances. (author) [es]

  5. Key Performance Indicators to Measure Improvement After Implementation of Total Laboratory Automation Abbott Accelerator a3600.

    Science.gov (United States)

    Miler, Marijana; Nikolac Gabaj, Nora; Dukic, Lora; Simundic, Ana-Maria

    2017-12-27

    The aim of the study was to estimate the improvement of work efficiency in the laboratory after implementation of total laboratory automation (TLA) with the Abbott Accelerator a3600, by measuring different key performance indicators (KPIs) before and after TLA implementation. A further objective was to recommend steps for defining KPIs in other laboratories. To evaluate the improvement, 10 organizational and/or technical KPIs were defined for all phases of laboratory work and measured before (November 2013) and after (from 2015 to 2017) TLA implementation. Out of 10 defined KPIs, 9 were successfully measured and significantly improved. Waiting time for registration of samples in the LIS was significantly reduced from 16 (9-28) to 9 (6-16) minutes after TLA (P < ...). Tests performed at the core biochemistry analyzers significantly reduced the walking distance for sample management (by more than 800 m per worker) and the number of tube touches (by almost 50%). Analyzer downtime and the engagement time for analyzer maintenance were reduced by 50 h and 28 h per month, respectively. TLA eliminated manual dilution of samples with extreme results, with sigma values increasing from 3.4 to >6 after TLA. Although the median turnaround time (TAT) for potassium and troponin was higher (by approximately 20 min), the number of outliers with TAT >60 min, expressed as sigma values, was satisfactory (>3). Implementation of TLA improved most of the processes in our laboratory, with 9 out of 10 properly defined and measured KPIs. With proper planning and definition of KPIs, every laboratory could measure changes in its daily workflow.
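
    As background on the sigma values quoted in the record, the sketch below converts a defect rate into a process sigma using the common 1.5-sigma-shift convention; whether the study used this exact convention is an assumption.

    ```python
    # How a defect rate maps to a process sigma value like those quoted above.
    # The 1.5-sigma shift is the usual Six Sigma convention; whether the study
    # applied it is an assumption here.
    from scipy.stats import norm

    def sigma_value(defects, opportunities, shift=1.5):
        yield_rate = 1 - defects / opportunities
        return norm.ppf(yield_rate) + shift

    # e.g. roughly 1 missed manual dilution per 100 samples before automation:
    print(round(sigma_value(defects=1, opportunities=100), 2))   # ~3.83
    ```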

  6. [Development of a microbiology data warehouse (Akita-ReNICS) for networking hospitals in a medical region].

    Science.gov (United States)

    Ueki, Shigeharu; Kayaba, Hiroyuki; Tomita, Noriko; Kobayashi, Noriko; Takahashi, Tomoe; Obara, Toshikage; Takeda, Masahide; Moritoki, Yuki; Itoga, Masamichi; Ito, Wataru; Ohsaga, Atsushi; Kondoh, Katsuyuki; Chihara, Junichi

    2011-04-01

    The active involvement of hospital laboratories in surveillance is crucial to the success of nosocomial infection control. The recent dramatic increase of antimicrobial-resistant organisms and their spread into the community suggest that the infection control strategies of independent medical institutions are insufficient. To share clinical data and surveillance in our local medical region, we developed a microbiology data warehouse for networking hospital laboratories in Akita prefecture. This system, named Akita-ReNICS, is an easy-to-use information management system designed to compare, track, and report the occurrence of antimicrobial-resistant organisms. Participating laboratories routinely transfer their coded and formatted microbiology data from their health care systems' clinical computer applications to the ReNICS server located at Akita University Hospital over the internet. We established the system to automate the statistical processes, so that participants can access the server to monitor graphical data in the manner they prefer, using their own computer's browser. Furthermore, our system also provides a document server, a microbiology and antimicrobial database, and space for long-term storage of microbiological samples. Akita-ReNICS could be a next-generation network for quality improvement of infection control.

  7. Benchmarking distributed data warehouse solutions for storing genomic variant information

    Science.gov (United States)

    Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.

    2017-01-01

    Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not been sufficiently explored so far in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management Systems (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with a large generated set of genomic variants and phenotypic data. Next, we benchmarked the performance of a number of combinations of distributed storage formats and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) was used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet with Spark 2 query engines provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where a low-latency response is expected, while still offering decent performance for running analytical queries. In summary, research and clinical applications that require ...
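
    To make the benchmarked workload concrete, here is a minimal genome-range query over a Parquet-backed variant table using Spark SQL. The file path, column names, and sample identifier are hypothetical, and this is only a sketch of the kind of query the study times.

    ```python
    # A minimal version of the kind of genome-range query benchmarked above,
    # run with Spark over Parquet. Paths and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("variant-warehouse").getOrCreate()

    variants = spark.read.parquet("/data/variants.parquet")
    variants.createOrReplaceTempView("variants")

    # Simple range query: variants of one sample in a region of chromosome 1.
    result = spark.sql("""
        SELECT sample_id, chrom, pos, ref, alt
        FROM variants
        WHERE chrom = '1' AND pos BETWEEN 1000000 AND 2000000
          AND sample_id = 'S0001'
    """)
    result.show()
    ```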

  8. An Automated Inpatient Split-dose Bowel Preparation System Improves Colonoscopy Quality and Reduces Repeat Procedures.

    Science.gov (United States)

    Yadlapati, Rena; Johnston, Elyse R; Gluskin, Adam B; Gregory, Dyanna L; Cyrus, Rachel; Werth, Lindsay; Ciolino, Jody D; Grande, David P; Keswani, Rajesh N

    2017-07-19

    Inpatient colonoscopy preparations are often inadequate, compromising patient safety and procedure quality, while resulting in greater hospital costs. The aims of this study were to: (1) design and implement an electronic inpatient split-dose bowel preparation order set; (2) assess the intervention's impact upon preparation adequacy, repeated colonoscopies, hospital days, and costs. We conducted a single center prospective pragmatic quasiexperimental study of hospitalized adults undergoing colonoscopy. The experimental intervention was designed using DMAIC (define, measure, analyze, improve, and control) methodology. Prospective data collected over 12 months were compared with data from a historical preintervention cohort. The primary outcome was bowel preparation quality and secondary outcomes included number of repeated procedures, hospital days, and costs. On the basis of a Delphi method and DMAIC process, we created an electronic inpatient bowel preparation order set inclusive of a split-dose bowel preparation algorithm, automated orders for rescue medications, and nursing bowel preparation checks. The analysis data set included 969 patients, 445 (46%) in the postintervention group. The adequacy of bowel preparation significantly increased following intervention (86% vs. 43%; P<0.01) and proportion of repeated procedures decreased (2.0% vs. 4.6%; P=0.03). Mean hospital days from bowel preparation initiation to discharge decreased from 8.0 to 6.9 days (P=0.02). The intervention resulted in an estimated 1-year cost-savings of $46,076 based on a reduction in excess hospital days associated with repeated and delayed procedures. Our interdisciplinary initiative targeting inpatient colonoscopy preparations significantly improved quality and reduced repeat procedures, and hospital days. Other institutions should consider utilizing this framework to improve inpatient colonoscopy value.

  9. Improved materials management through client/server computing

    International Nuclear Information System (INIS)

    Brooks, D.; Neilsen, E.; Reagan, R.; Simmons, D.

    1992-01-01

    This paper reports that materials management and procurement impacts every organization within an electric utility from power generation to customer service. An efficient material management and procurement system can help improve productivity and minimize operating costs. It is no longer sufficient to simply automate materials management using inventory control systems. Smart companies are building centralized data warehouses and use the client/server style of computing to provide real time data access. This paper describes how Alabama Power Company, Southern Company Services and Digital Equipment Corporation transformed two existing applications, a purchase order application within DEC's ALL-IN-1 environment and a materials management application within an IBM CICS environment, into a data warehouse - client/server application. An application server is used to overcome incompatibilities between computing environments and provide easy, real-time access to information residing in multi-vendor environments

  10. The pre-positioning of warehouses at regional and local levels for a humanitarian relief organisation

    OpenAIRE

    Roh, Saeyeon; Pettit, Stephen John; Harris, Irina; Beresford, Anthony Kenneth Charles

    2015-01-01

    Using pre-positioned warehouses at strategic locations around the world is an approach commonly taken by some humanitarian relief organisations to improve their capacities to deliver sufficient relief aid within a relatively short time frame, and to provide shelter and assistance to disaster victims. Although research into the facility location problem is extensive in both theory and application, such approaches have received little attention from the humanitarian relief perspective. In this ...

  11. An automated hand hygiene compliance system is associated with improved monitoring of hand hygiene.

    Science.gov (United States)

    McCalla, Saungi; Reilly, Maggie; Thomas, Rowena; McSpedon-Rai, Dawn

    2017-05-01

    Consistent hand hygiene is key to reducing health care-associated infections (HAIs), and assessing compliance with hand hygiene protocols is vital for hospital infection control staff. A new automated hand hygiene compliance system (HHCS) was trialed as an alternative to human observers in an intensive care unit and an intensive care stepdown unit at a hospital facility in the northeastern United States. Using a retrospective cohort design, researchers investigated whether implementation of the HHCS resulted in improved hand hygiene compliance and a reduction in common HAI rates. Pearson χ² tests were used to assess changes in compliance, and incidence rate ratios were used to test for significant differences in infection rates. During the study period, the HHCS collected many more hand hygiene events than human observers (632,404 vs 480) and ensured that the hospital met its compliance goals (95%+). Although decreases in rates of multidrug-resistant organisms, central line-associated bloodstream infections, and catheter-associated urinary tract infections were observed, the differences were not significant. Human hand hygiene observers may not report accurate measures of compliance. The HHCS is a promising new tool for fine-grained assessment of hand hygiene compliance. Further study is needed to examine the association between the HHCS and HAI rate reduction.
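
    As an illustration of the Pearson chi-square comparison the study describes, the sketch below runs a test on a 2x2 table of compliant versus non-compliant events before and after automation. The counts are hypothetical.

    ```python
    # The kind of Pearson chi-square comparison used in the study, on
    # hypothetical counts of compliant / non-compliant hand hygiene events.
    from scipy.stats import chi2_contingency

    #          compliant, non-compliant
    table = [[ 45_000,  5_000],    # human observation period (hypothetical)
             [600_000, 32_000]]    # automated HHCS period (hypothetical)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.3g}, dof = {dof}")
    ```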

  12. Automation of information decision support to improve e-learning resources quality

    Directory of Open Access Journals (Sweden)

    A.L. Danchenko

    2013-06-01

    Purpose. Given the active development of e-learning, the high quality of e-learning resources is very important. Ensuring high quality of e-learning resources in a situation of mass higher education and rapid obsolescence of information requires the automation of decision support, achieved here by developing a decision support system. Methodology. The problem is solved by methods of artificial intelligence. The knowledge base and information structure of the decision support system, based on a frame model of knowledge representation, and the production rules for inference are developed. Findings. Based on an analysis of life-cycle processes and requirements for e-learning resource quality, the information model of the knowledge base structure, the inference rules for automatically generating recommendations, and a software implementation are developed. Practical value. It is established that the basic quality requirements are performance, validity, reliability and manufacturability. It is shown that using the software implementation of the decision support system on the studied courses yields a growth in quality according to the complex quality criteria. The information structure of the knowledge base of the decision support system and the inference rules can be used by methodologists and content developers of learning systems.
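
    To illustrate what production-rule inference of the kind described here can look like, below is a toy forward-chaining engine that fires if-then rules until no new recommendations are produced. The facts and rules are invented; the paper's frame-based knowledge base is richer.

    ```python
    # A toy forward-chaining production-rule engine of the general kind the
    # paper describes (frames plus if-then rules); facts and rules invented.
    rules = [
        # (condition over the fact set, fact to add)
        (lambda f: "low_test_scores" in f and "course_updated_recently" not in f,
         "recommend_content_revision"),
        (lambda f: "high_dropout_rate" in f,
         "recommend_usability_review"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:                       # fire rules until a fixed point
            changed = False
            for condition, conclusion in rules:
                if condition(facts) and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"low_test_scores", "high_dropout_rate"}))
    ```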

  13. Improving the automated optimization of profile extrusion dies by applying appropriate optimization areas and strategies

    Science.gov (United States)

    Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.

    2014-05-01

    The rheological design of profile extrusion dies is one of the most challenging tasks in die design. As no analytical solution is available, the quality and the development time for a new design depend heavily on the empirical knowledge of the die manufacturer. Usually, prior to starting production, several time-consuming, iterative running-in trials need to be performed to check the profile accuracy, and the die geometry is reworked. An alternative is numerical flow simulation. These simulations make it possible to calculate the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization that allow for possible deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow channel geometry. As an example, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically and simulation results are qualitatively compared with experimental results. Furthermore, the strategy is applied to a complex extrusion die for a floor skirting profile to prove its general adaptability.

  14. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Science.gov (United States)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  15. Warehouse order-picking process. Order-picker routing problem

    Directory of Open Access Journals (Sweden)

    E. V. Korobkov

    2015-01-01

    This article continues the "Warehouse order-picking process" cycle and describes the order-picker routing sub-problem of the warehouse order-picking process. It draws analogies between the order-picker routing problem and the traveling salesman problem, shows the differences between the standard statement of the traveling salesman problem and the routing problem of warehouse order-pickers, and gives the corresponding Steiner traveling salesman problem statement. The warehouse layout with a typical order is represented by a graph, some of whose vertices correspond to mandatory order-picker visits while others are optional. The paper describes the optimal Ratliff-Rosenthal algorithm for solving the order-picker routing problem in single-block warehouses, i.e., warehouses with only two cross aisles, and defines seven equivalence classes of partial routing subgraphs and five transitions used to obtain an optimal order-picker routing subgraph. An extension of the optimal Ratliff-Rosenthal order-picker routing algorithm to multi-block warehouses is presented, and reasons for using routing heuristics instead of exact optimal algorithms are given. The paper offers algorithmic descriptions of the following seven routing heuristics: S-shaped, return, midpoint, largest gap, aisle-by-aisle, composite, and combined, as well as a modification of the combined heuristic. A comparison of order-picker routing heuristics for one- and two-block warehouses will be given in the next article of the "Warehouse order-picking process" cycle.
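
    As a concrete illustration of the simplest of the listed heuristics, the sketch below implements an S-shaped (traversal) route for a single-block warehouse: every aisle containing a pick is fully traversed, in alternating directions. The warehouse model is deliberately minimal and ignores the return trip to the depot.

    ```python
    # Compact version of the S-shaped (traversal) routing heuristic described
    # above: the picker fully traverses, in alternating directions, every
    # aisle that contains at least one pick. Single-block warehouse assumed.
    def s_shaped_route(aisles_with_picks):
        """Return the aisle visit order with a traversal direction for each.

        aisles_with_picks: aisle indices containing at least one pick.
        """
        route = []
        direction = "up"                   # enter the first aisle from the front
        for aisle in sorted(aisles_with_picks):
            route.append((aisle, direction))
            direction = "down" if direction == "up" else "up"
        return route

    # Picks in aisles 1, 4 and 5: traverse 1 upward, 4 downward, 5 upward.
    print(s_shaped_route({4, 1, 5}))  # -> [(1, 'up'), (4, 'down'), (5, 'up')]
    ```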

  16. Use of automated reminder letters to improve diabetes management in primary care: outcomes of a quality improvement initiative.

    Science.gov (United States)

    Berryman, Sally H; Sick, Brian T; Wang, Qi; Swan, Paul J; Weber-Main, Anne Marie

    2013-01-01

    Effective management of patients with diabetes mellitus (DM) can be time-consuming and costly. One patient-centred quality improvement strategy is to generate reminder letters to prompt patient action(s), but this strategy's effect on DM outcomes is uncertain. To determine whether using the electronic medical record to automatically generate reminder letters for patients not meeting recommended DM targets is associated with improvement in practice-level quality metrics for DM management. Over 15 months, letters were sent monthly to all patients with DM in a large, urban, primary care teaching practice whose records for haemoglobin A1c (HbA1c), low-density lipoprotein (LDL) or blood pressure (BP) indicated non-compliance with recommended levels and testing intervals. Logistic regression was used to analyse cross-sectional, practice-level differences in the proportion of patients meeting DM quality metrics (HbA1c < 7%, LDL < 100 mg/dl and BP < 130/80 mmHg; rates of checking each value within the last 12 months; and a composite of these five measures) across four time points: six months before the intervention, start of the intervention, end of the 15-month intervention period and six months after the intervention. The number of letters sent per month ranged from 284 to 392, representing 28-38% of all patients with DM. At the end of the intervention, patients' odds of being at goal were higher than before the intervention began for LDL < 100 mg/dl, and for HbA1c and LDL tested once within the last 12 months (OR 1.24, P = 0.005; OR 1.35, P = 0.03; OR 1.48, P < 0.001, respectively). Post intervention, declines were seen in LDL checked within the last 12 months (OR 0.76, P = 0.003) and in the composite endpoint (OR 0.78, P = 0.005). The automated patient-reminder letter intervention was associated with modest improvements in several, but not all, DM measures. This approach may be an effective tool for improving quality of care for patients with DM.
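
    For readers unfamiliar with the odds ratios (OR) reported above, the sketch below shows how an OR is computed from a 2x2 table; the counts are hypothetical, not the study's.

    ```python
    # How an odds ratio like those reported above is obtained from a 2x2 table.
    # Counts are hypothetical (patients at LDL goal, before vs. after letters).
    at_goal_after, not_goal_after   = 310, 690
    at_goal_before, not_goal_before = 265, 735

    odds_after  = at_goal_after / not_goal_after
    odds_before = at_goal_before / not_goal_before
    print(f"OR = {odds_after / odds_before:.2f}")   # ~1.25
    ```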

  17. Criticality calculation of the nuclear material warehouse of the ININ

    International Nuclear Information System (INIS)

    Garcia, T.; Angeles, A.; Flores C, J.

    2013-10-01

    In this work, the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ) were determined under both normal and accident conditions. The warehouse contains standard LEU 8.5/20 fuel elements, a control rod with a follower of the standard LEU 8.5/20 fuel type, LEU 30/20 fuel elements, and the fuel of the SUR-100 reactor. To check the subcritical state of the warehouse, the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)

  18. A PROPOSAL OF DATA QUALITY FOR DATA WAREHOUSES ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Leo Willyanto Santoso

    2006-01-01

    The quality of the data provided is critical to the success of data warehousing initiatives. There is strong evidence that many organisations have significant data quality problems, and that these have substantial social and economic impacts. This paper describes a study which explores modeling of the dynamic parts of the data warehouse. This metamodel enables data warehouse management, design and evolution based on a high-level conceptual perspective, which can be linked to the actual structural and physical aspects of the data warehouse architecture. Moreover, this metamodel is capable of modeling complex activities, their interrelationships, the relationship of activities with data sources, and execution details.

  19. Usage of data warehouse for analysing software's bugs

    Science.gov (United States)

    Živanov, Danijel; Krstićev, Danijela Boberić; Mirković, Duško

    2017-07-01

    We analysed the database schema of the Bugzilla system and, taking into account users' requirements for reporting, we present a dimensional model for a data warehouse to be used for reporting software defects. The idea proposed in this paper is not to throw away the Bugzilla system, which certainly has many strengths, but to integrate Bugzilla and the proposed data warehouse. Bugzilla would continue to be used for recording bugs that occur during the development and maintenance of software, while the data warehouse would be used for storing data on bugs in a form more suitable for analysis.

  20. The importance of data warehouses for physician executives.

    Science.gov (United States)

    Ruffin, M

    1994-11-01

    Soon, most physicians will begin to learn about data warehouses and clinical and financial data about their patients stored in them. What is a data warehouse? Why are we seeing their emergence in health care only now? How does a hospital, or group practice, or health plan acquire or create a data warehouse? Who should be responsible for it, and what sort of training is needed by those in charge of using it for the edification of the sponsoring organization? I'll try to answer these questions in this article.

  1. Selecting materialized views in a data warehouse

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Liu, Daxin

    2003-01-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so that a given set of queries can be answered. In this paper, we address this problem and design an algorithm to select a set of views to materialize in order to answer as many queries as possible under the constraint of a given space. The algorithm presented in this paper aims at finding a minimal set of views with which we can directly respond to as many user query requests as possible. We use experiments to demonstrate our approach, and the results show that our algorithm works well. We implemented our algorithms, and a performance study shows that the proposed algorithm gives lower complexity, higher speed, and feasible expandability.
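
    As an illustration of the kind of space-constrained selection the paper studies, here is a simple greedy sketch that repeatedly picks the view answering the most not-yet-covered queries per unit of space. The benefit function and inputs are simplified placeholders, not the authors' algorithm.

    ```python
    # Greedy sketch of the view-selection problem described above: pick views
    # that answer the most new queries per unit of space until the budget is
    # exhausted. Inputs and benefit function are simplified placeholders.
    def select_views(candidates, budget):
        """candidates: {view: (size, set_of_queries_answered)}; budget: max total size."""
        chosen, answered, used = [], set(), 0
        while True:
            best, best_score = None, 0.0
            for view, (size, queries) in candidates.items():
                if view in chosen or used + size > budget:
                    continue
                gain = len(queries - answered) / size   # new queries per unit space
                if gain > best_score:
                    best, best_score = view, gain
            if best is None:
                return chosen, answered
            chosen.append(best)
            answered |= candidates[best][1]
            used += candidates[best][0]

    views = {"v1": (10, {"q1", "q2"}), "v2": (4, {"q2", "q3"}), "v3": (8, {"q4"})}
    print(select_views(views, budget=12))   # -> (['v2', 'v3'], {'q2', 'q3', 'q4'})
    ```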

  2. From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction

    Science.gov (United States)

    Drnec, Kim; Marathe, Amar R.; Lukos, Jamie R.; Metcalfe, Jason S.

    2016-01-01

    Human automation interaction (HAI) systems have thus far failed to live up to expectations mainly because human users do not always interact with the automation appropriately. Trust in automation (TiA) has been considered a central influence on the way a human user interacts with an automation; if TiA is too high there will be overuse, if TiA is too low there will be disuse. However, even though extensive research into TiA has identified specific HAI behaviors, or trust outcomes, a unique mapping between trust states and trust outcomes has yet to be clearly identified. Interaction behaviors have been intensely studied in the domain of HAI and TiA and this has led to a reframing of the issues of problems with HAI in terms of reliance and compliance. We find the behaviorally defined terms reliance and compliance to be useful in their functionality for application in real-world situations. However, we note that once an inappropriate interaction behavior has occurred it is too late to mitigate it. We therefore take a step back and look at the interaction decision that precedes the behavior. We note that the decision neuroscience community has revealed that decisions are fairly stereotyped processes accompanied by measurable psychophysiological correlates. Two literatures were therefore reviewed. TiA literature was extensively reviewed in order to understand the relationship between TiA and trust outcomes, as well as to identify gaps in current knowledge. We note that an interaction decision precedes an interaction behavior and believe that we can leverage knowledge of the psychophysiological correlates of decisions to improve joint system performance. As we believe that understanding the interaction decision will be critical to the eventual mitigation of inappropriate interaction behavior, we reviewed the decision making literature and provide a synopsis of the state of the art understanding of the decision process from a decision neuroscience perspective. We forward

  3. Marketing automation processes as a way to improve contemporary marketing of a company

    OpenAIRE

    Witold Świeczak

    2013-01-01

    The main aim of this article is to identify the possibilities offered to contemporary companies by the processes included in a marketing automation system. This publication deals with the key aspects of this issue. It shows how the importance of the organization changes and how its value increases as a result of using the tools provided by the processes included in the concept of marketing automation. This article defines the factors and processes which influence ...

  4. Marketing automation processes as a way to improve contemporary marketing of a company

    Directory of Open Access Journals (Sweden)

    Witold Świeczak

    2013-09-01

    The main aim of this article is to identify the possibilities offered to contemporary companies by the processes included in a marketing automation system. This publication deals with the key aspects of this issue. It shows how the importance of the organization changes and how its value increases as a result of using the tools provided by the processes included in the concept of marketing automation. This article defines the factors and processes which influence the effective course of actions undertaken as part of marketing automation. The concept of marketing automation is a completely new reality: it abandons communication based on the mass distribution of uniform content in favour of genuinely personalized, individual and fully automated communication. It is a new kind of coexistence, in which the sales and marketing departments cooperate closely to achieve the best result, and a situation in which marketing can clearly demonstrate its contribution to the income generated by the company. Marketing automation also brings vast analytical possibilities and a real increase in a company's value: the added value generated by the system as the source of information about clients and about all marketing and sales processes taking place in a company. The introduction of a marketing automation system alters not only the current functioning of the marketing department, but also marketers themselves. In fact, everything that a marketing automation system provides, including above all the accumulated, unique knowledge of the client, is a critical marketing asset of every modern enterprise.

  5. Improvement of the banana "Musa acuminata" reference sequence using NGS data and semi-automated bioinformatics methods.

    Science.gov (United States)

    Martin, Guillaume; Baurens, Franc-Christophe; Droc, Gaëtan; Rouard, Mathieu; Cenci, Alberto; Kilian, Andrzej; Hastie, Alex; Doležel, Jaroslav; Aury, Jean-Marc; Alberti, Adriana; Carreel, Françoise; D'Hont, Angélique

    2016-03-16

    Recent advances in genomics indicate the functional significance of a majority of genome sequences and their long-range interactions. As a detailed examination of genome organization and function requires a very high quality genome sequence, the objective of this study was to improve the reference genome assembly of banana (Musa acuminata). We have developed a modular bioinformatics pipeline to improve genome sequence assemblies, which can handle various types of data. The pipeline comprises several semi-automated tools. However, unlike classical automated tools that are based on global parameters, the semi-automated tools propose an expert mode in which a user can decide on suggested improvements through local compromises. The pipeline was used to improve the draft genome sequence of Musa acuminata. Genotyping by sequencing (GBS) of a segregating population and paired-end sequencing were used to detect and correct scaffold misassemblies. Long-insert-size paired-end reads identified scaffold junctions and fusions missed by automated assembly methods. GBS markers were used to anchor scaffolds to pseudo-molecules with a new bioinformatics approach that avoids the tedious step of marker ordering during genetic map construction. Furthermore, a genome map was constructed and used to assemble scaffolds into super-scaffolds. Finally, a consensus gene annotation was projected onto the new assembly from two pre-existing annotations. This approach reduced the total Musa scaffold number from 7513 to 1532 (i.e. by 80%), with an N50 that increased from 1.3 Mb (65 scaffolds) to 3.0 Mb (26 scaffolds). 89.5% of the assembly was anchored to the 11 Musa chromosomes, compared to the previous 70%. Unknown sites (N) were reduced from 17.3% to 10.0%. The release of the Musa acuminata reference genome version 2 provides a platform for detailed analysis of banana genome variation, function and evolution. The bioinformatics tools developed in this work can be used to improve genome sequence assemblies in ...
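
    Since the record quotes N50 improvements, here is the standard computation of that statistic for reference; the scaffold lengths are hypothetical.

    ```python
    # The N50 statistic quoted above, computed from scratch: the length of the
    # scaffold at which the running total first covers half of the assembly.
    def n50(scaffold_lengths):
        lengths = sorted(scaffold_lengths, reverse=True)
        half, running = sum(lengths) / 2, 0
        for length in lengths:
            running += length
            if running >= half:
                return length

    # Hypothetical scaffold lengths (bp):
    print(n50([5_000_000, 3_000_000, 2_000_000, 1_000_000, 500_000]))  # -> 3000000
    ```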

  6. Automation through the PIP [Program Implementation Plan] concurrence system improves information sharing among DOE [Dept. of Energy] managers

    International Nuclear Information System (INIS)

    Imholz, R.M.; Berube, D.S.; Peterson, J.L.

    1990-01-01

    The Program Implementation Plan (PIP) Concurrence System is designed to improve information sharing within the U.S. Department of Energy (DOE) and between DOE and the Field. Effectively sharing information enables DOE managers to make more informed, effective decisions. The PIP Concurrence System improved information sharing among DOE managers by defining the automated process for concurring on a DOE document, thus reducing the time required to concur on the document by 75%. The first step in defining an automated process is to structure the process for concurring on a document. Only those DOE managers with approved access could review certain parts of a document on a concurrence system. Remember that the concurrence process is a sign off procedure unlike a commentary process in which comments may not be restricted to certain people. The commentary process is the beginning of the concurrence process. The commentary process builds a document; the concurrence process approves it. 6 refs., 7 figs

  7. Statewide Transportation Engineering Warehouse for Archived Regional Data (STEWARD).

    Science.gov (United States)

    2009-12-01

    This report documents Phase III of the development and operation of a prototype for the Statewide Transportation Engineering Warehouse for Archived Regional Data (STEWARD). It reflects the progress on the development and operation of STEWARD since ...

  8. [Implementation of a Data Warehouse in the Marketing Division of a Higher Education Institution]

    Directory of Open Access Journals (Sweden)

    Eka Miranda

    2012-06-01

    Transactional data are widely held by higher education institutions, but the use of these data to support decision making has not functioned maximally. Higher education institutions therefore need analysis tools to maximize decision-making processes. Based on this issue, a data warehouse was designed to: (1) store large amounts of data; (2) potentially gain new perspectives on distributed data; (3) provide reports and answers to users' ad hoc questions; (4) perform analysis of external conditions and transactional data from the marketing activities of universities, since marketing is both a supporting field and the cutting edge of higher education institutions. The methods used to design and implement the data warehouse are analysis of records related to the marketing activities of higher education institutions and data warehouse design. This study results in a data warehouse design and its implementation for analyzing external data and transactional data from the marketing activities of universities to support decision making.

  9. Contextual snowflake modelling for pattern warehouse logical design

    Indian Academy of Sciences (India)

    ... being managed by the pattern warehouse management system (PWMS). ... The authors pointed out the necessity of finding the relationship between patterns. ... (i) Some customer queries can only be satisfied by a specific DM technique.

  10. Capturing Complex Multidimensional Data in Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Pedersen, Torben Bach

    2004-01-01

    Motivated by the increasing need to handle complex multidimensional data in location-based data warehouses, this paper proposes a powerful data model that is able to capture the complexities of such data. The model provides a foundation for handling complex transportation infrastructures ...

  11. Outpatient health care statistics data warehouse--implementation.

    Science.gov (United States)

    Zilli, D

    1999-01-01

    Data warehouse implementation is assumed to be a very knowledge-demanding, expensive and long-lasting process. As such, it requires senior management sponsorship, the involvement of experts, a big budget and probably years of development time. The presented Outpatient Health Care Statistics Data Warehouse implementation research provides ample evidence against the infallibility of the above statements. New, inexpensive, but powerful technology, which provides an outstanding platform for On-Line Analytical Processing (OLAP), has emerged recently. Presumably, it will be the basis for the estimated future growth of the data warehouse market, both in the medical field and in other business fields. Methods and tools for building, maintaining and exploiting data warehouses are also briefly discussed in the paper.

  12. Minimizing Warehouse Space with a Dedicated Storage Policy

    Directory of Open Access Journals (Sweden)

    Andrea Fumi

    2013-07-01

    ... inevitably be supported by warehouse management system software. On the contrary, the proposed methodology relies upon a dedicated storage policy, which is easily implementable by companies of all sizes without the need for investing in expensive IT tools.

  13. Improvement of Automated POST Case Success Rate Using Support Vector Machines

    Science.gov (United States)

    Zwack, Matthew R.; Dees, Patrick D.

    2017-01-01

    During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph theory based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately affects the ability of the random reps method to confidently approach the global optima for a given vehicle case in a reasonable amount of time. With only

  14. AST: an automated sequence-sampling method for improving the taxonomic diversity of gene phylogenetic trees.

    Science.gov (United States)

    Zhou, Chan; Mao, Fenglou; Yin, Yanbin; Huang, Jinling; Gogarten, Johann Peter; Xu, Ying

    2014-01-01

    A challenge in phylogenetic inference of gene trees is how to properly sample a large pool of homologous sequences to derive a good representative subset of sequences. Such a need arises in various applications, e.g. when (1) accuracy-oriented phylogenetic reconstruction methods may not be able to deal with a large pool of sequences due to their high demand in computing resources; (2) applications analyzing a collection of gene trees may prefer to use trees with fewer operational taxonomic units (OTUs), for instance for the detection of horizontal gene transfer events by identifying phylogenetic conflicts; and (3) the pool of available sequences is biased towards extensively studied species. In the past, the creation of subsamples often relied on manual selection. Here we present an Automated sequence-Sampling method for improving the Taxonomic diversity of gene phylogenetic trees, AST, to obtain representative sequences that maximize the taxonomic diversity of the sampled sequences. To demonstrate the effectiveness of AST, we have tested it to solve four problems, namely, inference of the evolutionary histories of the small ribosomal subunit protein S5 of E. coli, 16 S ribosomal RNAs and glycosyl-transferase gene family 8, and a study of ancient horizontal gene transfers from bacteria to plants. Our results show that the resolution of our computational results is almost as good as that of manual inference by domain experts, hence making the tool generally useful to phylogenetic studies by non-phylogeny specialists. The program is available at http://csbl.bmb.uga.edu/~zhouchan/AST.php.
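
    To give a flavor of diversity-maximizing subsampling, the sketch below greedily picks sequences whose taxa are least represented so far. AST itself works on phylogenetic structure rather than flat taxon labels, so this is only an illustration.

    ```python
    # Greedy sketch in the spirit of AST's goal: choose a subset of sequences
    # that maximizes taxonomic diversity. Each sequence carries a taxon label
    # and we repeatedly take a sequence whose taxon is least represented.
    from collections import Counter

    def diverse_sample(seq_taxa, k):
        """seq_taxa: dict seq_id -> taxon; returns up to k sequence ids."""
        chosen, counts = [], Counter()
        remaining = dict(seq_taxa)
        while remaining and len(chosen) < k:
            seq_id = min(remaining, key=lambda s: counts[remaining[s]])
            counts[remaining.pop(seq_id)] += 1
            chosen.append(seq_id)
        return chosen

    pool = {"s1": "E. coli", "s2": "E. coli",
            "s3": "B. subtilis", "s4": "Homo sapiens"}
    print(diverse_sample(pool, k=3))   # one sequence from each taxon first
    ```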

  16. Fully automated whole-head segmentation with improved smoothness and continuity, with theory reviewed.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Individualized current-flow models are needed for precise targeting of brain structures using transcranial electrical or magnetic stimulation (TES/TMS). The same is true for current-source reconstruction in electroencephalography and magnetoencephalography (EEG/MEG). The first step in generating such models is to obtain an accurate segmentation of individual head anatomy, including not only brain but also cerebrospinal fluid (CSF), skull and soft tissues, with a field of view (FOV) that covers the whole head. Currently available automated segmentation tools only provide results for brain tissues, have a limited FOV, and do not guarantee continuity and smoothness of tissues, which is crucially important for accurate current-flow estimates. Here we present a tool that addresses these needs. It is based on a rigorous Bayesian inference framework that combines image intensity model, anatomical prior (atlas) and morphological constraints using Markov random fields (MRF). The method is evaluated on 20 simulated and 8 real head volumes acquired with magnetic resonance imaging (MRI) at 1 mm3 resolution. We find improved surface smoothness and continuity as compared to the segmentation algorithms currently implemented in Statistical Parametric Mapping (SPM). With this tool, accurate and morphologically correct modeling of the whole-head anatomy for individual subjects may now be feasible on a routine basis. Code and data are fully integrated into SPM software tool and are made publicly available. In addition, a review on the MRI segmentation using atlas and the MRF over the last 20 years is also provided, with the general mathematical framework clearly derived.

  17. Fully automated whole-head segmentation with improved smoothness and continuity, with theory reviewed.

    Science.gov (United States)

    Huang, Yu; Parra, Lucas C

    2015-01-01

    Individualized current-flow models are needed for precise targeting of brain structures using transcranial electrical or magnetic stimulation (TES/TMS). The same is true for current-source reconstruction in electroencephalography and magnetoencephalography (EEG/MEG). The first step in generating such models is to obtain an accurate segmentation of individual head anatomy, including not only brain but also cerebrospinal fluid (CSF), skull and soft tissues, with a field of view (FOV) that covers the whole head. Currently available automated segmentation tools only provide results for brain tissues, have a limited FOV, and do not guarantee continuity and smoothness of tissues, which is crucially important for accurate current-flow estimates. Here we present a tool that addresses these needs. It is based on a rigorous Bayesian inference framework that combines image intensity model, anatomical prior (atlas) and morphological constraints using Markov random fields (MRF). The method is evaluated on 20 simulated and 8 real head volumes acquired with magnetic resonance imaging (MRI) at 1 mm3 resolution. We find improved surface smoothness and continuity as compared to the segmentation algorithms currently implemented in Statistical Parametric Mapping (SPM). With this tool, accurate and morphologically correct modeling of the whole-head anatomy for individual subjects may now be feasible on a routine basis. Code and data are fully integrated into SPM software tool and are made publicly available. In addition, a review on the MRI segmentation using atlas and the MRF over the last 20 years is also provided, with the general mathematical framework clearly derived.
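
    To make the role of the MRF smoothness prior concrete, the toy sketch below applies an iterated-conditional-modes (ICM) style update in which each voxel label trades off a per-class intensity/atlas cost against disagreement with its six neighbors (a Potts prior). It is a generic illustration under assumed inputs, not the SPM-integrated tool described above.

    ```python
    import numpy as np

    def icm_smooth(unary, beta=1.0, n_iter=5):
        """unary: (X, Y, Z, C) negative log-likelihood per voxel and class
        (assumed to come from an intensity/atlas model); beta: strength of
        the Potts smoothness prior. Returns (X, Y, Z) integer labels."""
        labels = unary.argmin(axis=-1)
        C = unary.shape[-1]
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        for _ in range(n_iter):
            # count, per voxel and class, how many of the 6 neighbors disagree
            # (boundary wrap-around from np.roll is ignored for brevity)
            disagree = np.zeros_like(unary)
            for c in range(C):
                same = (labels == c).astype(float)
                for off in offsets:
                    disagree[..., c] += 1.0 - np.roll(same, off, axis=(0, 1, 2))
            labels = (unary + beta * disagree).argmin(axis=-1)
        return labels
    ```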

  18. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA

    Energy Technology Data Exchange (ETDEWEB)

    Mareuil, Fabien [Institut Pasteur, Cellule d' Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)

    2015-08-15

    In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for the distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting, and (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.
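
    For orientation, a log-harmonic distance restraint of the kind referred to above is commonly written in the following form (our notation, a sketch of the usual log-normal-derived shape rather than ARIA's exact parameterization), where d is the model distance, d_0 the target distance, and the weight k is estimated from the data by the Bayesian scheme rather than fixed by hand:

    ```latex
    E_{\mathrm{lh}}(d) = k \left[ \ln\!\left( \frac{d}{d_0} \right) \right]^{2}
    ```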

  19. A Performance Evaluation of Online Warehouse Update Algorithms

    Science.gov (United States)

    1998-01-01

    able to present a fully consistent version of the warehouse to the queries while the warehouse is being updated. Multiversioning has been used...([LST97]). Specialized multiversion access structures have also been proposed ([LS89, LS90, dBS96, BC97, VV97, MOPW98]). In the context of OLTP systems...collection processes. 2.1 Multiversioning MVNL supports multiple versions by using Time Travel ([Sto87]). Each row has two extra attributes, Tmin

  20. Trauma Quality Improvement: Reducing Triage Errors by Automating the Level Assignment Process.

    Science.gov (United States)

    Stonko, David P; O Neill, Dillon C; Dennis, Bradley M; Smith, Melissa; Gray, Jeffrey; Guillamondegui, Oscar D

    2018-04-12

    Trauma patients are triaged by the severity of their injury or need for intervention while en route to the trauma center according to trauma activation protocols that are institution specific. Significant research has been aimed at improving these protocols in order to optimize patient outcomes while striving for efficiency in care. However, it is known that patients are often undertriaged or overtriaged because protocol adherence remains imperfect. The goal of this quality improvement (QI) project was to improve this adherence, and thereby reduce the triage error. It was conducted as part of the formal undergraduate medical education curriculum at this institution. A QI team was assembled and baseline data were collected, then 2 Plan-Do-Study-Act (PDSA) cycles were implemented sequentially. During the first cycle, a novel web tool was developed and implemented in order to automate the level assignment process (it takes EMS-provided data and automatically determines the level); the tool was based on the existing trauma activation protocol. The second PDSA cycle focused on improving triage accuracy in isolated, less than 10% total body surface area burns, which we identified to be a point of common error. Traumas were reviewed and tabulated at the end of each PDSA cycle, and triage accuracy was followed with a run chart. This study was performed at Vanderbilt University Medical Center and Medical School, which has a large level 1 trauma center covering over 75,000 square miles, and which sees urban, suburban, and rural trauma. The baseline assessment period and each PDSA cycle lasted 2 weeks. During this time, all activated, adult, direct traumas were reviewed. There were 180 patients during the baseline period, 189 after the first test of change, and 150 after the second test of change. All were included in analysis. Of 180 patients, 30 were inappropriately triaged during baseline analysis (3 undertriaged and 27 overtriaged) versus 16 of 189 (3 undertriaged and 13
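
    The web tool's core logic, as described, is a deterministic mapping from EMS-provided fields to an activation level. A hedged sketch of that kind of rule engine follows; every field name and threshold here is invented for illustration, since actual criteria are institution specific.

    ```python
    def assign_level(ems):
        """ems: dict of prehospital data, e.g. {'sbp': 85, 'gcs': 12,
        'airway_compromise': False, 'penetrating_torso': True, 'tbsa_burn': 0.05}.
        All criteria below are hypothetical placeholders, not the institution's protocol."""
        if (ems['sbp'] < 90 or ems['gcs'] <= 8
                or ems['airway_compromise'] or ems['penetrating_torso']):
            return 1          # highest-level activation
        if ems['gcs'] <= 13 or ems.get('long_bone_fractures', 0) >= 2:
            return 2
        # e.g. the isolated <10% TBSA burns singled out in the second PDSA cycle
        if 0 < ems.get('tbsa_burn', 0) < 0.10 and not ems.get('inhalation_injury', False):
            return 3
        return 3
    ```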

  1. Design and Applications of a Multimodality Image Data Warehouse Framework

    Science.gov (United States)

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  2. Application of Reflectance Transformation Imaging Technique to Improve Automated Edge Detection in a Fossilized Oyster Reef

    Science.gov (United States)

    Djuricic, Ana; Puttonen, Eetu; Harzhauser, Mathias; Dorninger, Peter; Székely, Balázs; Mandic, Oleg; Nothegger, Clemens; Molnár, Gábor; Pfeifer, Norbert

    2016-04-01

    The world's largest fossilized oyster reef is located in Stetten, Lower Austria; it was excavated during field campaigns of the Natural History Museum Vienna between 2005 and 2008. It is studied in paleontology to learn about climate change from past events. To support this study, a laser scanning and photogrammetric campaign was organized in 2014 for 3D documentation of the large and complex site. The 3D point clouds and high-resolution images from this field campaign were processed by photogrammetric methods into digital surface models (DSM, 1 mm resolution) and orthophotos (0.5 mm resolution) to help paleontological interpretation of the data. Due to the size of the reef, automated analysis techniques are needed to interpret all digital data obtained from the field. One of the key components in successful automation is the detection of oyster shell edges. We have tested Reflectance Transformation Imaging (RTI) to visualize the reef data sets for end-users through a cultural heritage viewing interface (RTIViewer). The implementation includes a Lambert shading method to visualize DSMs derived from terrestrial laser scanning using the scientific software OPALS. In contrast to conventional RTI, no hardware system of LED lights, or a rig to rotate the light source around the object, is needed. The gray value of a given shaded pixel is related to the angle between the light source and the surface normal at that position: brighter values correspond to slope surfaces facing the light source, and increasing the zenith angle results in internal shading all over the reef surface. In total, the oyster reef surface comprises 81 DSMs of 3 m x 2 m each. Their surface was illuminated by moving the virtual sun every 30 degrees (12 azimuth angles from 20-350) and every 20 degrees (4 zenith angles from 20-80). This technique provides paleontologists an interactive approach to virtually inspect the oyster reef, and to interpret the shell surface by changing the light source direction
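
    The shading rule described above is Lambert's cosine law applied to DSM normals. A minimal sketch follows, assuming a 2-D height array on a 1 mm grid (this is illustrative, not the OPALS implementation):

    ```python
    import numpy as np

    def lambert_shade(dsm, azimuth_deg, zenith_deg, cell=0.001):
        """dsm: 2-D height array (m); cell: grid spacing (m, here 1 mm)."""
        dzdy, dzdx = np.gradient(dsm, cell)            # slopes along rows (y) and cols (x)
        nx, ny, nz = -dzdx, -dzdy, np.ones_like(dsm)   # unnormalized surface normals
        norm = np.sqrt(nx**2 + ny**2 + nz**2)
        az, ze = np.radians(azimuth_deg), np.radians(zenith_deg)
        # unit vector pointing toward the virtual sun
        lx, ly, lz = np.sin(ze) * np.sin(az), np.sin(ze) * np.cos(az), np.cos(ze)
        shade = (nx * lx + ny * ly + nz * lz) / norm   # cosine of incidence angle
        return np.clip(shade, 0.0, 1.0)                # Lambertian: clamp back-facing slopes

    # e.g. the 48 illumination directions used for the reef:
    # for az in range(20, 351, 30):
    #     for ze in range(20, 81, 20):
    #         img = lambert_shade(dsm, az, ze)
    ```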

  3. Distribution automation

    International Nuclear Information System (INIS)

    Gruenemeyer, D.

    1991-01-01

    This paper reports on a Distribution Automation (DA) system, which enhances the efficiency and productivity of a utility. It also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in O and M requirements.

  4. Can Leader–Member Exchange Contribute to Safety Performance in An Italian Warehouse?

    Directory of Open Access Journals (Sweden)

    Marco G. Mariani

    2017-05-01

    Full Text Available Introduction: The research considers safety climate in a warehouse and analyzes the role of Leader–Member Exchange (LMX) with respect to safety performance. Griffin and Neal's safety model was adopted, and Leader–Member Exchange was inserted as a moderator in the relationships between safety climate and the proximal antecedents (motivation and knowledge) of the safety performance constructs (compliance and participation). Materials and Methods: Survey data were collected from a sample of 133 full-time employees in an Italian warehouse. The statistical framework of Hayes (2013) was adopted for moderated mediation analysis. Results: Proximal antecedents partially mediated the relationship between safety climate and safety participation, but not safety compliance. Moreover, the results from the moderation analysis showed that Leader–Member Exchange moderated the influence of safety climate on proximal antecedents, and the mediation exists only at the higher level of LMX. Conclusion: The study shows that the different aspects of leadership processes interact in explaining individual proficiency in safety practices. Practical Implications: Organizations such as warehouses should improve the quality of the relationship between a leader and a subordinate, based upon the dimensions of respect, trust, and obligation, to achieve a high level of safety performance.

  5. Gossip Management at Universities Using Big Data Warehouse Model Integrated with a Decision Support System

    Directory of Open Access Journals (Sweden)

    Pelin Vardarlier

    2016-01-01

    Full Text Available Big Data has recently been used for many purposes, such as medicine, marketing and sports, and it has helped improve management decisions. However, for almost every case a unique data warehouse must be built to benefit from the merits of data mining and Big Data; hence, each time we start from scratch to form and build a Big Data warehouse. In this study, we propose a Big Data warehouse and a model for universities to be used for information management, more specifically gossip management. The overall model is a decision support system that may help university administrations when they are making decisions and also provide them with information on the gossip being circulated among students and staff. In the model, unsupervised machine learning algorithms have been employed. A prototype of the proposed system is also presented in the study. User-generated data were collected from students in order to learn the gossip and the students' problems related to school, classes, staff and instructors. The findings and results of the pilot study suggest that social media messages among students may give important clues about happenings at school, and this information may be used for management purposes. The model may be developed and implemented not only by universities but also by other organisations.

  6. Implementation of Lean Warehouse to Minimize Wastes in Finished Goods Warehouse of PT Charoen Pokphand Indonesia Semarang

    Directory of Open Access Journals (Sweden)

    Nia Budi Puspitasari

    2016-03-01

    Full Text Available PT. Charoen Pokphand Indonesia Semarang is one of the largest poultry feed companies in Indonesia. To store the finished products that are ready to be distributed, it needs a finished goods warehouse. To minimize the wastes that occur in the process of warehousing the finished goods, the implementation of a lean warehouse is required. The core processes of the finished goods warehouse are putting bags that have been through the packing process onto pallets, transporting the pallets containing bags of feed into the finished goods warehouse, and unloading feed from the finished goods warehouse onto the distribution trucks. With the implementation of the lean warehouse, we can know whether the activities are value-added or not, and then identify which types of waste occur. Stakeholders' opinions regarding the wastes that must be eliminated first were determined by questionnaires. Based on the results of the questionnaires, the three top wastes were selected and their causes identified using a fishbone diagram. These can be remedied through the implementation of 5S, namely Seiri, Seiton, Seiso, Seiketsu, and Shitsuke. Defect waste can be minimized by selecting pallets, placing sacks correctly, clearing the forklift lines, applying working procedures, and creating a cleaning schedule. Next, overprocessing waste is minimized by removing unnecessary items, storing based on the date of manufacture, and planning feed manufacture. Inventory waste is minimized by removing junk, storing feed based on the expiry date, and cleaning the barn

  7. Snow-covered Landsat time series stacks improve automated disturbance mapping accuracy in forested landscapes

    Science.gov (United States)

    Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy B. Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen

    2011-01-01

    Accurate landscape-scale maps of forests and associated disturbances are critical to augment studies on biodiversity, ecosystem services, and the carbon cycle, especially in terms of understanding how the spatial and temporal complexities of damage sustained from disturbances influence forest structure and function. Vegetation change tracker (VCT) is a highly automated...

  8. Automated Information Security Will Not Improve until Effectively Supported by IRM.

    Science.gov (United States)

    Chick, Morey J.

    1989-01-01

    The first of two articles on the nature of the growing problem of automated information systems security, especially in the federal government, this article presents a brief history of the problem and describes the need for integrating security activities into overall policies and programs to help reduce system vulnerabilities and risks. (23…

  9. Towards an Improved Pilot-Vehicle Interface for Highly Automated Aircraft: Evaluation of the Haptic Flight Control System

    Science.gov (United States)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2012-01-01

    The control automation and interaction paradigm (e.g., manual, autopilot, flight management system) used on virtually all large highly automated aircraft has long been an exemplar of breakdowns in human factors and human-centered design. An alternative paradigm is the Haptic Flight Control System (HFCS) that is part of NASA Langley Research Center's Naturalistic Flight Deck Concept. The HFCS uses only stick and throttle for easily and intuitively controlling the actual flight of the aircraft without losing any of the efficiency and operational benefits of the current paradigm. Initial prototypes of the HFCS are being evaluated and this paper describes one such evaluation. In this evaluation we examined claims regarding improved situation awareness, appropriate workload, graceful degradation, and improved pilot acceptance. Twenty-four instrument-rated pilots were instructed to plan and fly four different flights in a fictitious airspace using a moderate fidelity desktop simulation. Three different flight control paradigms were tested: Manual control, Full Automation control, and a simplified version of the HFCS. Dependent variables included both subjective (questionnaire) and objective (SAGAT) measures of situation awareness, workload (NASA-TLX), secondary task performance, time to recognize automation failures, and pilot preference (questionnaire). The results showed a statistically significant advantage for the HFCS in a number of measures. Results that were not statistically significant still favored the HFCS. The results suggest that the HFCS does offer an attractive and viable alternative to the tactical components of today's FMS/autopilot control system. The paper describes further studies that are planned to continue to evaluate the HFCS.

  10. Comprehensive automation and monitoring of MV grids as the key element of improvement of energy supply reliability and continuity

    Directory of Open Access Journals (Sweden)

    Stanisław Kubacki

    2012-03-01

    Full Text Available The paper presents the issue of comprehensive automation and monitoring of medium voltage (MV) grids as a key element of the Smart Grid concept. The existing condition of MV grid control and monitoring is discussed, and the concept of a solution which will provide the possibility of remote automatic grid reconfiguration and ensure full grid observability from the dispatching system level is introduced. Automation of MV grid switching is discussed in detail, the aim being to isolate a faulty line section and, during the failure, supply electricity to the largest possible number of recipients. An example of such automation controls' operation is also presented. The paper's second part presents the key role of the quick fault location function and the possibility of remote MV grid reconfiguration in improving power supply reliability (SAIDI and SAIFI indices). It is also shown how increasing the number of points fitted with faulted circuit indicators, with the option of remote switch control from the dispatch system, may reduce SAIDI and SAIFI indices across ENERGA-OPERATOR SA divisions.
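
    For reference, SAIDI and SAIFI are the standard IEEE 1366 reliability indices; with N_i customers interrupted by incident i, restoration time r_i, and N_T customers served in total, they read:

    ```latex
    \mathrm{SAIFI} = \frac{\sum_i N_i}{N_T}, \qquad
    \mathrm{SAIDI} = \frac{\sum_i r_i \, N_i}{N_T}
    ```

    Remote fault location and reconfiguration shorten r_i (helping SAIDI), while automatic isolation of a faulted section reduces the N_i exposed to each incident (helping both indices).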

  11. New Prototype Safeguards Technology Offers Improved Confidence and Automation for Uranium Enrichment Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Brim, Cornelia P.

    2013-04-01

    An important requirement for the international safeguards community is the ability to determine the enrichment level of uranium in gas centrifuge enrichment plants and nuclear fuel fabrication facilities. This is essential to ensure that countries with nuclear nonproliferation commitments, such as States Party to the Nuclear Nonproliferation Treaty, are adhering to their obligations. However, current technologies to verify the uranium enrichment level in gas centrifuge enrichment plants or nuclear fuel fabrication facilities are technically challenging and resource-intensive. NNSA’s Office of Nonproliferation and International Security (NIS) supports the development, testing, and evaluation of future systems that will strengthen and sustain U.S. safeguards and security capabilities—in this case, by automating the monitoring of uranium enrichment in the entire inventory of a fuel fabrication facility. One such system is HEVA—hybrid enrichment verification array. This prototype was developed to provide an automated, nondestructive assay verification technology for uranium hexafluoride (UF6) cylinders at enrichment plants.

  12. Drivability Improvement Control for Vehicle Start-Up Applied to an Automated Manual Transmission

    Directory of Open Access Journals (Sweden)

    Danna Jiang

    2017-01-01

    Full Text Available Drivability is the key factor for the automated manual transmission. It includes fast response to the driver's demand and driving comfort. This paper deals with a control methodology applied to an automated manual transmission vehicle for drivability enhancement during the vehicle start-up phase. Based on a piecewise model of the powertrain, a multiple-model predictive controller (mMPC) is designed with the engine speed, clutch disc speed, and wheel speed as the measurable input variables and the engine torque reference and clutch friction torque reference as the controller's output variables. The model includes not only the clutch dynamics and the flexible shaft dynamics, but also the actuators' delay characteristics. Considering the driver's intention, a slipping speed trajectory is generated dynamically based on the accelerator pedal. The designed control strategy is verified on a complete powertrain and longitudinal vehicle dynamics model with different driver torque demands.

  13. A Semi-automated Approach to Improve the Efficiency of Medical Imaging Segmentation for Haptic Rendering.

    Science.gov (United States)

    Banerjee, Pat; Hu, Mengqi; Kannan, Rahul; Krishnaswamy, Srinivasan

    2017-08-01

    The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT). Image segmentation techniques are used to determine the anatomies of interest from the images. 3D models are obtained from segmentation, and their triangle count must be reduced for graphics and haptics rendering. This paper focuses on creating 3D models by automating the segmentation of CT images based on pixel contrast, to integrate the interface between Sensimmer and medical imaging devices, using a volumetric approach, a Hough transform method, and a manual centering method. Automating the process has reduced the segmentation time by 56.35% while maintaining the accuracy of the output within ±2 voxels.
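
    As a rough illustration of the Hough-transform step mentioned above, the sketch below windows a CT slice to 8-bit contrast and runs OpenCV's circle Hough transform. The Hounsfield window and radius parameters are invented for illustration, not the paper's values.

    ```python
    import cv2
    import numpy as np

    def find_circles(slice_hu, hu_window=(0, 400)):
        """slice_hu: 2-D CT slice in Hounsfield units. Returns (x, y, r) rows."""
        lo, hi = hu_window
        img = np.clip((slice_hu - lo) / (hi - lo), 0, 1)      # contrast window
        img8 = (img * 255).astype(np.uint8)
        img8 = cv2.medianBlur(img8, 5)                        # suppress noise
        circles = cv2.HoughCircles(img8, cv2.HOUGH_GRADIENT, dp=1,
                                   minDist=20, param1=100, param2=30,
                                   minRadius=5, maxRadius=60)
        return [] if circles is None else circles[0]
    ```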

  14. TRUNCATULIX--a data warehouse for the legume community.

    Science.gov (United States)

    Henckel, Kolja; Runte, Kai J; Bekel, Thomas; Dondrup, Michael; Jakobi, Tobias; Küster, Helge; Goesmann, Alexander

    2009-02-11

    Databases for sequence, annotation, or microarray experiment data are extremely beneficial to the research community, as they centrally gather information from experiments performed by different scientists. However, data from different sources develop their full capacities only when combined. The idea of a data warehouse directly addresses this problem and solves it by integrating all required data into one single database; hence, there are already many data warehouses available in genetics. For the model legume Medicago truncatula, there is currently no such single data warehouse that integrates all freely available gene sequences, the corresponding gene expression data, and annotation information. Thus, we created the data warehouse TRUNCATULIX, an integrative database of Medicago truncatula sequence and expression data. The TRUNCATULIX data warehouse integrates five public databases for gene sequences and gene annotations, as well as a database for microarray expression data covering raw data, normalized datasets, and complete expression profiling experiments. It can be accessed via an AJAX-based web interface using a standard web browser. For the first time, users can now quickly search for specific genes and gene expression data in a huge database based on high-quality annotations. The results can be exported as Excel, HTML, or csv files for further usage. The integration of sequence, annotation, and gene expression data from several Medicago truncatula databases in TRUNCATULIX provides the legume community with access to data and data mining capability not previously available. TRUNCATULIX is freely available at http://www.cebitec.uni-bielefeld.de/truncatulix/.

  15. Optimized Database of Higher Education Management Using Data Warehouse

    Directory of Open Access Journals (Sweden)

    Spits Warnars

    2010-04-01

    Full Text Available The emergence of new higher education institutions has created competition in the higher education market, and a data warehouse can be used as an effective technology tool for increasing competitiveness in this market. A data warehouse produces reliable reports for the institution's high-level management in a short time, for faster and better decision making, not only on increasing the number of admitted students, but also on the possibility of finding extraordinary, unconventional funds for the institution. The efficiency comparison was based on the length and number of processed records, total processed bytes, number of processed tables, time to run a query, and records produced on the OLTP database and the data warehouse. Efficiency percentages were measured by the formula for percentage increase, and the average efficiency percentage of 461,801.04% shows that using the data warehouse is more powerful and efficient than using the OLTP database. The data warehouse was modeled as a hypercube derived from the limited set of high-demand reports usually used by high-level management. In every fact and dimension table, fields are inserted to support the loading constructive merge, where the ETL (Extraction, Transformation and Loading) process is run based on the old and new files.
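
    The report-driven hypercube idea can be illustrated with a toy star schema: one fact table keyed by exactly the dimensions the management reports slice on. The schema and data below are invented for illustration, using sqlite3 from the Python standard library:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE dim_term (term_id INTEGER PRIMARY KEY, year INT, semester INT);
    CREATE TABLE dim_program (program_id INTEGER PRIMARY KEY, faculty TEXT, name TEXT);
    CREATE TABLE fact_admission (
        term_id INT REFERENCES dim_term, program_id INT REFERENCES dim_program,
        students_admitted INT, tuition_revenue REAL);
    """)
    con.executemany("INSERT INTO dim_term VALUES (?,?,?)",
                    [(1, 2009, 1), (2, 2009, 2)])
    con.executemany("INSERT INTO dim_program VALUES (?,?,?)",
                    [(1, "Engineering", "Informatics"), (2, "Economics", "Management")])
    con.executemany("INSERT INTO fact_admission VALUES (?,?,?,?)",
                    [(1, 1, 120, 9.6e8), (1, 2, 80, 5.1e8), (2, 1, 95, 7.7e8)])
    # a typical high-demand report: admissions per faculty per year
    for row in con.execute("""
        SELECT t.year, p.faculty, SUM(f.students_admitted)
        FROM fact_admission f
        JOIN dim_term t USING (term_id) JOIN dim_program p USING (program_id)
        GROUP BY t.year, p.faculty"""):
        print(row)
    ```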

  16. Improvement of an automated protein crystal exchange system PAM for high-throughput data collection

    International Nuclear Information System (INIS)

    Hiraki, Masahiko; Yamada, Yusuke; Chavas, Leonard M. G.; Wakatsuki, Soichi; Matsugaki, Naohiro

    2013-01-01

    A special liquid-nitrogen Dewar with double capacity for the sample-exchange robot has been created at AR-NE3A at the Photon Factory, allowing continuous fully automated data collection. In this work, this new system is described and the stability of its calibration is discussed. Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. An investigation found that four or more cassettes were used at AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. In order to check the calibration with the new Dewar and the cassette stand, calibration experiments were repeatedly performed. Compared with the current system, the parameters of the novel system are shown to be stable

  17. Improvement of an automated protein crystal exchange system PAM for high-throughput data collection

    Energy Technology Data Exchange (ETDEWEB)

    Hiraki, Masahiko, E-mail: masahiko.hiraki@kek.jp; Yamada, Yusuke; Chavas, Leonard M. G. [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Wakatsuki, Soichi [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS 69, Menlo Park, CA 94025-7015 (United States); Stanford University, Beckman Center B105, Stanford, CA 94305-5126 (United States); Matsugaki, Naohiro [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2013-11-01

    A special liquid-nitrogen Dewar with double capacity for the sample-exchange robot has been created at AR-NE3A at the Photon Factory, allowing continuous fully automated data collection. In this work, this new system is described and the stability of its calibration is discussed. Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. An investigation found that four or more cassettes were used at AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. In order to check the calibration with the new Dewar and the cassette stand, calibration experiments were repeatedly performed. Compared with the current system, the parameters of the novel system are shown to be stable.

  18. Development and Testing of an Automated 4-Day Text Messaging Guidance as an Aid for Improving Colonoscopy Preparation.

    Science.gov (United States)

    Walter, Benjamin Michael; Klare, Peter; Neu, Bruno; Schmid, Roland M; von Delius, Stefan

    2016-06-21

    In gastroenterology, sufficient colon cleansing improves the adenoma detection rate and prevents the need for preterm repeat colonoscopies due to inadequate preparation. It has been shown that patient education is of major importance for the improvement of colon cleansing. The objective of this study was to assess the function of automated text messaging (short message service, SMS) support for colonoscopy preparation, starting 4 days before the colonoscopy appointment. After a preevaluation to assess mobile phone usage in the patient population and the relevance of this approach, a Web-based, automated SMS text messaging system was developed, following which a single-center feasibility study at a tertiary care center was performed. Patients scheduled for outpatient colonoscopy were invited to participate. Patients enrolled in the study group received automated information about dietary recommendations and bowel cleansing during colonoscopy preparation. Data from outpatient colonoscopies with the regular preparation procedure were used for pair matching and served as control. The primary end point was the feasibility of SMS text messaging support in colonoscopy preparation, assessed as stable and satisfactory function of the system. Secondary end points were the quality of bowel preparation according to the Boston Bowel Preparation Scale (BBPS) and patient satisfaction with the SMS-provided information, assessed by a questionnaire. Web-based SMS text messaging-supported colonoscopy preparation was successful and feasible in 19 of 20 patients. Mean (standard error of the mean, SEM) total BBPS score was slightly higher in the SMS group than in the control group (7.3, SEM 0.3 vs 6.4, SEM 0.2) and for each colonic region (left, transverse, and right colon). Patient satisfaction regarding the SMS-provided information was high. Using SMS for colonoscopy preparation with 4 days' guidance, including dietary recommendations, is a new approach to improve colonoscopy preparation. Quality of colonoscopy

  19. Open Location Management in Automated Warehousing Systems

    OpenAIRE

    Yu, Yugang; Koster, René

    2009-01-01

    A warehouse needs to have sufficient open locations to be able to store incoming shipments of various sizes. In combination with ongoing load retrievals, open locations gradually spread over the storage area. Unfavorable positions of open locations negatively impact the average load retrieval times. This paper presents a new method to manage these open locations such that the average system travel time for processing a block of storage and retrieval jobs in an automated warehousing

  20. The Data Warehouse in Service Oriented Architectures and Network Centric Warfare

    National Research Council Canada - National Science Library

    Lenahan, Jack

    2005-01-01

    ... at all policy and command levels to support superior decision making? Analyzing the anticipated massive amount of GIG data will almost certainly require data warehouses and federated data warehouses...

  1. Hexicon 2: automated processing of hydrogen-deuterium exchange mass spectrometry data with improved deuteration distribution estimation.

    Science.gov (United States)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L; Hamprecht, Fred A; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.

  2. Hexicon 2: Automated Processing of Hydrogen-Deuterium Exchange Mass Spectrometry Data with Improved Deuteration Distribution Estimation

    Science.gov (United States)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L.; Hamprecht, Fred A.; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.

  3. Analisis Dan Perancangan Data Warehouse Pada PT Pelita Tatamas Jaya

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2010-12-01

    Full Text Available The purpose of this research is to assist in providing information to support decision-making processes in sales, purchasing and inventory control at PT Tatamas Pelita Jaya. With the support of a data warehouse, business leaders can be helped to make decisions more quickly and precisely. The research methodology includes analysis of the current systems, library research, and the design of a data warehouse using a star schema. The result of this research is the availability of a data warehouse that can generate information quickly and precisely, thus helping the company in making decisions. The conclusion of this research is that the data warehouse application can serve as an aid for the related parties at PT Tatamas Pelita Jaya in decision making.

  4. Development of a public health reporting data warehouse: lessons learned.

    Science.gov (United States)

    Rizi, Seyed Ali Mussavi; Roudsari, Abdul

    2013-01-01

    Data warehouse projects are perceived to be risky and prone to failure due to many organizational and technical challenges. However, the often iterative and lengthy process of implementing a data warehouse at an enterprise level provides an opportunity for formative evaluation of these solutions. This paper describes lessons learned from the successful development and implementation of the first phase of an enterprise data warehouse to support public health surveillance at the British Columbia Centre for Disease Control. An iterative and prototyping approach to development, overcoming the technical challenges of extraction and integration of data from large-scale clinical and ancillary systems, a novel approach to record linkage, flexible and reusable modeling of clinical data, and securing senior management support at the right time were the main factors that contributed to the success of the data warehousing project.

  5. Aspects of Data Warehouse Technologies for Complex Web Data

    DEFF Research Database (Denmark)

    Thomsen, Christian

    This thesis is about aspects of specification and development of data warehouse technologies for complex web data. Today, large amounts of data exist in different web resources and in different formats. But it is often hard to analyze and query the often big and complex data or data about the data (i.e., metadata). It is therefore interesting to apply Data Warehouse (DW) technology to the data. But to apply DW technology to complex web data is not straightforward and the DW community faces new and exciting challenges. This thesis considers some of these challenges. The work leading to this thesis has primarily been done in relation to the project European Internet Accessibility Observatory (EIAO) where a data warehouse for accessibility data (roughly, data about how usable web resources are for disabled users) has been specified and developed. But the results of the thesis can also

  6. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure, and the opportunity to take advantage of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under the current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A simulated annealing algorithm is proposed to tackle the proposed model. The numerical solution showed that the proposed model and solution method are practical.
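
    A generic simulated-annealing loop for this kind of redesign decision is easy to sketch: the state is one keep/close bit per warehouse, and the cost function (the MILP objective with capacity feasibility folded in) is supplied by the user. This is an illustrative baseline under those assumptions, not the paper's algorithm.

    ```python
    import math
    import random

    def anneal(n_sites, cost, t0=1.0, t_min=1e-3, alpha=0.95, iters=200):
        """cost(keep) -> total network cost for a tuple of 0/1 keep decisions;
        return float('inf') for capacity-infeasible configurations."""
        cur = tuple(random.randint(0, 1) for _ in range(n_sites))
        cur_cost = cost(cur)
        best, best_cost = cur, cur_cost
        t = t0
        while t > t_min:
            for _ in range(iters):
                i = random.randrange(n_sites)              # flip one keep/close bit
                cand = cur[:i] + (1 - cur[i],) + cur[i + 1:]
                cand_cost = cost(cand)
                d = cand_cost - cur_cost
                # accept improvements always, uphill moves with Boltzmann probability
                if d < 0 or random.random() < math.exp(-d / t):
                    cur, cur_cost = cand, cand_cost
                    if cur_cost < best_cost:
                        best, best_cost = cur, cur_cost
            t *= alpha                                     # geometric cooling schedule
        return best, best_cost
    ```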

  7. Robotic Automation Process - The next major revolution in terms of back office operations improvement

    Directory of Open Access Journals (Sweden)

    Anagnoste Sorin

    2017-07-01

    Full Text Available Forced to provide consistent results to shareholders, organizations have turned to Robotic Process Automation (RPA) in order to tackle the following typical challenges they face: (1) cost reduction, (2) quality increase and (3) faster processes. RPA is now considered the next big thing for the Shared Services Centers (SSC) and Business Process Outsourcing (BPO) providers around the world, and especially in Central and Eastern Europe. In SSCs and BPOs, the activities with the highest potential for automation are in finance, supply chain and human resource departments. This means that the problems these businesses are facing are mostly related to high data entry volumes, high error rates, significant rework, numerous manual processes, multiple non-integrated legacy systems and high turnover due to repetitive/low value added activities. One advantage of RPA is that it can be trained by the users to undertake structured, repeatable, computer-based tasks, interacting at the same time with multiple systems while performing complex decisions based on algorithms. By doing this, the robot can identify the exceptions for manual processing, remove idle times and keep logs of actions performed. Another advantage is that automated solutions can work 24/7, can be implemented fast, work with the existing architecture, cut data entry costs by up to 70% and perform at 30% of the cost of a full-time employee, thus providing a quick and tangible return to organizations. For Romania, a key destination for SSCs and BPOs, this technology will make them more competitive, but it will also lead to the creation of a series of high-paid jobs while eliminating low value added ones. The paper also analyzes the most important vendors of RPA solutions on the market and provides specific case studies from different industries, thus helping future leaders and organizations to take better decisions.

  8. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    Science.gov (United States)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and automate such assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that allows generating instrument calibration reports automatically, monitoring their proper configuration, processing measurement results and assessing instrument validity. The use of such software reduces the man-hours spent on finalization of calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  9. Improving Automated Endmember Identification for Linear Unmixing of HyspIRI Spectral Data.

    Science.gov (United States)

    Gader, P.

    2016-12-01

    The size of data sets produced by imaging spectrometers is increasing rapidly, and there is already a processing bottleneck. Part of the reason for this bottleneck is the need for expert input using interactive software tools. This process can be very time consuming and laborious but is currently crucial to ensuring the quality of the analysis. Automated algorithms can mitigate this problem. Although it is unlikely that processing systems can become completely automated, there is an urgent need to increase the level of automation. Spectral unmixing is a key component of processing HyspIRI data. Algorithms such as MESMA have been demonstrated to achieve good results but require careful, expert construction of endmember libraries. Unfortunately, many endmembers found by automated endmember-finding algorithms are deemed unsuitable by experts because they are not physically reasonable. At the same time, endmembers that are not physically reasonable can achieve very low errors between the linear mixing model with those endmembers and the original data, so this error is not a reasonable way to resolve the problem of "non-physical" endmembers. There are many potential approaches for resolving these issues, including using Bayesian priors, but very little attention has been given to this problem. The study reported here considers a modification of the Sparsity Promoting Iterated Constrained Endmember (SPICE) algorithm. SPICE finds endmembers and abundances and estimates the number of endmembers. The SPICE algorithm seeks to minimize a quadratic objective function with respect to endmembers E and fractions P. The modified SPICE algorithm, which we refer to as SPICED, is obtained by adding the term D to the objective function. The term D pressures the algorithm to minimize the sum of the squared differences between each endmember and a weighted sum of the data. By appropriately modifying it, the endmembers are pushed towards a subset of the data with the potential for
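
    For context, the linear-mixing step such algorithms build on can be sketched with nonnegative least squares: given a fixed endmember matrix, per-pixel abundances are recovered under nonnegativity, with sum-to-one encouraged by an appended constraint row. This is a generic illustration of linear unmixing, not the SPICED update.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix(E, pixels, rho=10.0):
        """E: (bands, m) endmember spectra; pixels: (n, bands) observed spectra.
        Returns (n, m) abundances >= 0, encouraged to sum to one via the
        soft-constraint row weighted by rho."""
        bands, m = E.shape
        A = np.vstack([E, rho * np.ones((1, m))])   # append sum-to-one row
        out = np.empty((len(pixels), m))
        for i, x in enumerate(pixels):
            b = np.append(x, rho)                   # matching target for that row
            out[i], _ = nnls(A, b)
        return out
    ```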

  10. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art review of multidimensional modeling design.

  11. Mastering data warehouse design relational and dimensional techniques

    CERN Document Server

    Imhoff, Claudia; Geiger, Jonathan G

    2003-01-01

    A cutting-edge response to Ralph Kimball's challenge to the data warehouse community that answers some tough questions about the effectiveness of the relational approach to data warehousing. Written by one of the best-known exponents of the Bill Inmon approach to data warehousing. Addresses head-on the tough issues raised by Kimball and explains how to choose the best modeling technique for solving common data warehouse design problems. Weighs the pros and cons of relational vs. dimensional modeling techniques. Focuses on tough modeling problems, including creating and maintaining keys and modeling c

  12. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2015-01-01

    Warehouse-sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers to provide the infrastructure for the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput, the limiting factor of these warehouse-scale data centers is power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within their operating ranges. The need to keep the temperature low within

  13. Securing Document Warehouses against Brute Force Query Attacks

    Directory of Open Access Journals (Sweden)

    Sergey Vladimirovich Zapechnikov

    2017-04-01

    Full Text Available The paper presents a data management scheme and protocols for securing a document collection against adversarial users who try to abuse their access rights to find out the full content of confidential documents. The configuration of the secure document retrieval system is described, and a suite of protocols among the clients, warehouse server, audit server and database management server is specified. The scheme makes it infeasible for clients to establish a correspondence between the documents relevant to different search queries until a moderator grants access to these documents. The proposed solution allows ensuring a higher security level for document warehouses.

  14. 7 CFR 1421.106 - Warehouse-stored marketing assistance loan collateral.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Warehouse-stored marketing assistance loan collateral... Marketing Assistance Loans § 1421.106 Warehouse-stored marketing assistance loan collateral. (a) A commodity may be pledged as collateral for a warehouse-stored marketing assistance loan in the quantity...

  15. Expanding Post-Harvest Finance Through Warehouse Receipts and Related Instruments

    OpenAIRE

    Baldwin, Marisa; Bryla, Erin; Langenbucher, Anja

    2006-01-01

    Warehouse receipt financing and similar types of collateralized lending provide an alternative to traditional lending requirements of banks and other financiers and could provide opportunities to expand this lending in emerging economies for agricultural trade. The main contents include: what is warehouse receipt financing; what is the value of warehouse receipt financing; other collater...

  16. 27 CFR 28.28 - Withdrawal of wine and distilled spirits from customs bonded warehouses.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Withdrawal of wine and... Miscellaneous Provisions Customs Bonded Warehouses § 28.28 Withdrawal of wine and distilled spirits from customs bonded warehouses. Wine and bottled distilled spirits entered into customs bonded warehouses as provided...

  17. Improving transcriptome construction in non-model organisms: integrating manual and automated gene definition in Emiliania huxleyi.

    Science.gov (United States)

    Feldmesser, Ester; Rosenwasser, Shilo; Vardi, Assaf; Ben-Dor, Shifra

    2014-02-22

    The advent of Next Generation Sequencing technologies and corresponding bioinformatics tools allows the definition of transcriptomes in non-model organisms. Non-model organisms are of great ecological and biotechnological significance, and consequently the understanding of their unique metabolic pathways is essential. Several methods that integrate de novo assembly with genome-based assembly have been proposed. Yet, there are many open challenges in defining genes, particularly where genomes are unavailable or incomplete. Despite the large number of transcriptome assemblies that have been performed, quality control of the transcript building process, particularly on the protein level, is rarely if ever performed. To test and improve the quality of the automated transcriptome reconstruction, we used manually defined and curated genes, several of them experimentally validated. Several approaches to transcript construction were utilized, based on the available data: a draft genome, high-quality RNAseq reads, and ESTs. In order to maximize the contribution of the various data, we integrated methods including de novo and genome-based assembly, as well as EST clustering. After each step a set of manually curated genes was used for quality assessment of the transcripts. The interplay between the automated pipeline and the quality control indicated which additional processes were required to improve the transcriptome reconstruction. We discovered that E. huxleyi has a very high percentage of non-canonical splice junctions, and relatively high rates of intron retention, which caused unique issues with the currently available tools. While individual tools missed genes and artificially joined overlapping transcripts, combining the results of several tools improved the completeness and quality considerably. The final collection, created from the integration of several quality control and improvement rounds, was compared to the manually defined set both on the DNA and

  18. Data Warehouse Emissieregistratie. A new tool to sustainability; Data Warehouse Emissieregistratie. Een nieuw instrument op weg naar duurzaamheid

    Energy Technology Data Exchange (ETDEWEB)

    Van Grootveld, G. [VROM-Inspectie, Den Haag (Netherlands)]; Op den Kamp, A. [OpdenKamp Adviesgroep, Den Haag (Netherlands)]

    2002-12-01

    An overview is given of the possibilities to use and search the title database, which contains data on emissions of pollution sources in different sectors in the Netherlands. [Translated from Dutch] This publication illustrates the power of the Data Warehouse by means of seven examples in chapters 3 through 9, each time also giving an outlook on sustainable development. Chapter 10 treats two cases with a short manual. Chapter 1 contains background information on the environmental policy chain and the place monitoring takes in it. Chapter 2 briefly describes the three dimensions of the Data Warehouse and the possibilities the Data Warehouse offers (www.emissieregistratie.nl)

  19. Improving the Operations of the Earth Observing One Mission via Automated Mission Planning

    Science.gov (United States)

    Chien, Steve A.; Tran, Daniel; Rabideau, Gregg; Schaffer, Steve; Mandl, Daniel; Frye, Stuart

    2010-01-01

    We describe the modeling and reasoning about operations constraints in an automated mission planning system for an earth observing satellite - EO-1. We first discuss the large number of elements that can be naturally represented in an expressive planning and scheduling framework. We then describe a number of constraints that challenge the current state of the art in automated planning systems and discuss how we modeled these constraints as well as discuss tradeoffs in representation versus efficiency. Finally we describe the challenges in efficiently generating operations plans for this mission. These discussions involve lessons learned from an operations model that has been in use since Fall 2004 (called R4) as well as a newer more accurate operations model operational since June 2009 (called R5). We present analysis of the R5 software documenting a significant (greater than 50%) increase in the number of weekly observations scheduled by the EO-1 mission. We also show that the R5 mission planning system produces schedules within 15% of an upper bound on optimal schedules. This operational enhancement has created value of millions of dollars US over the projected remaining lifetime of the EO-1 mission.

  20. Automated Detection of Malarial Retinopathy in Digital Fundus Images for Improved Diagnosis in Malawian Children with Clinically Defined Cerebral Malaria

    Science.gov (United States)

    Joshi, Vinayak; Agurto, Carla; Barriga, Simon; Nemeth, Sheila; Soliz, Peter; MacCormick, Ian J.; Lewallen, Susan; Taylor, Terrie E.; Harding, Simon P.

    2017-02-01

    Cerebral malaria (CM), a complication of malaria infection, is the cause of the majority of malaria-associated deaths in African children. The standard clinical case definition for CM misclassifies ~25% of patients, but when malarial retinopathy (MR) is added to the clinical case definition, the specificity improves from 61% to 95%. Ocular fundoscopy requires expensive equipment and technical expertise not often available in malaria endemic settings, so we developed an automated software system to analyze retinal color images for MR lesions: retinal whitening, vessel discoloration, and white-centered hemorrhages. The individual lesion detection algorithms were combined using a partial least square classifier to determine the presence or absence of MR. We used a retrospective retinal image dataset of 86 pediatric patients with clinically defined CM (70 with MR and 16 without) to evaluate the algorithm performance. Our goal was to reduce the false positive rate of CM diagnosis, and so the algorithms were tuned at high specificity. This yielded sensitivity/specificity of 95%/100% for the detection of MR overall, and 65%/94% for retinal whitening, 62%/100% for vessel discoloration, and 73%/96% for hemorrhages. This automated system for detecting MR using retinal color images has the potential to improve the accuracy of CM diagnosis.

  1. Data Warehouse for support to the electric energy commercialization; Data Warehouse para apoio a comercializacao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Lanzotti, Carla R.; Correia, Paulo B. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Faculdade de Engenharia Mecanica]. E-mails: clanzotti@yahoo.com; pcorreia@fem.unicamp.br

    2006-07-01

    This paper specifies a database, implemented as a data warehouse, containing energy market, electric system, and economic information, allowing the visualization and analysis of the data through tables and dynamic charts. This data warehouse corresponds to the 'Information Base' module of a platform supporting electric power commercialization. The platform is a computer program intended to help agents interested in commercializing energy and is formed by three modules: Information Base, Contracting Strategies, and Contracting Process. It is expected that the use of this database, together with the platform, establishes favorable conditions for agents interested in electric energy commercialization.

  2. Geena 2, improved automated analysis of MALDI/TOF mass spectra.

    Science.gov (United States)

    Romano, Paolo; Profumo, Aldo; Rocco, Mattia; Mangerini, Rosa; Ferri, Fabio; Facchiano, Angelo

    2016-03-02

    Mass spectrometry (MS) is producing high volumes of data supporting oncological sciences, especially for translational research. Most of the related elaborations can be carried out by combining existing tools at different levels, but little is currently available for the automation of the fundamental steps. For the analysis of MALDI/TOF spectra, a number of pre-processing steps are required, including joining of isotopic abundances for a given molecular species, normalization of signals against an internal standard, background noise removal, averaging multiple spectra from the same sample, and aligning spectra from different samples. In this paper, we present Geena 2, a public software tool for the automated execution of these pre-processing steps for MALDI/TOF spectra. Geena 2 has been developed in a Linux-Apache-MySQL-PHP web development environment, with scripts in PHP and Perl. Input and output are managed as simple formats that can be consumed by any database system and spreadsheet software. Input data may also be stored in a MySQL database. Processing methods are based on original heuristic algorithms which are introduced in the paper. Three simple and intuitive web interfaces are available: the Standard Search Interface, which allows a complete control over all parameters, the Bright Search Interface, which leaves to the user the possibility to tune parameters for alignment of spectra, and the Quick Search Interface, which limits the number of parameters to a minimum by using default values for the majority of parameters. Geena 2 has been utilized, in conjunction with a statistical analysis tool, in three published experimental works: a proteomic study on the effects of long-term cryopreservation on the low molecular weight fraction of serum proteome, and two retrospective serum proteomic studies, one on the risk of developing breast cancer in patients affected by gross cystic disease of the breast (GCDB) and the other for the identification of a predictor of
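
    For illustration, here is a minimal Python sketch (not Geena 2's actual code, whose heuristic algorithms are described in the paper) of two of the listed pre-processing steps: normalization against an internal-standard peak and averaging of replicate spectra after resampling onto a common m/z grid. The function names, window width, and toy spectra are assumptions.

    ```python
    import numpy as np

    def normalize_to_internal_standard(mz, intensity, std_mz, window=0.5):
        """Scale a spectrum so its internal-standard peak has unit intensity."""
        peak = intensity[np.abs(mz - std_mz) <= window].max()
        return intensity / peak

    def average_replicates(spectra, grid):
        """Resample replicate spectra onto a common m/z grid, then average."""
        resampled = [np.interp(grid, mz, inten) for mz, inten in spectra]
        return np.mean(resampled, axis=0)

    # toy usage: two replicate spectra of the same sample
    grid = np.linspace(1000.0, 1010.0, 11)
    s1 = (np.array([1000.0, 1005.0, 1010.0]), np.array([5.0, 50.0, 10.0]))
    s2 = (np.array([1000.0, 1005.0, 1010.0]), np.array([7.0, 48.0, 12.0]))
    avg = average_replicates([s1, s2], grid)
    norm = normalize_to_internal_standard(grid, avg, std_mz=1005.0)
    ```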

  3. A Case Study Improvement of a Testing Process by Combining Lean Management, Industrial Engineering and Automation Methods

    Directory of Open Access Journals (Sweden)

    Simon Withers

    2013-07-01

    Full Text Available Increasingly competitive market environments have forced not only large manufacturing, but also small- and medium-size enterprises (SME) to look for means to improve their operations in order to increase competitive strength. This paper presents an adaptation and adoption by a UK SME engineering service organisation of lean management, industrial engineering, and automation methods developed within larger organisations. This SME sought to improve the overall performance of one of its core testing processes. An exploratory analysis, based on the lean management concept of “value added” and the work measurement technique “time study”, was developed and carried out in order to understand the current performance of a testing process for gas turbine fuel flow dividers. A design for the automation of some operations of the testing process was followed as an approach to reduce non-value added activities and improve the overall efficiency of the testing process. The overall testing time was reduced from 12.41 to 7.93 hours (36.09 percent) while the man hours and non-value added time were also reduced from 23.91 to 12.94 hours (45.87 percent) and from 11.08 to 6.69 hours (39.67 percent) respectively. This resulted in an increase in process efficiency in terms of man hours from 51.91 to 61.28 percent. The contribution of this paper resides in presenting a case study that can be used as a guiding reference for managers and engineers to undertake improvement projects, in their organisations, similar to the one presented in this paper.
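
    The quoted figures are internally consistent and easy to verify; the snippet below reproduces them to rounding, assuming "process efficiency in terms of man hours" means testing-process hours divided by man hours (which matches the reported 51.91 and 61.28 percent).

    ```python
    def pct_drop(before, after):
        return 100.0 * (before - after) / before

    print(f"testing time:    {pct_drop(12.41, 7.93):.2f}% reduction")   # ~36.09%
    print(f"man hours:       {pct_drop(23.91, 12.94):.2f}% reduction")  # ~45.88%
    print(f"non-value added: {pct_drop(11.08, 6.69):.2f}% reduction")   # ~39.62%
    # efficiency = testing-process hours / man hours
    print(f"efficiency: {100 * 12.41 / 23.91:.2f}% -> {100 * 7.93 / 12.94:.2f}%")
    ```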

  4. An improved automated method for the measurement of red cell 2,3-diphosphoglycerate.

    Science.gov (United States)

    Purcell, Y; Brozović, B

    1976-01-01

    A modified automated colorimetric micromethod for the determination of red cell 2,3-diphosphoglycerate (2,3-DPG) adapted from that of Grisolia et al (1969) is described. In the modified method, ethylenediaminetetra-acetic acid (EDTA) is not used and consequently the concentrations of several reagents are changed. During the development of the method it was found that the presence of EDTA, either in the blood or in reagents, consistently reduced the measured value of 2,3-DPG by 15%. This effect of EDTA, not previously recognized, is independent of the EDTA concentration within the range of 5 to 50 mmol/l and is at present unexplained. In normal subjects (41 men and 30 women) the mean red cell 2,3-DPG was 14.5 μmol/g haemoglobin (range 12.1-18.1 μmol/g haemoglobin). There was no significant difference in 2,3-DPG concentrations between male and female subjects. PMID:827552

  5. Foreign object detection and removal to improve automated analysis of chest radiographs

    International Nuclear Information System (INIS)

    Hogeweg, Laurens; Sánchez, Clara I.; Melendez, Jaime; Maduskar, Pragnya; Ginneken, Bram van; Story, Alistair; Hayward, Andrew

    2013-01-01

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassiere clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel to belong to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields, and an Az value of 0.949 is achieved. Free-response operating characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to those with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
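
    The detect-then-inpaint pipeline can be sketched in a few lines; this uses scikit-learn's kNN and OpenCV's generic inpainting as stand-ins for the paper's pixel classifier and texture inpainting, and a single intensity feature where the paper uses richer pixel features plus grouping and post-processing. All parameters are illustrative assumptions.

    ```python
    import numpy as np
    import cv2
    from sklearn.neighbors import KNeighborsClassifier

    def remove_foreign_objects(image, train_feats, train_labels, thresh=0.5):
        """Detect foreign-object pixels with kNN, then inpaint them (sketch).
        image: 8-bit grayscale chest radiograph as a 2D numpy array."""
        knn = KNeighborsClassifier(n_neighbors=15)
        knn.fit(train_feats, train_labels)               # labels: 1 = foreign object
        feats = image.reshape(-1, 1).astype(np.float32)  # toy feature: intensity only
        prob = knn.predict_proba(feats)[:, 1].reshape(image.shape)
        mask = (prob > thresh).astype(np.uint8)          # segmentation by thresholding
        # generic inpainting as a stand-in for the paper's texture inpainting
        return cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    ```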

  6. A Relevance-Extended Multi-dimensional Model for a Data Warehouse Contextualized with Documents

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Pedersen, Torben Bach; Berlanga, Rafael

    2005-01-01

    Current data warehouse and OLAP technologies can be applied to analyze the structured data that companies store in their databases. The circumstances that describe the context associated with these data can be found in other internal and external sources of documents. In this paper we propose...... to combine the traditional corporate data warehouse with a document warehouse, resulting in a contextualized warehouse. Thus, contextualized warehouses keep a historical record of the facts and their contexts as described by the documents. In this framework, the user selects an analysis context which...

  7. 19 CFR 19.13 - Requirements for establishment of warehouse.

    Science.gov (United States)

    2010-04-01

    ...; DEPARTMENT OF THE TREASURY CUSTOMS WAREHOUSES, CONTAINER STATIONS AND CONTROL OF MERCHANDISE THEREIN... secured area separated from the remainder of the premises to be used exclusively for the storage of imported merchandise, domestic spirits, and merchandise subject to internal-revenue tax transferred into...

  8. Risk control for staff planning in e-commerce warehouses

    NARCIS (Netherlands)

    Wruck, Susanne; Vis, Iris F A; Boter, Jaap

    2016-01-01

    Internet sale supply chains often need to fulfil quickly small orders for many customers. The resulting high demand and planning uncertainties pose new challenges for e-commerce warehouse operations. Here, we develop a decision support tool to assist managers in selecting appropriate risk policies

  9. A Framework for Designing a Healthcare Outcome Data Warehouse

    Science.gov (United States)

    Parmanto, Bambang; Scotch, Matthew; Ahmad, Sjarif

    2005-01-01

    Many healthcare processes involve a series of patient visits or a series of outcomes. The modeling of outcomes associated with these types of healthcare processes is different from and not as well understood as the modeling of standard industry environments. For this reason, the typical multidimensional data warehouse designs that are frequently seen in other industries are often not a good match for data obtained from healthcare processes. Dimensional modeling is a data warehouse design technique that uses a data structure similar to the easily understood entity-relationship (ER) model but is sophisticated in that it supports high-performance data access. In the context of rehabilitation services, we implemented a slight variation of the dimensional modeling technique to make a data warehouse more appropriate for healthcare. One of the key aspects of designing a healthcare data warehouse is finding the right grain (scope) for different levels of analysis. We propose three levels of grain that enable the analysis of healthcare outcomes from highly summarized reports on episodes of care to fine-grained studies of progress from one treatment visit to the next. These grains allow the database to support multiple levels of analysis, which is imperative for healthcare decision making. PMID:18066371

  10. How Can My State Benefit from an Educational Data Warehouse?

    Science.gov (United States)

    Bergner, Terry; Smith, Nancy J.

    2007-01-01

    Imagine if, at the start of the school year, a teacher could have detailed information about the academic history of every student in her or his classroom. This is possible if the teacher can log on to a Web site that provides access to an educational data warehouse. The teacher would see not only several years of state assessment results, but…

  11. Developing and Marketing a Client/Server-Based Data Warehouse.

    Science.gov (United States)

    Singleton, Michele; And Others

    1993-01-01

    To provide better access to information, the University of Arizona information technology center has designed a data warehouse accessible from the desktop computer. A team approach has proved successful in introducing and demonstrating a prototype to the campus community. (Author/MSE)

  12. Data-driven warehouse optimization : Deploying skills of order pickers

    NARCIS (Netherlands)

    M. Matusiak (Marek); M.B.M. de Koster (René); J. Saarinen (Jari)

    2015-01-01

    textabstractBatching orders and routing order pickers is a commonly studied problem in many picker-to-parts warehouses. The impact of individual differences in picking skills on performance has received little attention. In this paper, we show that taking into account differences in the skills of

  13. A novel approach for intelligent distribution of data warehouses

    Directory of Open Access Journals (Sweden)

    Abhay Kumar Agarwal

    2016-07-01

    Full Text Available With the continuous growth in the amount of data, data storage systems have come a long way from flat file systems to RDBMS, Data Warehousing (DW) and Distributed Data Warehousing systems. This paper proposes a new distributed data warehouse model. The model is built on a novel approach for the intelligent distribution of the data warehouse; overall, the model is named Intelligent and Distributed Data Warehouse (IDDW). The proposed model has N levels and is based on a top-down hierarchical design approach to building a distributed data warehouse. The building process of IDDW starts with the identification of the various locations where a DW may be built. Initially, a single location is considered at the top-most level of IDDW, where a DW is built; thereafter, a DW at any other location of any level may be built. A method to transfer the relevant data from any upper-level DW to the corresponding lower-level DW is also presented in the paper. The paper also presents IDDW modeling, its architecture based on that modeling, and the internal organization of IDDW via which all operations within IDDW are performed.

  14. Event-Entity-Relationship Modeling in Data Warehouse Environments

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    We use the event-entity-relationship model (EVER) to illustrate the use of entity-based modeling languages for conceptual schema design in data warehouse environments. EVER is a general-purpose information modeling language that supports the specification of both general schema structures and multi...

  15. A Foundation for Spatial Data Warehouses on the Semantic Web
    DEFF Research Database (Denmark)

    Gur, Nurefsan; Pedersen, Torben Bach; Zimányi, Esteban

    2017-01-01

    Large volumes of geospatial data are being published on the Semantic Web (SW), yielding a need for advanced analysis of such data. However, existing SW technologies only support advanced analytical concepts such as multidimensional (MD) data warehouses and Online Analytical Processing (OLAP) over...

  16. On sustainable operation of warehouse order picking systems

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.

    2009-01-01

    Sustainable development calls for an efficient utilization of natural and human resources. This issue also arises for warehouse systems, where typically extensive capital investment and labor intensive work are involved. It is therefore important to assess and continuously monitor the performance of

  17. A framework for designing a healthcare outcome data warehouse.

    Science.gov (United States)

    Parmanto, Bambang; Scotch, Matthew; Ahmad, Sjarif

    2005-09-06

    Many healthcare processes involve a series of patient visits or a series of outcomes. The modeling of outcomes associated with these types of healthcare processes is different from and not as well understood as the modeling of standard industry environments. For this reason, the typical multidimensional data warehouse designs that are frequently seen in other industries are often not a good match for data obtained from healthcare processes. Dimensional modeling is a data warehouse design technique that uses a data structure similar to the easily understood entity-relationship (ER) model but is sophisticated in that it supports high-performance data access. In the context of rehabilitation services, we implemented a slight variation of the dimensional modeling technique to make a data warehouse more appropriate for healthcare. One of the key aspects of designing a healthcare data warehouse is finding the right grain (scope) for different levels of analysis. We propose three levels of grain that enable the analysis of healthcare outcomes from highly summarized reports on episodes of care to fine-grained studies of progress from one treatment visit to the next. These grains allow the database to support multiple levels of analysis, which is imperative for healthcare decision making.

  18. TRUNCATULIX – a data warehouse for the legume community

    Directory of Open Access Journals (Sweden)

    Runte Kai J

    2009-02-01

    Full Text Available Abstract Background Databases for either sequence, annotation, or microarray experiment data are extremely beneficial to the research community, as they centrally gather information from experiments performed by different scientists. However, data from different sources develop their full capacities only when combined. The idea of a data warehouse directly addresses this problem and solves it by integrating all required data into one single database – hence there are already many data warehouses available to genetics. For the model legume Medicago truncatula, there is currently no such single data warehouse that integrates all freely available gene sequences, the corresponding gene expression data, and annotation information. Thus, we created the data warehouse TRUNCATULIX, an integrative database of Medicago truncatula sequence and expression data. Results The TRUNCATULIX data warehouse integrates five public databases for gene sequences and gene annotations, as well as a database for microarray expression data covering raw data, normalized datasets, and complete expression profiling experiments. It can be accessed via an AJAX-based web interface using a standard web browser. For the first time, users can now quickly search for specific genes and gene expression data in a huge database based on high-quality annotations. The results can be exported as Excel, HTML, or csv files for further usage. Conclusion The integration of sequence, annotation, and gene expression data from several Medicago truncatula databases in TRUNCATULIX provides the legume community with access to data and data mining capability not previously available. TRUNCATULIX is freely available at http://www.cebitec.uni-bielefeld.de/truncatulix/.

  19. Full-text automated detection of surgical site infections secondary to neurosurgery in Rennes, France.

    Science.gov (United States)

    Campillo-Gimenez, Boris; Garcelon, Nicolas; Jarno, Pascal; Chapplain, Jean Marc; Cuggia, Marc

    2013-01-01

    The surveillance of Surgical Site Infections (SSI) contributes to the management of risk in French hospitals. Manual identification of infections is costly, time-consuming and limits the promotion of preventive procedures by the dedicated teams. The introduction of alternative methods using automated detection strategies is promising to improve this surveillance. The present study describes an automated detection strategy for SSI in neurosurgery, based on textual analysis of medical reports stored in a clinical data warehouse. The method consists, firstly, of enrichment and concept extraction from full-text reports using NOMINDEX and, secondly, of text similarity measurement using a vector space model. The text detection was compared to the conventional strategy based on self-declaration and to automated detection using the diagnosis-related group database. The text-mining approach showed the best detection accuracy, with recall and precision equal to 92% and 40% respectively, and confirmed the interest of reusing full-text medical reports to perform automated detection of SSI.
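
    The vector-space step can be illustrated as follows; the NOMINDEX concept-extraction step is omitted and plain TF-IDF over raw text is used in its place, with an invented mini-corpus and threshold.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # hypothetical reference reports known to describe an SSI
    reference_reports = [
        "purulent discharge at the surgical site, wound culture positive",
        "post-operative fever with deep incisional abscess",
    ]
    new_report = ["wound inspection shows purulent discharge and erythema"]

    vec = TfidfVectorizer()
    ref_matrix = vec.fit_transform(reference_reports)
    similarity = cosine_similarity(vec.transform(new_report), ref_matrix)
    print(similarity.max() > 0.3)  # threshold would be tuned against a gold standard
    ```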

  20. Artificial Neural Network for Total Laboratory Automation to Improve the Management of Sample Dilution.

    Science.gov (United States)

    Ialongo, Cristiano; Pieri, Massimo; Bernardini, Sergio

    2017-02-01

    Diluting a sample to obtain a measure within the analytical range is a common task in clinical laboratories. However, for urgent samples, it can cause delays in test reporting, which can put patients' safety at risk. The aim of this work is to show a simple artificial neural network that can be used to make it unnecessary to predilute a sample using the information available through the laboratory information system. Particularly, the Multilayer Perceptron neural network built on a data set of 16,106 cardiac troponin I test records produced a correct inference rate of 100% for samples not requiring predilution and 86.2% for those requiring predilution. With respect to the inference reliability, the most relevant inputs were the presence of a cardiac event or surgery and the result of the previous assay. Therefore, such an artificial neural network can be easily implemented into a total automation framework to sensibly reduce the turnaround time of critical orders delayed by the operation required to retrieve, dilute, and retest the sample.
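
    A minimal scikit-learn sketch of such a classifier follows. The two features mirror the inputs the abstract reports as most relevant (a cardiac event/surgery flag and the previous assay result), but the toy data and the network size are assumptions, not the published model.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # hypothetical LIS features: [cardiac event/surgery flag, previous result in ng/mL]
    X = np.array([[1, 35.0], [0, 0.02], [1, 28.5], [0, 0.01], [1, 0.4], [0, 52.0]])
    y = np.array([1, 0, 1, 0, 0, 1])  # 1 = sample will need predilution

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.predict([[1, 40.0]]))   # route predicted-positive samples to predilution
    ```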

  1. An Improved Data Warehouse Architecture for SPGS, MAUTECH ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-06-01

  2. The Analytic Information Warehouse (AIW): a platform for analytics using electronic health record data.

    Science.gov (United States)

    Post, Andrew R; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H

    2013-06-01

    To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in 5 years of data from our institution's clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. The Analytic Information Warehouse (AIW): a Platform for Analytics using Electronic Health Record Data

    Science.gov (United States)

    Post, Andrew R.; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H.

    2013-01-01

    Objective To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. Materials and Methods We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. Results We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in five years of data from our institution’s clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. Discussion and Conclusion A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. PMID:23402960

  4. Automating occupational protection records systems

    International Nuclear Information System (INIS)

    Lyon, M.; Martin, J.B.

    1991-10-01

    Occupational protection records have traditionally been generated by field and laboratory personnel, assembled into files in the safety office, and eventually stored in a warehouse or other facility. Until recently, these records have been primarily paper copies, often handwritten. Sometimes, the paper is microfilmed for storage. However, electronic records are beginning to replace these traditional methods. The purpose of this paper is to provide guidance for making the transition to automated record keeping and retrieval using modern computer equipment. This paper describes the types of records most readily converted to electronic record keeping and a methodology for implementing an automated record system. The process of conversion is based on a requirements analysis to assess program needs and a high level of user involvement during the development. The importance of indexing the hard copy records for easy retrieval is also discussed. The concept of linkage between related records and its importance relative to reporting, research, and litigation will be addressed. 2 figs

  5. Using Remotely Sensed Data to Automate and Improve Census Bureau Update Activities

    Science.gov (United States)

    Desch, A., IV

    2017-12-01

    Location of established and new housing structures is fundamental to the Census Bureau's planning and execution of each decennial census. Past Census address list compilation and update programs have involved sending more than 100,000 workers into the field to find and verify housing units. The 2020 Census program has introduced an imagery-based In-Office Address Canvassing Interactive Review (IOAC-IR) program in an attempt to reduce the in-field workload. The human-analyst-driven, aerial-imagery-based IOAC-IR operation has proven to be a cost-effective and accurate substitute for a large portion of the expensive in-field address canvassing operations. However, the IOAC-IR still required more than a year to complete and over 100 full-time dedicated employees. Much of the basic image analysis work done in IOAC-IR can be handled with established remote sensing and computer vision techniques. The experience gained from the Interactive Review phase of In-Office Address Canvassing has led to the development of a prototype geo-processing tool to automate much of this process for future and ongoing Address Canvassing operations. This prototype utilizes high-resolution aerial imagery and LiDAR to identify structures and compare their locations to existing Census geographic information. In this presentation, we report on a comparison of this exploratory system's results to the human-based IOAC-IR. The experimental image- and LiDAR-based change detection approach has itself led to very promising follow-on experiments utilizing very current, high-repeat datasets and scalable cloud computing. We will discuss how these new techniques can be used both to help the US Census Bureau meet its goal of identifying all the housing units in the US and to help developing countries better identify where their populations are currently distributed.

  6. Automated degenerate PCR primer design for high-throughput sequencing improves efficiency of viral sequencing

    Directory of Open Access Journals (Sweden)

    Li Kelvin

    2012-11-01

    Full Text Available Abstract Background In a high-throughput environment, to PCR amplify and sequence a large set of viral isolates from populations that are potentially heterogeneous and continuously evolving, the use of degenerate PCR primers is an important strategy. Degenerate primers allow for the PCR amplification of a wider range of viral isolates with only one set of pre-mixed primers, thus increasing amplification success rates and minimizing the necessity for genome finishing activities. To successfully select a large set of degenerate PCR primers necessary to tile across an entire viral genome and maximize their success, this process is best performed computationally. Results We have developed a fully automated degenerate PCR primer design system that plays a key role in the J. Craig Venter Institute’s (JCVI high-throughput viral sequencing pipeline. A consensus viral genome, or a set of consensus segment sequences in the case of a segmented virus, is specified using IUPAC ambiguity codes in the consensus template sequence to represent the allelic diversity of the target population. PCR primer pairs are then selected computationally to produce a minimal amplicon set capable of tiling across the full length of the specified target region. As part of the tiling process, primer pairs are computationally screened to meet the criteria for successful PCR with one of two described amplification protocols. The actual sequencing success rates for designed primers for measles virus, mumps virus, human parainfluenza virus 1 and 3, human respiratory syncytial virus A and B and human metapneumovirus are described, where >90% of designed primer pairs were able to consistently successfully amplify >75% of the isolates. Conclusions Augmenting our previously developed and published JCVI Primer Design Pipeline, we achieved similarly high sequencing success rates with only minor software modifications. The recommended methodology for the construction of the consensus
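
    The bookkeeping behind degenerate primers (how many concrete oligos one IUPAC-encoded primer represents) can be shown in a few lines; this is a generic illustration, not the JCVI pipeline itself.

    ```python
    from itertools import product

    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
             "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
             "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

    def expand_degenerate(primer):
        """Enumerate the concrete oligos encoded by a degenerate primer."""
        return ["".join(p) for p in product(*(IUPAC[b] for b in primer))]

    print(expand_degenerate("ATRCY"))       # ['ATACC', 'ATACT', 'ATGCC', 'ATGCT']
    print(len(expand_degenerate("ATRCY")))  # degeneracy = 4
    ```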

  7. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach.

    Science.gov (United States)

    Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier

    2017-07-15

    In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be highly sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with different recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing this paper, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the rest of the 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top rank (3rd position) when using only T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy of WM lesion segmentation when compared with the rest of the evaluated methods, also correlating highly (r≥0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
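
    Independent of the network architecture, the cascade logic can be sketched as follows: a sensitivity-tuned first model proposes candidate voxels at a low threshold, and a second model re-scores only those candidates at a high threshold. The stand-in models and thresholds below are placeholders, not the authors' trained 3D CNNs.

    ```python
    import numpy as np

    def cascade_segment(volume, stage1, stage2, t1=0.3, t2=0.7):
        """Two-stage cascade: stage1 is tuned for sensitivity (low threshold),
        stage2 prunes stage1's false positives (high threshold)."""
        p1 = stage1(volume)                # per-voxel lesion probability, stage 1
        candidates = p1 > t1               # keep anything remotely suspicious
        p2 = np.zeros_like(p1)
        p2[candidates] = stage2(volume, candidates)  # re-score candidates only
        return p2 > t2                     # final binary lesion mask

    # toy stand-ins for the two trained patch-wise CNNs
    stage1 = lambda v: v / v.max()
    stage2 = lambda v, m: (v[m] / v.max()) ** 0.5
    mask = cascade_segment(np.random.rand(8, 8, 8), stage1, stage2)
    ```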

  8. Understanding Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning.

    Science.gov (United States)

    Nguyen, A; Yosinski, J; Clune, J

    2016-01-01

    The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.

  9. NASA Spinoff Article: Automated Procedures To Improve Safety on Oil Rigs

    Science.gov (United States)

    Garud, Sumedha

    2013-01-01

    On May 11th, 2013, two astronauts emerged from the interior of the International Space Station (ISS) and worked their way toward the far end of the spacecraft. Over the next 5 1/2 hours, the two replaced an ammonia pump that had developed a significant leak a few days before. On the ISS, ammonia serves the vital role of cooling components; in this case, one of the station's eight solar arrays. Throughout the extravehicular activity (EVA), the astronauts stayed in constant contact with mission control: every movement, every action strictly followed a carefully planned set of procedures to maximize crew safety and the chances of success. Though the leak had come as a surprise, NASA was prepared to handle it swiftly thanks in part to the thousands of procedures that have been written to cover every aspect of the ISS's operations. The ISS is not unique in this regard: every NASA mission requires well-written procedures, or detailed lists of step-by-step instructions, that cover how to operate equipment in any scenario, from normal operations to the challenges created by malfunctioning hardware or software. Astronauts and mission control train and drill extensively in procedures to ensure they know what the proper procedures are and when they should be used. These procedures used to be written exclusively on paper, but over the past decade, NASA has transitioned to digital formats. Electronic documentation simplifies storage and use, allowing astronauts and flight controllers to find instructions more quickly and display them through a variety of media. Electronic procedures are also a crucial step toward automation: once instructions are digital, procedure display software can be designed to assist in authoring, reviewing, and even executing them.

  10. Automated innovative diagnostic, data management and communication tool, for improving malaria vector control in endemic settings.

    Science.gov (United States)

    Vontas, John; Mitsakakis, Konstantinos; Zengerle, Roland; Yewhalaw, Delenasaw; Sikaala, Chadwick Haadezu; Etang, Josiane; Fallani, Matteo; Carman, Bill; Müller, Pie; Chouaïbou, Mouhamadou; Coleman, Marlize; Coleman, Michael

    2016-01-01

    Malaria is a life-threatening disease that caused more than 400,000 deaths in sub-Saharan Africa in 2015. Mass prevention of the disease is best achieved by vector control, which heavily relies on the use of insecticides. Monitoring mosquito vector populations is an integral component of control programs and a prerequisite for effective interventions. Several individual methods are used for this task; however, there are obstacles to their uptake, as well as challenges in organizing, interpreting and communicating vector population data. The Horizon 2020 project "DMC-MALVEC" consortium will develop a fully integrated and automated multiplex vector-diagnostic platform (LabDisk) for characterizing mosquito populations in terms of species composition, Plasmodium infections and biochemical insecticide resistance markers. The LabDisk will be interfaced with a Disease Data Management System (DDMS), custom-made data management software which will collate and manage data from routine entomological monitoring activities, providing information in a timely fashion based on user needs and in a standardized way. ResistanceSim, a serious game built on a modern ICT platform that uses interactive ways of communicating guidelines and exemplifying good practice in the optimal use of interventions in the health sector, will also be a key element. The use of the tool will teach operational end users the value of quality (relevant, timely and accurate) data for making informed decisions. The integrated system (LabDisk, DDMS & ResistanceSim) will be evaluated in four malaria-endemic countries (Cameroon, Ivory Coast, Ethiopia and Zambia) that are highly representative of settings with different levels of endemicity and vector control challenges in sub-Saharan Africa, to support informed decision-making in vector control and disease management.

  11. Automated Area Beam Equalization Mammography for Improved Imaging of Dense Breasts

    National Research Council Canada - National Science Library

    Molloi, Sabee

    2005-01-01

    ...) because of degraded contrast from large scatter intensities and relatively high noise. Area x-ray beam equalization can improve image quality by increasing the x-ray exposure to underpenetrated regions without increasing the exposure to the breast regions...

  12. Data warehouse governance programs in healthcare settings: a literature review and a call to action.

    Science.gov (United States)

    Elliott, Thomas E; Holmes, John H; Davidson, Arthur J; La Chance, Pierre-Andre; Nelson, Andrew F; Steiner, John F

    2013-01-01

    Given the extensive data stored in healthcare data warehouses, data warehouse governance policies are needed to ensure data integrity and privacy. This review examines the current state of the data warehouse governance literature as it applies to healthcare data warehouses, identifies knowledge gaps, provides recommendations, and suggests approaches for further research. A comprehensive literature search using five databases, journal article title searches, and citation searches was conducted between 1997 and 2012. Data warehouse governance documents from two healthcare systems in the USA were also reviewed. A modified version of nine components from the Data Governance Institute Framework for data warehouse governance guided the qualitative analysis. Fifteen articles were retrieved. Only three were related to healthcare settings, each of which addressed only one of the nine framework components. Of the remaining 12 articles, 10 addressed between one and seven framework components and the remainder addressed none. Each of the two data warehouse governance plans obtained from healthcare systems in the USA addressed a subset of the framework components, and between them they covered all nine. While published data warehouse governance policies are rare, the 15 articles and two healthcare organizational documents reviewed in this study may provide guidance for creating such policies. Additional research is needed in this area to ensure that data warehouse governance policies are feasible and effective. The gap between the development of data warehouses in healthcare settings and formal governance policies is substantial, as evidenced by the sparse literature in this domain.

  13. Computational intelligence for qualitative coaching diagnostics: Automated assessment of tennis swings to improve performance and safety

    OpenAIRE

    Bačić, Boris; Hume, Patria

    2017-01-01

    Coaching technology, wearables and exergames can provide quantitative feedback based on measured activity, but there is little evidence of qualitative feedback to aid technique improvement. To achieve personalised qualitative feedback, we demonstrated a proof-of-concept prototype combining kinesiology and computational intelligence that could help improving tennis swing technique. Three-dimensional tennis motion data were acquired from multi-camera video (22 backhands and 21 forehands, includ...

  14. Optimization of warehouse location through fuzzy multi-criteria decision making methods

    Directory of Open Access Journals (Sweden)

    C. L. Karmaker

    2015-07-01

    Full Text Available The strategic warehouse location-allocation problem is a multi-stage decision-making problem with both numerical and qualitative criteria. To survive in the global business scenario by improving supply chain performance, companies must examine the cross-functional drivers in the optimization of logistic systems. Strategic warehouse location selection becomes more challenging as the number of alternatives and conflicting criteria increases, and it is particularly problematic when conventional methods are applied to the imprecise nature of linguistic assessments, since qualitative judgments are often imprecise for the decision makers. Fuzzy multi-criteria decision-making methods are used in this research as aids in making location-allocation decisions. The proposed method consists of two core steps. In the first step, the criteria of the problem are identified, and the weights of the sectors and subsectors are determined using Fuzzy AHP. In the second step, eligible alternatives are ranked comparatively using TOPSIS and Fuzzy TOPSIS. A demonstration of the application of these methodologies to a real-life problem is presented.
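
    The second (TOPSIS) step can be sketched in its crisp form as below; the decision matrix, weights, and criterion directions are invented, and the fuzzy variants used in the paper are omitted.

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives (rows) against criteria (columns)."""
        m = matrix / np.linalg.norm(matrix, axis=0)    # vector-normalize columns
        v = m * weights                                # weighted normalized matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)      # distance to ideal solution
        d_neg = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal
        return d_neg / (d_pos + d_neg)                 # closeness coefficient

    # three candidate sites x three criteria: cost, proximity, labor availability
    scores = topsis(np.array([[200.0, 7.0, 8.0], [170.0, 5.0, 6.0], [240.0, 9.0, 9.0]]),
                    weights=np.array([0.5, 0.3, 0.2]),      # e.g. from (fuzzy) AHP
                    benefit=np.array([False, True, True]))  # cost is to be minimized
    print(scores.argsort()[::-1])                           # best-ranked site first
    ```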

  15. PROCESS APPROACH IN MANAGEMENT OF ACTIVITY LOGISTIC OPERATOR OF WAREHOUSE SERVICES

    Directory of Open Access Journals (Sweden)

    Goryacheva Natalia Evgenievna

    2013-03-01

    Full Text Available In the present article, methodological and practical aspects of improving the activity of a logistic operator of warehouse services on the basis of a process approach to the management of the organization are considered. The object of the research is to justify the necessity of a process approach in the logistic operator's activity, to develop an algorithm for its work on clients' business processes, and to test a technique for assessing logistics service efficiency. The scientific novelty of this article is that models of warehouse logistics business processes are developed for the first time; the processes are algorithmized and standardized, and indicators for assessing the efficiency of the rendered logistic service are developed. The use of a process approach gives a logistics operator powerful advantages over competitors. The increase in the efficiency of the commercial activity of the logistic operator (LLC Company «Logy Log», Nizhny Novgorod) after the transition in management from functions to processes fully confirms this statement. Consolidation of the logistic operator's positions in different functional areas becomes an incentive for the development of logistics and, as a result, for a reduction of costs in delivering industrial freight and consumer goods to the final user.

  16. PROCESS APPROACH IN MANAGEMENT OF ACTIVITY LOGISTIC OPERATOR OF WAREHOUSE SERVICES

    Directory of Open Access Journals (Sweden)

    Наталья Евгеньевна Горячева

    2013-04-01

    Full Text Available In the present article, methodological and practical aspects of improving the activity of a logistic operator of warehouse services on the basis of a process approach to the management of the organization are considered. The object of the research is to justify the necessity of a process approach in the logistic operator's activity, to develop an algorithm for its work on clients' business processes, and to test a technique for assessing logistics service efficiency. The scientific novelty of this article is that models of warehouse logistics business processes are developed for the first time; the processes are algorithmized and standardized, and indicators for assessing the efficiency of the rendered logistic service are developed. The use of a process approach gives a logistics operator powerful advantages over competitors. The increase in the efficiency of the commercial activity of the logistic operator (LLC Company «Logy Log», Nizhny Novgorod) after the transition in management from functions to processes fully confirms this statement. Consolidation of the logistic operator's positions in different functional areas becomes an incentive for the development of logistics and, as a result, for a reduction of costs in delivering industrial freight and consumer goods to the final user. DOI: http://dx.doi.org/10.12731/2218-7405-2013-3-30

  17. The ergonomics body posture on repetitive and heavy lifting activities of workers in aerospace manufacturing warehouse

    Science.gov (United States)

    Kamat, S. R.; Zula, N. E. N. Md; Rayme, N. S.; Shamsuddin, S.; Husain, K.

    2017-06-01

    The warehouse is an important entity in manufacturing organizations. Warehouse work usually involves activities with ergonomic risk factors, including repetitive and heavy lifting. Aerospace manufacturing workers are prone to musculoskeletal disorder (MSD) problems because of these manual handling activities, and questionnaire responses indicate that workers experience discomfort during manual handling work. Thus, the objectives of this study are: to investigate the body postures of workers performing the repetitive and heavy lifting activities that cause MSD problems and to analyze the associated levels of discomfort; and to suggest proper body postures and alternatives to reduce MSD-related problems. The methodology of this study involves interviews, questionnaire distribution, anthropometry measurements, the RULA (Rapid Upper Limb Assessment) assessment sheet with CATIA V5 RULA analysis, and the NIOSH lifting index (LI) and recommended weight limit (RWL). Ten workers were selected for the pilot study, and all workers in the warehouse department were involved in the anthropometry measurements. In the first pilot study, the RULA assessment in CATIA V5 gave the highest possible score of 7 for all postures; after improvement of the working postures the scores were very low, although the weights of the materials handled were still not within recommended limits. To reduce the risk of MSD through the improvement of working postures, the weight limit is also calculated in order to obtain an RWL for each worker. Therefore, proposing a guideline for aerospace workers involved in repetitive movement and excessive lifting will help reduce the risk of developing MSD.
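
    The recommended weight limit and lifting index mentioned here follow the standard revised NIOSH lifting equation; a sketch with metric constants is shown below, with the frequency and coupling multipliers left as inputs (they come from NIOSH lookup tables) and an invented task geometry.

    ```python
    def niosh_rwl(H, V, D, A, FM=1.0, CM=1.0):
        """Revised NIOSH lifting equation (metric): H, V, D in cm, A in degrees."""
        LC = 23.0                       # load constant, kg
        HM = 25.0 / H                   # horizontal multiplier
        VM = 1 - 0.003 * abs(V - 75)    # vertical multiplier
        DM = 0.82 + 4.5 / D             # distance multiplier
        AM = 1 - 0.0032 * A             # asymmetric multiplier
        return LC * HM * VM * DM * AM * FM * CM

    load = 18.0                          # kg actually lifted (invented)
    rwl = niosh_rwl(H=40, V=60, D=50, A=30)
    print(rwl, load / rwl)               # lifting index LI > 1 flags MSD risk
    ```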

  18. Improvement and multicenter evaluation of the analytical performance of an automated chemiluminescent immunoassay for alpha fetoprotein

    NARCIS (Netherlands)

    Morota, Kaori; Komori, Makoto; Fujinami, Ryo; Yamada, Koji; Kuribayashi, Kageaki; Watanabe, Naoki; Sokoll, Lori J.; Elliott, Debra; Chan, Daniel W.; Martens, Frans; Heijboer, Annemieke C.; Blankenstein, Marinus A.; Hershberger, Stefan J.; Pfeiffer, Zachary A.; Vaidya, Shyam V.; Dowell, Barry L.

    2012-01-01

    A new ARCHITECT® alpha fetoprotein (AFP) assay was developed to improve the linearity at the upper end of the calibration curve and to enhance other performance characteristics. In addition, this reformulation eliminated the possibility of falsely depressed samples at high AFP concentrations. The

  19. Automated Area Beam Equalization Mammography for Improved Imaging of Dense Breast

    National Research Council Canada - National Science Library

    Molloi, Sabee

    2004-01-01

    ...) because of degraded contrast from large scatter intensities and relatively high noise. Area x-ray beam equalization can improve image quality by increasing the x-ray exposure to under-penetrated regions without increasing the exposure to other breast regions...

  20. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    Science.gov (United States)

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    This paper proposes a semi-automatic approach to extract clinical information structured in a HL7 Clinical Document Architecture (CDA) and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework, published in a previous work, that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated by a case study based on the analysis of vital signs gathered during laboratory tests.
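
    A toy sketch of the mapping idea: one heavily simplified, namespace-free CDA-like observation is turned into dimensional-model primitives, i.e. a fact measure plus dimension keys. Real CDA documents use XML namespaces and far richer structure; the element names here only echo CDA conventions.

    ```python
    import xml.etree.ElementTree as ET

    # hypothetical, simplified CDA fragment: one vital-sign observation
    cda = """<observation>
      <code code="8480-6" displayName="Systolic blood pressure"/>
      <effectiveTime value="20150301"/>
      <value value="128" unit="mm[Hg]"/>
    </observation>"""

    def cda_to_fact(xml_text):
        """Map an observation onto star-schema primitives: code -> measure
        dimension, effectiveTime -> time dimension, value -> fact measure."""
        obs = ET.fromstring(xml_text)
        return {
            "measure_key": obs.find("code").get("code"),
            "time_key": obs.find("effectiveTime").get("value"),
            "value": float(obs.find("value").get("value")),
            "unit": obs.find("value").get("unit"),
        }

    print(cda_to_fact(cda))
    ```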

  1. Aspects of Data Warehouse Technologies for Complex Web Data

    OpenAIRE

    Thomsen, Christian

    2008-01-01

    This thesis is about aspects of specification and development of data warehouse technologies for complex web data. Today, large amounts of data exist in different web resources and in different formats. But it is often hard to analyze and query the often big and complex data or data about the data (i.e., metadata). It is therefore interesting to apply Data Warehouse (DW) technology to the data. But to apply DW technology to complex web data is not straightforward and the DW community faces new and ...

  2. Improved plan quality with automated radiotherapy planning for whole brain with hippocampus sparing: a comparison to the RTOG 0933 trial.

    Science.gov (United States)

    Krayenbuehl, J; Di Martino, M; Guckenberger, M; Andratschke, N

    2017-10-02

    Whole-brain radiation therapy (WBRT) with hippocampus sparing (HS) has been investigated in the Radiation Therapy Oncology Group (RTOG) 0933 trial for patients with multiple brain metastases, which showed a decrease of adverse neurocognitive effects with HS WBRT compared to WBRT alone. With the development of automated treatment planning systems (aTPS) in recent years, plan quality can be standardized at a high level. The goal of this study was to evaluate the feasibility of using an aTPS for the treatment of HS WBRT and to see whether the RTOG 0933 dose constraints could be achieved and improved upon. Ten consecutive patients treated with HS WBRT were enrolled in this study. 10 × 3 Gy was prescribed according to the RTOG 0933 protocol to 92% of the target volume (whole brain excluding the hippocampus expanded by 5 mm in three dimensions). In contrast to RTOG 0933, the maximum allowed point dose to normal brain was significantly lowered and restricted to 36.5 Gy. All patients were planned with the volumetric modulated arc therapy (VMAT) technique using four arcs. Plans were optimized using Auto-Planning (AP) (Philips Radiation Oncology Systems) with one single AP template and optimization. All the constraints from the RTOG 0933 trial were achieved. A significant improvement for the maximal dose to 2% of the brain was achieved, with a reduction of 4 Gy (33.5 Gy vs. RTOG 37.5 Gy), and the minimum hippocampus dose was reduced by 10% (8.1 Gy vs. RTOG 9 Gy). A steep dose gradient around the hippocampus was achieved, with a mean dose of 27.3 Gy at a distance between 0.5 cm and 1 cm from the hippocampus. The effective working time to optimize a plan was kept below 6 minutes. Automated treatment planning for HS WBRT was able to fulfil all the recommendations from the RTOG 0933 study while significantly improving dose homogeneity and decreasing unnecessary hot spots in the normal brain. With this approach, a standardization of plan quality was achieved and the effective

  3. DATA WAREHOUSE DESIGN TO SUPPORT UNIVERSITY MARKETING PLANNING

    Directory of Open Access Journals (Sweden)

    agung prasetyo

    2017-02-01

    Full Text Available One indication of a large university is the number of its students. New students are therefore one of the resources that determine the running of a university. Every year STMIK AMIKOM Purwokerto admits prospective new students, and the data on these new students are very useful to the marketing department as information for evaluating subsequent marketing activities. With the construction of a data warehouse and an OLAP application, using Pentaho Data Integration/Kettle as the ETL tool and Pentaho Workbench as the Online Analytical Processing (OLAP) database processor, management at STMIK AMIKOM Purwokerto can retrieve information such as the number of applicants per period/intake, by school of origin, by the source of information through which prospective new students learned of the institution, and trends of interest in the study programmes chosen by prospective new students. The data warehouse is able to analyse transaction data, deliver dynamic reports and provide information in various dimensions about new student admission at STMIK AMIKOM Purwokerto. Keywords: Data Warehouse, OLAP, Pentaho, Student Admission.
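
    Purely as an illustration of the kind of roll-up such an OLAP layer serves (applicants per admission period and school of origin), a minimal sketch against an invented star schema:

      # Hedged sketch: applicants per period and school of origin.
      # Schema and figures are invented for illustration only.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE dim_period (period_key INTEGER PRIMARY KEY, period TEXT);
      CREATE TABLE dim_school (school_key INTEGER PRIMARY KEY, school TEXT);
      CREATE TABLE fact_applicant (period_key INTEGER, school_key INTEGER,
                                   applicants INTEGER);
      INSERT INTO dim_period VALUES (1, '2016/1'), (2, '2016/2');
      INSERT INTO dim_school VALUES (1, 'School A'), (2, 'School B');
      INSERT INTO fact_applicant VALUES (1, 1, 120), (1, 2, 80), (2, 1, 95);
      """)
      query = """
          SELECT p.period, s.school, SUM(f.applicants)
          FROM fact_applicant f
          JOIN dim_period p ON p.period_key = f.period_key
          JOIN dim_school s ON s.school_key = f.school_key
          GROUP BY p.period, s.school
      """
      for row in con.execute(query):
          print(row)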

  4. Unexpected levels and movement of radon in a large warehouse

    International Nuclear Information System (INIS)

    Gammage, R.B.; Espinosa, G.

    2004-01-01

    Alpha-track detectors, used in screening for radon, identified a large warehouse with radon levels as high as 20 pCi/L. This circumstance was unexpected because large bay doors were left open for much of the day to admit 18-wheeler trucks, and exhaust fans in the roof produced good ventilation. More detailed temporal and spatial investigations of radon and air-flow patterns were made with electret chambers, Lucas-cell flow chambers, tracer gas, smoke pencils and pressure-sensing micromanometers. An oval, dome-shaped zone of radon (>4 pCi/L) persisted in the central region of each of the four separate bays composing the warehouse. Detailed studies of air movement in the bay with the highest levels of radon showed clockwise rotation of air near the outer walls with a central dead zone. Subslab, radon-laden air enters the building through expansion joints between the floor slabs to produce the measured radon. The likely source of the radon is air within the porous karst bedrock that underlies much of north-central Tennessee, where the warehouse is situated

  5. Warehouse hazardous and toxic waste design in Karingau Balikpapan

    Science.gov (United States)

    Pratama, Bayu Rendy; Kencanawati, Martheana

    2017-11-01

    PT. Balikpapan Environmental Services (PT. BES) is a company whose core business is hazardous and toxic waste management services, comprising storage and transport, at Balikpapan. This research started with data collection covering type of waste, quantity of waste, dimensions of the existing building, and waste packaging (drum, IBC tank, wooden box, and bulk bag). The data processing comprises a redesign of the warehouse dimensions and the layout of waste positions; specification of capacity; specification of the quantity, type and placement of detectors; and specification of the quantity, type and position of fire extinguishers, referring to Bapedal Regulation No. 01 of 1995, SNI 03-3985-2000, and Employee Minister Regulation RI No. Per-04/Men/1980. Based on the research, the designed warehouse dimensions are 23 m × 22 m × 5 m, with the waste layout arranged according to waste type. The design requires 56 detectors. The appropriate fire extinguisher type for this design is dry powder containing sodium carbonate and alkali salts, with about 18 units of 12 kg each.

  6. THE DEVELOPMENT OF THE APPLICATION OF A DATA WAREHOUSE AT PT JKL

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2012-05-01

    Full Text Available One rapidly evolving technology today is information technology, which can help decision-making in an organization or a company. The data warehouse is one form of information technology that supports those needs and is a fitting solution for companies in decision-making. The objective of this research is the development of a data warehouse at PT JKL in order to support executives in analyzing the organization and to support the decision-making process. The methodology of this research comprises interviews with the related units, a literature study and document examination. This research also used the Nine-Step Methodology developed by Kimball to design the data warehouse. The result obtained is a data warehouse application that summarizes, integrates and presents historical data multidimensionally. The conclusion from this research is that the data warehouse can help companies analyze data with flexible, fast, and effective data access. Keywords: Data Warehouse; Inventory; Contract Approval; Dashboard

  7. Improving Dialysis Adherence for High Risk Patients Using Automated Messaging: Proof of Concept

    OpenAIRE

    Som, A.; Groenendyk, J.; An, T.; Patel, K.; Peters, R.; Polites, G.; Ross, W. R.

    2017-01-01

    Comorbidities and socioeconomic barriers often limit patient adherence and self-management with hemodialysis. Missed sessions, often associated with communication barriers, can result in emergency dialysis and avoidable hospitalizations. This proof of concept study explored using a novel digital-messaging platform, EpxDialysis, to improve patient-to-dialysis center communication via widely available text messaging and telephone technology. A randomized controlled trial was conducted through W...

  8. Improved automation of dissolved organic carbon sampling for organic-rich surface waters.

    Science.gov (United States)

    Grayson, Richard P; Holden, Joseph

    2016-02-01

    In-situ UV-Vis spectrophotometers offer the potential for improved estimates of dissolved organic carbon (DOC) fluxes for organic-rich systems such as peatlands because they are able to sample and log DOC proxies automatically through time at low cost. In turn, this could enable improved total carbon budget estimates for peatlands. The ability of such instruments to accurately measure DOC depends on a number of factors, not least of which is how absorbance measurements relate to DOC and the environmental conditions. Here we test the ability of a S::can Spectro::lyser™ for measuring DOC in peatland streams with routinely high DOC concentrations. Through analysis of the spectral response data collected by the instrument we have been able to accurately measure DOC up to 66 mg L(-1), which is more than double the original upper calibration limit for this particular instrument. A linear regression modelling approach resulted in an accuracy >95%. The greatest accuracy was achieved when absorbance values for several different wavelengths were used at the same time in the model. However, an accuracy >90% was achieved using absorbance values for a single wavelength to predict DOC concentration. Our calculations indicated that, for organic-rich systems, in-situ measurement with a scanning spectrophotometer can improve fluvial DOC flux estimates by 6 to 8% compared with traditional sampling methods. Thus, our techniques pave the way for improved long-term carbon budget calculations from organic-rich systems such as peatlands. Copyright © 2015 Elsevier B.V. All rights reserved.
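
    The calibration idea (ordinary least squares from absorbance at several wavelengths to DOC concentration) can be sketched as follows; the spectra here are synthetic, whereas a real calibration would pair laboratory DOC measurements with the instrument's absorbance scans.

      # Hedged sketch: multi-wavelength linear regression for DOC, on
      # synthetic data standing in for paired lab/instrument measurements.
      import numpy as np

      rng = np.random.default_rng(0)
      doc = rng.uniform(5.0, 66.0, size=40)            # lab DOC values (mg/L)
      A = np.column_stack([doc * k + rng.normal(0.0, 0.5, 40)
                           for k in (0.02, 0.015, 0.01)])  # 3 wavelengths

      X = np.column_stack([np.ones_like(doc), A])      # intercept + absorbances
      coef, *_ = np.linalg.lstsq(X, doc, rcond=None)
      pred = X @ coef
      r2 = 1.0 - np.sum((doc - pred) ** 2) / np.sum((doc - doc.mean()) ** 2)
      print(f"R^2 = {r2:.3f}")                         # close to 1 here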

  9. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes

    OpenAIRE

    Koziolek, Anne

    2013-01-01

    Quality attributes, such as performance or reliability, are crucial for the success of a software system and largely influenced by the software architecture. Their quantitative prediction supports systematic, goal-oriented software design and forms a base of an engineering approach to software design. This thesis proposes a method and tool to automatically improve component-based software architecture (CBA) models based on such quantitative quality prediction techniques.

  10. Investigation and survey of occasions when humans saved and improved a situation where the automation was insufficient or failed

    International Nuclear Information System (INIS)

    Lackman, Tomas

    2011-01-01

    In many systems an increased level of automation implies an altered role for the human. Behind the introduction of new automation lie different automation philosophies, which stretch from using as much automation as possible to adding automation only as a support to human tasks in specific situations. The Swedish Radiation Safety Authority has assigned AaF-engineering to describe the current automation philosophies within the nuclear industry. The assignment also includes a survey of events in which human involvement was necessary to save a situation in which the automation was not sufficient, together with an analysis of these events focusing on which automation philosophy/level of automation is most appropriate for obtaining a high level of safety. The report describes three events in which human involvement was crucial for the successful outcome of the situation: Spain 1989, Switzerland 1996, and Forsmark, Sweden 2006. These events show that the human is one of the most vital parts of the defence in depth; hence a strong focus should be given to looking after and maintaining human abilities, so that operators are able to act safely in such situations. The events also show potential enhancement of the defence in depth by making the most of the unique human abilities of intuitive and creative thinking and of acting without access to external sources of power or prearranged procedures. These abilities are affected by the level of automation: a too high level of automation can lead to a lack of situation awareness, whilst a too low level can lead to too high mental workload for the operators. To avoid degradation of the human ability to intervene safely, changes in automation levels at nuclear power plants should always be preceded by an analysis of their effect on the human in the situation at hand. In order to better determine the efficiency of existing methods for assessing the effects of automation on human

  11. Improved automated production of 18F-FMISO and its tumor hypoxia imaging by Micro-PET/CT

    International Nuclear Information System (INIS)

    Wang Mingwei; Zhang Yongping; Zheng Yujia; Bao Xiao; Zheng Yingjian

    2013-01-01

    Background: 1H-1-(3-[18F]fluoro-2-hydroxypropyl)-2-nitroimidazole (18F-FMISO) is a specific molecular imaging probe for tumor hypoxia imaging, and its PET/CT imaging has an important clinical value for planning cancer radiotherapy target volumes. Purpose: This study aimed to develop an improved, automated production of 18F-FMISO and to perform Micro-PET/CT imaging of tumor hypoxia. Methods: Based on the labeling precursor NITTP and a simple 'one-pot' method, an upgraded Explora GN module together with an Explora LC was adopted to run radiofluorination (NITTP (10 mg), MeCN (1.0 mL), 120 ℃, 5.0 min), hydrolysis (HCl (1.0 mol/L, 1.0 mL), 130 ℃, 8.0 min) and high performance liquid chromatography (HPLC) purification to produce 18F-FMISO automatically. Radio-HPLC and Radio-TLC were applied for quality control, and a Micro-PET/CT scanner was used for hypoxia imaging of SW1990 pancreatic tumor-bearing mice. Results: 18F-FMISO was obtained with a synthesis time of about 65 min, a radiochemical yield of (30±5.0)% (not decay-corrected, n=20), a radiochemical purity above 99%, and a specific activity of (2.04±0.17)×10¹¹ Bq·μmol⁻¹, together with enhanced chemical purity. Moreover, Micro-PET/CT imaging showed that 18F-FMISO distributed throughout the body in SW1990 tumor-bearing mice, and the optimal time point for tumor hypoxia imaging was 3 h post injection, with a tumor-to-muscle uptake ratio of 3.00±0.08. Conclusion: In sum, we developed an improved, automated production of 18F-FMISO with HPLC purification, high radiochemical yield, high specific activity and high reliability, and also verified its Micro-PET/CT imaging of tumor hypoxia, providing experimental reference data. (authors)

  12. 7 CFR 735.401 - Electronic warehouse receipt and USWA electronic document providers.

    Science.gov (United States)

    2010-01-01

    ... audit level financial statement prepared according to generally accepted accounting standards as defined... warehouse receipt requirements; (3) Liability; (4) Transfer of records protocol; (5) Records; (6) Conflict...

  13. Two warehouse inventory model for deteriorating item with exponential demand rate and permissible delay in payment

    Directory of Open Access Journals (Sweden)

    Kaliraman Naresh Kumar

    2017-01-01

    Full Text Available A two-warehouse inventory model for deteriorating items is considered, with an exponential demand rate and permissible delay in payment. Shortage is not allowed and the deterioration rate is constant. In the model, one warehouse is rented and the other is owned. The rented warehouse provides a better storage facility than the owned warehouse, but is charged more. The objective of this model is to find the best replenishment policy for minimizing the total relevant inventory cost. A numerical illustration and a sensitivity analysis are provided.
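
    A toy numeric sketch of the kind of cost minimisation such two-warehouse models perform follows; the cost components are deliberately simplified stand-ins (a fixed ordering cost, a crude deterioration mark-up, and a higher holding cost above the owned-warehouse capacity), not the paper's formulation.

      # Hedged toy sketch: pick a replenishment cycle T minimising a
      # simplified cost rate with exponential demand a*exp(b*t), constant
      # deterioration theta, and a rented warehouse (RW) charged more than
      # the owned warehouse (OW) above capacity W. NOT the paper's model.
      import math

      a, b, theta = 100.0, 0.05, 0.02    # demand and deterioration parameters
      W, K = 150.0, 500.0                # OW capacity, fixed ordering cost
      h_ow, h_rw = 1.0, 2.5              # holding cost rates (RW costs more)

      def cycle_cost_rate(T, steps=1000):
          dt = T / steps
          demand = sum(a * math.exp(b * i * dt) * dt for i in range(steps))
          lot = demand * math.exp(theta * T)     # crude deterioration mark-up
          avg_stock = lot / 2.0                  # crude average inventory
          holding = (h_ow * min(avg_stock, W)
                     + h_rw * max(avg_stock - W, 0.0)) * T
          return (K + holding) / T               # cost per unit time

      best_T = min((t / 100.0 for t in range(10, 500)), key=cycle_cost_rate)
      print(f"best cycle ~ {best_T:.2f}, "
            f"cost rate ~ {cycle_cost_rate(best_T):.1f}")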

  14. Transaction Outlier Detection Using OLAP Visualization on a Private University Data Warehouse

    Directory of Open Access Journals (Sweden)

    Gusti Ngurah Mega Nata

    2016-07-01

    Full Text Available Detecting outliers in a data warehouse is important. Data in a data warehouse have been aggregated and have a multidimensional model. Aggregation is performed because the data warehouse is used by top-level management to analyse data quickly, while the multidimensional data model is used to view data from the various dimensions of the business objects. Detecting outliers in a data warehouse therefore requires techniques that can find outliers in aggregated data and view them from various business-object dimensions, which makes it a new challenge. On the other hand, On-line Analytical Processing (OLAP) visualization is an important task in presenting trend information (reports) from a data warehouse in the form of data visualizations. In this study, OLAP visualization is used to detect transaction outliers, using the drill-down OLAP operation. The visualization types used are one-dimensional, two-dimensional and multidimensional visualizations built with the Weave Desktop tool. The data warehouse was built bottom-up. The case study was conducted at a private university, and the case solved is detecting outliers in student payment transactions in each semester. Outlier detection on data visualized from one dimension table is easier to analyse than outlier detection on data visualized from two or more dimension tables; in other words, the more dimension tables involved, the more difficult the outlier-detection analysis. Keywords: Outlier Detection, OLAP Visualization, Data Warehouse
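
    As a minimal illustration of the payment-outlier case solved in the study, a simple interquartile-range rule applied to aggregated per-student semester totals (all figures invented):

      # Hedged sketch: flag outlier payment totals per student with an IQR
      # rule, after the kind of drill-down aggregation the OLAP layer does.
      import statistics

      payments = {  # (student, semester) -> aggregated payment total
          ("S1", 1): 900, ("S2", 1): 905, ("S3", 1): 910, ("S4", 1): 925,
          ("S5", 1): 930, ("S6", 1): 940, ("S7", 1): 950, ("S8", 1): 4500,
      }
      q1, _, q3 = statistics.quantiles(sorted(payments.values()), n=4)
      fence = 1.5 * (q3 - q1)
      outliers = {k: v for k, v in payments.items()
                  if v < q1 - fence or v > q3 + fence}
      print(outliers)   # -> {('S8', 1): 4500}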

  15. Importance of relationship quality in the success of data warehouse systems

    Science.gov (United States)

    Almabhouh, Alaaeddin; Saleh, Abdul; Ahmad, Azizah

    2011-10-01

    Increased organizational dependence on data warehouse (DW) systems drives management attention towards improving DW systems' success. However, the successful implementation rate of DW systems is low and many firms have not achieved their intended goals. Recent studies show that improving and evaluating DW success is one of the top concerns facing IT/DW executives. Existing information system (IS) research has studied DW success mostly from the perspectives of information quality and system quality. In this study we argue that relationship quality should also be considered, which has significant research and practical implications in that it connects to IS success directly. As a first attempt, this study, referring to both IS and marketing literature, examines how communication, coordination, cooperation, commitment, and trust can be achieved to some degree by high-quality relationships between DW parties.

  16. Use of automated rendezvous trajectory planning to improve spacecraft operations efficiency

    Science.gov (United States)

    Mulder, Tom A.

    1991-01-01

    The current planning process for space shuttle rendezvous with a second Earth-orbiting vehicle is time consuming and costly. It is a labor-intensive, manual process performed pre-mission with the aid of specialized maneuver processing tools. Real-time execution of a rendezvous plan must closely follow a predicted trajectory, and targeted solutions leading up to the terminal phase are computed on the ground. Despite over 25 years of Gemini, Apollo, Skylab, and shuttle vehicle-to-vehicle rendezvous missions flown to date, rendezvous in Earth orbit still requires careful monitoring and cannot be taken for granted. For example, a significant trajectory offset was experienced during terminal phase rendezvous of the STS-32 Long Duration Exposure Facility retrieval mission. Several improvements can be introduced to the present rendezvous planning process to reduce costs, produce more fuel-efficient profiles, and increase the probability of mission success.

  17. Automation of CT-based haemorrhagic stroke assessment for improved clinical outcomes: study protocol and design.

    Science.gov (United States)

    Chinda, Betty; Medvedev, George; Siu, William; Ester, Martin; Arab, Ali; Gu, Tao; Moreno, Sylvain; D'Arcy, Ryan C N; Song, Xiaowei

    2018-04-19

    scientists, computing scientists and clinical professionals in neurology and neuroradiology and includes patient representatives. Research outputs will be disseminated following knowledge translation plans towards improving stroke patient care. Significant findings will be published in scientific journals. Anticipated deliverables include computer solutions for improved clinical assessment of haematoma using NCCT. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. Improving Radiology Workflow with Automated Examination Tracking and Alerts.

    Science.gov (United States)

    Pianykh, Oleg S; Jaworsky, Christina; Shore, M T; Rosenthal, Daniel I

    2017-07-01

    The modern radiology workflow is a production line where imaging examinations pass in sequence through many steps. In busy clinical environments, even a minor delay in any step can propagate through the system and significantly lengthen the examination process. This is particularly true for the tasks delegated to the human operators, who may be distracted or stressed. We have developed an application to track examinations through a critical part of the workflow, from the image-acquisition scanners to the PACS archive. Our application identifies outliers and actively alerts radiology managers about the need to resolve these problems as soon as they happen. In this study, we investigate how this real-time tracking and alerting affected the speed of examination delivery to the radiologist. We demonstrate that active alerting produced a 3-fold reduction of examination-to-PACS delays. Additionally, we discover an overall improvement in examination-to-PACS delivery, evidence that the tracking and alerts instill a culture where timely processing is essential. By providing supervisors with information about exactly where delays emerge in their workflow and alerting the correct staff to take action, applications like ours create more robust radiology workflow with predictable, timely outcomes. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
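
    The tracking-and-alert idea can be sketched in a few lines; the 20-minute threshold and the alert channel below are assumptions, not the deployed application's settings.

      # Hedged sketch: compute acquisition-to-PACS delay per examination
      # and alert when it exceeds a threshold (threshold/channel assumed).
      from datetime import datetime, timedelta

      THRESHOLD = timedelta(minutes=20)

      exams = [  # (exam_id, acquired_at, archived_at or None if pending)
          ("EX001", datetime(2017, 3, 1, 9, 0), datetime(2017, 3, 1, 9, 12)),
          ("EX002", datetime(2017, 3, 1, 9, 5), None),  # stuck before PACS
      ]

      def check_delays(now):
          for exam_id, acquired, archived in exams:
              delay = (archived or now) - acquired
              if delay > THRESHOLD:
                  # in production this would notify the responsible manager
                  print(f"ALERT {exam_id}: {delay} from acquisition to PACS")

      check_delays(now=datetime(2017, 3, 1, 9, 40))   # flags EX002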

  19. An automated design and fabrication pipeline for improving the strength of 3D printed artifacts under tensile loading

    Science.gov (United States)

    Al, Can Mert; Yaman, Ulas

    2018-05-01

    In the scope of this study, an alternative automated method to the conventional design and fabrication pipeline of 3D printers is developed using an integrated CAD/CAE/CAM approach. It increases the load-carrying capacity of parts by constructing heterogeneous infill structures. Traditional CAM software for Additive Manufacturing machinery starts with a design model in STL file format, which only includes data about the outer boundary in triangular mesh form. Depending on the given infill percentage, the algorithm running behind it constructs the interior of the artifact using homogeneous infill structures. As opposed to current CAM software, the proposed method provides a way to construct heterogeneous infill structures with respect to the von Mises stress field obtained from a finite element analysis. Throughout the work, Rhinoceros3D is used for the design of the parts, along with Grasshopper3D, an algorithmic design tool for Rhinoceros3D. In addition, finite element analyses are performed using Karamba3D, a plug-in for Grasshopper3D. According to the results of the tensile tests, the method offers an improvement in load-carrying capacity of about 50% compared to traditional slicing algorithms for 3D printing.
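
    The core idea, denser infill where the part is more highly loaded, can be sketched as a mapping from a per-region von Mises stress to a local infill density; the linear mapping and the density bounds are assumptions, and the authors' pipeline runs inside Rhinoceros3D/Grasshopper3D on Karamba3D results.

      # Hedged sketch: map per-region von Mises stress to infill density.
      # Linear mapping and bounds are assumptions, not the paper's rule.
      def infill_density(von_mises, s_min, s_max, d_min=0.15, d_max=0.85):
          """Interpolate infill density from the stress magnitude."""
          if s_max <= s_min:
              return d_min
          t = min(max((von_mises - s_min) / (s_max - s_min), 0.0), 1.0)
          return d_min + t * (d_max - d_min)

      stresses = [2.0, 8.0, 15.0, 31.0]      # MPa per region (synthetic)
      lo, hi = min(stresses), max(stresses)
      print([round(infill_density(s, lo, hi), 2) for s in stresses])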

  20. Improving configuration management of thermalhydraulic analysis by automating the linkage between pipe geometry and plant idealization

    International Nuclear Information System (INIS)

    Gibb, R.; Girard, R.; Thompson, W.

    1997-01-01

    All safety analysis codes require some representation of actual plant data as part of their input. Such representations, referred to at Point Lepreau Generating Station (PLGS) as plant idealizations, may include piping layout, orifice, pump or valve opening characteristics, boundary conditions of various sorts, reactor physics parameters, etc. As computing power increases, the numerical capabilities of thermalhydraulic analysis tools become more sophisticated, requiring more detailed assessments, and consequently more complex and complicated idealizations of the system models. Thus, a need has emerged to create a precise plant model layout in electronic form which ensures a realistic representation of the plant systems, and from which analytical approximations of any chosen degree of accuracy may be created. The benefits of this process are twofold. Firstly, the job of developing a plant idealization is made simpler, and therefore cheaper for the utility. More important, however, are the improvements in documentation and reproducibility that this process imparts to the resultant idealization. Just as the software that performs the numerical operations on the input data must be subject to verification/validation, equally robust measures must be taken to ensure that these software operations are applied to valid idealizations that are formally documented. Since the CATHENA code is one of the most important thermalhydraulic codes used for safety analysis at PLGS, the main effort was directed towards the system plant models for this code. This paper reports the results of the work carried out at PLGS and ANSL to link the existing piping database to the actual CATHENA plant idealization. An introduction to the concept is given first, followed by a description of the databases and the supervisory tool which manages the data, and associated software. An intermediate code, which applied some thermalhydraulic rules to the data, and translated the resultant data

  1. Multi-modal and targeted imaging improves automated mid-brain segmentation

    Science.gov (United States)

    Plassard, Andrew J.; D'Haese, Pierre F.; Pallavaram, Srivatsan; Newton, Allen T.; Claassen, Daniel O.; Dawant, Benoit M.; Landman, Bennett A.

    2017-02-01

    The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. In order to manually trace these structures, a combination of high-resolution and specialized sequences at 7T is used, but it is not feasible to scan clinical patients in those scanners. Targeted imaging sequences at 3T, such as F-GATIR and other optimized inversion recovery sequences, have been presented which enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7T can be used to accurately segment these structures at 3T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice coefficient over 0.88 and a mean surface distance less than 1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a Dice coefficient over 0.75 and a mean surface distance less than 1.2 mm were achieved using a combination of T1 and F-GATIR imaging sequences. In the substantia nigra and sub-thalamic nucleus, a Dice coefficient over 0.6 and a mean surface distance less than 1.0 mm were achieved using the optimized inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together produced significantly better segmentation results than any individual modality (p<0.05, Wilcoxon signed-rank test).
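
    For reference, the two reported metrics are easy to state in code; a hedged sketch on synthetic binary masks (the study's actual evaluation pipeline is not reproduced here):

      # Hedged sketch: Dice coefficient and symmetric mean surface distance
      # between a segmentation and a reference mask, on synthetic volumes.
      import numpy as np
      from scipy import ndimage

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def mean_surface_distance(a, b, spacing=1.0):
          surf_a = a & ~ndimage.binary_erosion(a)   # surface voxels of a
          surf_b = b & ~ndimage.binary_erosion(b)   # surface voxels of b
          d_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
          d_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
          return (d_to_b[surf_a].mean() + d_to_a[surf_b].mean()) / 2.0

      a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
      b = np.zeros_like(a);             b[9:21, 8:20, 8:20] = True
      print(dice(a, b), mean_surface_distance(a, b))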

  2. An HL7-FHIR-based Object Model for a Home-Centered Data Warehouse for Ambient Assisted Living Environments.

    Science.gov (United States)

    Schwartze, Jonas; Jansen, Lars; Schrom, Harald; Wolf, Klaus-Hendrik; Haux, Reinhold; Marschollek, Michael

    2015-01-01

    Current AAL environments focus on assisting a single person with separate technologies. There is no interoperability between sub-domains in home environments, like building energy management or housing industry services. BASIS (Building Automation by a Scalable and Intelligent System) aims to integrate all sensors and actuators into a single, efficient home bus. The first step is to create a semantically enriched data warehouse object model. We chose FHIR and built an object model mainly based on the Observation, Device and Location resources, with minor extensions needed by AAL-foreign sub-domains. FHIR turned out to be very flexible and complete for other home-related sub-domains. The object model is implemented in a separate software partition storing all structural and procedural data of BASIS.
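
    A sketch of what one warehouse object might look like: a home-bus sensor reading shaped as an HL7 FHIR Observation-style record. The field choices follow the FHIR Observation resource, but the code value, the room extension URL and the device naming are illustrative assumptions, not the BASIS model itself.

      # Hedged sketch: shape a home-bus sensor reading as a FHIR
      # Observation-like dict before loading it into the warehouse.
      import json
      from datetime import datetime, timezone

      def sensor_reading_to_observation(device_id, room, value, unit, code):
          return {
              "resourceType": "Observation",
              "status": "final",
              "code": {"coding": [{"code": code}]},    # code is illustrative
              "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
              "valueQuantity": {"value": value, "unit": unit},
              "device": {"reference": f"Device/{device_id}"},
              # location-style extension for AAL-foreign sub-domains (assumed)
              "extension": [{"url": "http://example.org/fhir/room",
                             "valueString": room}],
          }

      obs = sensor_reading_to_observation("bus-17", "kitchen", 21.5, "Cel",
                                          "temperature")
      print(json.dumps(obs, indent=2))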

  3. An improved automated procedure for informal and temporary dwellings detection and enumeration, using mathematical morphology operators on VHR satellite data

    Science.gov (United States)

    Jenerowicz, Małgorzata; Kemper, Thomas

    2016-10-01

    Every year thousands of people are displaced by conflicts or natural disasters and often gather in large camps. Knowing how many people have gathered is crucial for an efficient relief operation; however, it is often difficult to collect exact information on the total population. This paper presents an improved morphological methodology for the estimation of dwelling structures located in several Internally Displaced Persons (IDP) camps, based on Very High Resolution (VHR) multispectral satellite imagery with pixel sizes of 1 meter or less, including GeoEye-1, WorldView-2, QuickBird-2, Ikonos-2, Pléiades-A and Pléiades-B. The main topics of this paper are the enhancement of the approach through the selection of the feature extraction algorithm, and the improvement and automation of pre-processing and results verification. For the extraction of informal and temporary dwellings, high data quality has to be ensured; the pre-processing has therefore been extended to include input-data hierarchy level assignment and the selection and evaluation of the data fusion method. The feature extraction algorithm follows the procedure presented in Jenerowicz, M., Kemper, T., 2011. Optical data are analysed in a cyclic approach comprising image segmentation and geometrical, textural and spectral class modeling aiming at camp area identification. The successive steps of morphological processing have been combined into one stand-alone application for automatic dwelling detection and enumeration. Actively implemented, these approaches can provide reliable and consistent results, independent of the imaging satellite type and study-site location, providing decision support in emergency response for the humanitarian community, such as the United Nations, the European Union and non-governmental relief organizations.
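
    A toy sketch of the morphological core of such a pipeline: bright, roughly dwelling-sized blobs are enhanced with a white top-hat transform and counted after labelling. The structuring-element size and threshold are assumptions, and the real procedure adds segmentation plus geometrical, textural and spectral class modelling.

      # Hedged sketch: enhance small bright structures with a white top-hat
      # and count them; parameters are assumptions, image is synthetic.
      import numpy as np
      from scipy import ndimage

      def count_dwellings(panchromatic, selem_size=7, threshold=30.0):
          selem = np.ones((selem_size, selem_size))
          tophat = ndimage.white_tophat(panchromatic, footprint=selem)
          mask = tophat > threshold
          _, n = ndimage.label(mask)
          return n

      img = np.zeros((100, 100))
      for y, x in [(20, 20), (50, 70), (80, 40)]:   # three synthetic "tents"
          img[y:y + 4, x:x + 4] = 100.0
      print(count_dwellings(img))                    # -> 3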

  4. Automated and connected vehicle (AV/CV) test bed to improve transit, bicycle, and pedestrian safety : technical report.

    Science.gov (United States)

    2017-02-01

    Crashes involving transit vehicles, bicyclists, and pedestrians are a concern in Texas, especially in urban areas. This research explored the potential of automated and connected vehicle (AV/CV) technology to reduce or eliminate these crashes. The pr...

  5. TECHNICAL AND ENERGY PARAMETERS IMPROVEMENT OF DIESEL LOCOMOTIVES THROUGH THE INTRODUCTION OF AUTOMATED CONTROL SYSTEMS OF A DIESEL

    Directory of Open Access Journals (Sweden)

    M. I. Kapitsa

    2015-04-01

    Full Text Available Purpose. The issue of diesel traction remains relevant for the majority of industrial enterprises and Ukrainian railways, and the diesel engine continues to be the subject of extensive research and improvement. Despite the intensive electrification of the railway transport of Ukraine over the last few years, diesel traction continues to play an important role both in main-line and in industrial railway traction rolling stock. All kinds of shunting and auxiliary work fall to diesel locomotives, and they are improved and upgraded relentlessly. This paper focuses on finding opportunities to improve the technical and energy parameters of diesel engines through the development of a modern control method for the fuel equipment of the diesel engine. Methodology. The proposed method increases the power of locomotive diesel engines over the range of crankshaft rotation speeds (from idle to maximum). It is based on advancing the mixture ignition timing relative to the top dead centre of the piston position. Findings. The paper provides a brief historical background of research on the operating cycle of the internal combustion engine (ICE). The factors affecting the mixing process and its quality were analyzed. The requirements for the fuel feed system into the cylinder and the 'weak points' of the process are presented. A variant of modifying the fuel pump drive is proposed, which makes it possible to approach the regulation of the fuel feed system from the other side and to improve it. A variant of the embodiment of the complex system is presented, with specification of the mechanical features and control circuits. The algorithm of the system operation is presented together with its impact on diesel performance. Originality. The fuel-supply angle regulating system makes it possible to automate the adjustment of the fuel injection advance angle into the cylinder. Practical value. At implementation of the angle regulating system of fuel supply

  6. ON PROBLEM OF REGIONAL WAREHOUSE AND TRANSPORT INFRASTRUCTURE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    I. Yu. Miretskiy

    2017-01-01

    Full Text Available The article suggests an approach to solving the problem of warehouse and transport infrastructure optimization in a region. The task is to determine the optimal capacity and location of the supporting network of warehouses in the region, as well as the power, composition and location of motor fleets. Optimization is carried out using mathematical models of a regional warehouse network and a network of motor fleets. These models are presented as mathematical programming problems with separable functions. The process of finding the optimal solution of the problems is complicated by high dimensionality, the non-linearity of the functions, and the fact that some variables are constrained to be integer while others can take values only from a discrete set. Given the complications mentioned above, the search for an exact solution was rejected. The article suggests an approximate approach to solving the problems, employing effective computational schemes for solving multidimensional optimization problems. We use the continuous relaxation of the original problem to obtain its approximate solution: an approximately optimal solution of the continuous relaxation is taken as an approximate solution of the original problem. The suggested solution method implies linearization of the obtained continuous relaxation and use of the separable programming scheme and the branch-and-bound scheme. We describe the use of the simplex method for solving the linearized continuous relaxation of the original problem and the specifics of the branch-and-bound method implementation. The paper shows the finiteness of the algorithm and recommends how to accelerate the process of finding a solution.
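
    A toy branch-and-bound sketch in the spirit of the described scheme: binary decisions on which candidate warehouses to open to cover a regional demand, pruned with a fractional (LP-style) lower bound. The instance and the bound are simplified stand-ins for the paper's much larger separable models.

      # Hedged toy sketch: branch and bound over binary "open warehouse"
      # decisions with a fractional relaxation as the lower bound.
      import math

      cost_open = [50.0, 60.0, 45.0]   # fixed cost per candidate warehouse
      capacity = [100.0, 120.0, 80.0]  # capacity per candidate warehouse
      demand = 180.0                   # total regional demand

      def lower_bound(opened, closed):
          # relaxation: remaining sites may be opened fractionally,
          # cheapest cost-per-capacity first
          cap = sum(capacity[i] for i in opened)
          cost = sum(cost_open[i] for i in opened)
          rest = sorted((i for i in range(len(cost_open))
                         if i not in opened and i not in closed),
                        key=lambda i: cost_open[i] / capacity[i])
          for i in rest:
              if cap >= demand:
                  break
              frac = min(1.0, (demand - cap) / capacity[i])
              cap += frac * capacity[i]
              cost += frac * cost_open[i]
          return cost if cap >= demand else math.inf

      def branch_and_bound():
          best, best_set = math.inf, None
          stack = [(frozenset(), frozenset(), 0)]  # (open, closed, next site)
          while stack:
              opened, closed, i = stack.pop()
              if lower_bound(opened, closed) >= best:
                  continue                         # prune this subtree
              if i == len(cost_open):
                  if sum(capacity[j] for j in opened) >= demand:
                      best = sum(cost_open[j] for j in opened)
                      best_set = opened
                  continue
              stack.append((opened | {i}, closed, i + 1))
              stack.append((opened, closed | {i}, i + 1))
          return best, sorted(best_set)

      print(branch_and_bound())   # -> (95.0, [0, 2]) on this toy instance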

  7. Data warehouse based decision support system in nuclear power plants

    International Nuclear Information System (INIS)

    Nadinic, B.

    2004-01-01

    Safety is an important element in business decision-making processes in nuclear power plants. Information about component reliability, structures and systems, data recorded during the nuclear power plant's operation and outage periods, as well as experiences from other power plants, are located in different database systems throughout the power plant. It would be possible to create a decision support system which would collect this data, transform it into a standardized form and store it in a single location in a format more suitable for analyses and knowledge discovery. This single location would be a data warehouse. Such a data-warehouse-based decision support system could make decision-making processes more efficient by providing more information about business processes and predicting the possible consequences of different decisions. The two main functionalities in this decision support system would be an OLAP (On-Line Analytical Processing) system and a data mining system. An OLAP system would enable users to perform fast, simple and efficient multidimensional analysis of existing data and identify trends. Data mining techniques and algorithms would help discover new, previously unknown information in the data as well as hidden dependencies between various parameters. Data mining would also enable analysts to create relevant prediction models for the behaviour of different systems during operation and for inspection results during outages. The basic characteristics and theoretical foundations of such a decision support system are described and the reasons for choosing a data warehouse as the underlying structure are explained. The article analyzes the business benefits of such a system as well as potential uses of OLAP and data mining technologies. Possible implementation methodologies and problems that may arise, especially in the field of data integration, are discussed and analyzed. (author)

  8. Benefits of the implementation and use of a warehouse management system in a distribution center

    Directory of Open Access Journals (Sweden)

    Alexsander Machado

    2011-12-01

    Full Text Available The aim of this article is to describe how the deployment and use of a Warehouse Management System (WMS) can help increase productivity, reduce errors and speed up the flow of information in a distribution center. The research method was the case study. We chose a distributor of goods, located in Vale do Rio dos Sinos, RS, which sells and distributes products for business use to companies throughout Brazil. The main research technique was participant observation. In order to highlight the observed results, we collected two indicators: productivity and errors in the separation of items for orders. After four months of observation, both showed significant improvement, strengthening the hypothesis that the selection and implementation of the management system was beneficial for the company.

  9. The Veterans Affairs's Corporate Data Warehouse: Uses and Implications for Nursing Research and Practice.

    Science.gov (United States)

    Price, Lauren E; Shea, Kimberly; Gephart, Sheila

    2015-01-01

    The Department of Veterans Affairs Veterans Healthcare Administration (VHA) is supported by one of the largest integrated health care information systems in the United States. The VHA's Corporate Data Warehouse (CDW) was developed in 2006 to accommodate the massive amounts of data being generated from more than 20 years of use and to streamline the process of knowledge discovery to application. This article describes the developments in research associated with the VHA's transition into the world of Big Data analytics through CDW utilization. The majority of studies utilizing the CDW also use at least one other data source. The most commonly occurring topics are pharmacy/medications, systems issues, and weight management/obesity. Despite the potential benefit of data mining techniques to improve patient care and services, the CDW and alternative analytical approaches are underutilized by researchers and clinicians.

  10. Event-Entity-Relationship Modeling in Data Warehouse Environments

    DEFF Research Database (Denmark)

    Bækgaard, Lars

We use the event-entity-relationship model (EVER) to illustrate the use of entity-based modeling languages for conceptual schema design in data warehouse environments. EVER is a general-purpose information modeling language that supports the specification of both general schema structures and multi-dimensional schemes that are customized to serve specific information needs. EVER is based on an event concept that is very well suited for multi-dimensional modeling because measurement data often represent events in multi-dimensional databases...

  11. A Framework for a Clinical Reasoning Knowledge Warehouse

    DEFF Research Database (Denmark)

    Vilstrup Pedersen, Klaus; Boye, Niels

    2004-01-01

    In many areas of the medical domain, the decision process i.e. reasoning, involving health care professionals is distributed, cooperative and complex. This paper presents a framework for a Clinical Reasoning Knowledge Warehouse that combines theories and models from Artificial Intelligence...... is stored and made accessible when relevant to the reasoning context and the specific patient case. Furthermore, the information structure supports the creation of new generalized knowledge using data mining tools. The patient case is divided into an observation level and an opinion level. At the opinion...

  12. A data warehouse to support GSA data

    OpenAIRE

    Arribas López, Iván

    2008-01-01

    This document describes the processes of extracting, transforming and loading GSA logs into a data warehouse. GSA (Google Search Appliance) is a Google application that uses Google's query manager to search the indexed information of a given website. This application consequently stores a log of user queries to that website in a modified standard CLF format. Analyzing this log would allow the site's owner to know the infor...
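
    As an illustration of the extraction step, a sketch that parses plain Common Log Format lines into load-ready records; GSA's "modified standard CLF" adds fields, so the regular expression below would need extending for the real logs.

      # Hedged sketch: parse plain CLF lines into records for loading.
      # GSA's modified CLF adds fields; this covers the common core only.
      import re
      from datetime import datetime

      CLF = re.compile(
          r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) \S+" (\d{3}) (\S+)')

      def parse_line(line):
          m = CLF.match(line)
          if not m:
              return None
          host, ts, method, path, status, size = m.groups()
          return {
              "host": host,
              "time": datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z"),
              "method": method,
              "query_path": path,    # carries the user query for a search
              "status": int(status),
              "bytes": 0 if size == "-" else int(size),
          }

      line = ('10.0.0.1 - - [12/May/2008:10:15:32 +0200] '
              '"GET /search?q=data+warehouse HTTP/1.1" 200 5120')
      print(parse_line(line))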

  13. Population dynamics of stored maize insect pests in warehouses in two districts of Ghana

    Science.gov (United States)

    Understanding what insect species are present and their temporal and spatial patterns of distribution is important for developing a successful integrated pest management strategy for food storage in warehouses. Maize in many countries in Africa is stored in bags in warehouses, but little monitoring ...

  14. 7 CFR 1427.16 - Movement and protection of warehouse-stored cotton.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Movement and protection of warehouse-stored cotton. 1427.16 Section 1427.16 Agriculture Regulations of the Department of Agriculture (Continued) COMMODITY... Cotton Loan and Loan Deficiency Payments § 1427.16 Movement and protection of warehouse-stored cotton. (a...

  15. Protocol for a national blood transfusion data warehouse from donor to recipient

    NARCIS (Netherlands)

    van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W

    2016-01-01

    INTRODUCTION: Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion

  16. 19 CFR 19.14 - Materials for use in manufacturing warehouse.

    Science.gov (United States)

    2010-04-01

    ... warehouse is located under an immediate transportation without appraisement entry or warehouse withdrawal for transportation, whichever is applicable. (b) Bond required. Before the transfer of the merchandise... the manufacture of articles as authorized by law. Port Director (d) Domestic spirits and wines. For...

  17. 27 CFR 28.244a - Shipment to a customs bonded warehouse.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Shipment to a customs... Export Consignment § 28.244a Shipment to a customs bonded warehouse. Distilled spirits and wine withdrawn for shipment to a customs bonded warehouse shall be consigned in care of the customs officer in charge...

  18. 27 CFR 28.27 - Entry of wine into customs bonded warehouses.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Entry of wine into customs... TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS EXPORTATION OF ALCOHOL Miscellaneous Provisions Customs Bonded Warehouses § 28.27 Entry of wine into customs bonded warehouses. Upon filing of the application or...

  19. The design and application of data warehouse during modern enterprises environment

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Wang, Chunying

    2006-04-01

    The interest in analyzing data has grown tremendously in recent years. To analyze data, a multitude of technologies is needed, namely technologies from the fields of data warehousing, data mining and On-line Analytical Processing (OLAP). This paper proposes a system structure model of the data warehouse for the modern enterprise environment, according to the information demands of enterprises and the actual demands of users; it also analyses the benefit of this kind of model in practical application, and describes the construction of the data warehouse model. At the same time, it proposes an overall design plan for the data warehouses of modern enterprises. The data warehouse built in practical application offers high query performance, efficient data handling, and independence of logical and physical data. In addition, a data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision support, OLAP queries or data mining. One of the most important decisions in designing a data warehouse is the selection of the right views to be materialized. In this paper, we also design algorithms for selecting a set of views to be materialized in a data warehouse. First, we give the algorithms for selecting materialized views. Then we use experiments to demonstrate the power of our approach. The results show that the proposed algorithm delivers an optimal solution. Finally, we discuss the advantages and shortcomings of our approach and future work.
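
    The abstract does not spell out its selection algorithm, but greedy benefit-driven selection over the cube lattice, in the style of Harinarayan, Rajaraman and Ullman, is the classic baseline; a hedged sketch on an invented three-dimension lattice:

      # Hedged sketch: greedy materialized-view selection over a small
      # data-cube lattice. Sizes (row counts) and the "answers" relation
      # are invented; the paper's own algorithm may differ.
      sizes = {"psc": 6e6, "ps": 6e6, "pc": 6e6, "sc": 6e6,
               "p": 2e5, "s": 1e4, "c": 1e5, "all": 1.0}
      answers = {  # answers[v] = views whose queries v can compute
          "psc": {"psc", "ps", "pc", "sc", "p", "s", "c", "all"},
          "ps": {"ps", "p", "s", "all"}, "pc": {"pc", "p", "c", "all"},
          "sc": {"sc", "s", "c", "all"},
          "p": {"p", "all"}, "s": {"s", "all"}, "c": {"c", "all"},
          "all": {"all"},
      }

      def query_cost(q, chosen):
          return min(sizes[v] for v in chosen if q in answers[v])

      def greedy_select(k):
          chosen = {"psc"}                 # the base cuboid is always kept
          for _ in range(k):
              def benefit(v):
                  return sum(max(query_cost(q, chosen) - sizes[v], 0.0)
                             for q in answers[v])
              chosen.add(max((v for v in sizes if v not in chosen),
                             key=benefit))
          return chosen

      print(greedy_select(2))   # -> {'psc', 's', 'c'} on these sizes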

  20. 78 FR 65300 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-10-31

    ... (NOA) for General Purpose Warehouse and Information Technology Center Construction (GPW/IT)--Tracy Site... proposed action to construct a General Purpose Warehouse and Information Technology Center at Defense..., Suite 02G09, Alexandria, VA 22350- 3100. FOR FURTHER INFORMATION CONTACT: Ann Engelberger at (703) 767...

  1. 27 CFR 24.126 - Change in proprietorship involving a bonded wine warehouse.

    Science.gov (United States)

    2010-04-01

    ... involving a bonded wine warehouse. 24.126 Section 24.126 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS WINE Establishment and Operations Changes Subsequent to Original Establishment § 24.126 Change in proprietorship involving a bonded wine warehouse...

  2. THE ASSOCIATION BETWEEN OFFICE AUTOMATION AND IMPROVEMENT OF DECISION-MAKING AND PRODUCTIVITY OF EMPLOYEES OF YOUTH AND SPORT OFFICES OF WEST AZERBAIJAN PROVINCE, IRAN

    OpenAIRE

    Mostafa Mostafa pour; Ali Amini; Vadoud Shoshtary; Auoub Izadi; Yousef Esmayilian; Fatemeh Salami; Bager Khakpour

    2017-01-01

    The availability of precise, relevant, timely and new information increases the speed and precision of decision making. The objective of the present study is to examine the association between office automation, improvement of decision-making and the productivity of employees of the Youth and Sport offices of West Azerbaijan Province. The statistical population of the present study consists of 130 employees of the Youth and Sport offices of West Azerbaijan Province, selected through simple random sampling. The st...

  3. Development of a clinical data warehouse from an intensive care clinical information system.

    Science.gov (United States)

    de Mul, Marleen; Alons, Peter; van der Velde, Peter; Konings, Ilse; Bakker, Jan; Hazelzet, Jan

    2012-01-01

    There are relatively few institutions that have developed clinical data warehouses containing patient data from the point of care. Because of the various care practices, data types and definitions, and the perceived incompleteness of clinical information systems, the development of a clinical data warehouse is a challenge. In order to deal with managerial and clinical information needs, as well as the educational and research aims that are important in the setting of a university hospital, Erasmus Medical Center Rotterdam, The Netherlands, developed a data warehouse incrementally. In this paper we report on the in-house development of an integral part of the data warehouse specifically for the intensive care units (ICU-DWH). It was modeled using the Atos Origin Metadata Frame method. The paper describes the methodology, the development process and the content of the ICU-DWH, and discusses the need for (clinical) data warehouses in intensive care. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Efficient data management tools for the heterogeneous big data warehouse

    Science.gov (United States)

    Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.

    2016-09-01

    The traditional RDBMS has served normalized data structures well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields like social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have been raised recently concerning the scalability of data-warehouse-like workloads against the transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation covers the performance, throughput and scalability of the above technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as a description of the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the organization of the NoSQL database, and how it could be useful for further data analytics.
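
    The row-to-document reshaping at the heart of such a migration can be sketched without any database driver; the table and field names below are invented, and a real migration would stream from the RDBMS and bulk-insert into, for example, MongoDB.

      # Hedged sketch: reshape parent/child relational rows into one nested
      # document per parent, the typical NoSQL target layout.
      rows_jobs = [(1, "prodsys", "done"), (2, "prodsys", "running")]
      rows_files = [(10, 1, "a.root"), (11, 1, "b.root"), (12, 2, "c.root")]

      def to_documents(jobs, files):
          docs = {jid: {"_id": jid, "owner": owner, "status": status,
                        "files": []}
                  for jid, owner, status in jobs}
          for fid, jid, name in files:
              docs[jid]["files"].append({"file_id": fid, "name": name})
          return list(docs.values())

      for doc in to_documents(rows_jobs, rows_files):
          print(doc)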

  5. Characteristics desired in clinical data warehouse for biomedical research.

    Science.gov (United States)

    Shin, Soo-Yong; Kim, Woo Sung; Lee, Jae-Ho

    2014-04-01

    Due to the unique characteristics of clinical data, clinical data warehouses (CDWs) have not been successful so far; specifically, the use of CDWs for biomedical research has been relatively unsuccessful. The characteristics necessary for the successful implementation and operation of a CDW for biomedical research have not been clearly defined yet. Three examples of CDWs were reviewed: a multipurpose CDW in a hospital, a CDW for independent multi-institutional research, and a CDW for research use within an institution. After reviewing the three CDW examples, we propose some key characteristics needed in a CDW for biomedical research. A CDW for research should include an honest broker system and an Institutional Review Board approval interface to comply with governmental regulations. It should also include a simple query interface, an anonymized data review tool, and a data extraction tool. It should, moreover, be a biomedical research platform for data repository use as well as data analysis. The proposed characteristics may have limited transfer value to organizations in other countries. However, these analysis results are still valid in Korea, and we have developed a clinical research data warehouse based on these desiderata.

  6. Geminivirus data warehouse: a database enriched with machine learning approaches.

    Science.gov (United States)

    Silva, Jose Cleydson F; Carvalho, Thales F M; Basso, Marcos F; Deguchi, Michihito; Pereira, Welison A; Sobrinho, Roberto R; Vidigal, Pedro M P; Brustolini, Otávio J B; Silva, Fabyano F; Dal-Bianco, Maximiller; Fontes, Renildes L F; Santos, Anésia A; Zerbini, Francisco Murilo; Cerqueira, Fabio R; Fontes, Elizabeth P B

    2017-05-05

    The Geminiviridae family encompasses a group of single-stranded DNA viruses with twinned and quasi-isometric virions, which infect a wide range of dicotyledonous and monocotyledonous plants and are responsible for significant economic losses worldwide. Geminiviruses are divided into nine genera, according to their insect vector, host range, genome organization, and phylogeny reconstruction. Using rolling-circle amplification approaches along with high-throughput sequencing technologies, thousands of full-length geminivirus and satellite genome sequences were amplified and have become available in public databases. As a consequence, many important challenges have emerged, namely, how to classify, store, and analyze massive datasets as well as how to extract information or new knowledge. Data mining approaches, mainly supported by machine learning (ML) techniques, are a natural means for high-throughput data analysis in the context of genomics, transcriptomics, proteomics, and metabolomics. Here, we describe the development of a data warehouse enriched with ML approaches, designated geminivirus.org. We implemented search modules, bioinformatics tools, and ML methods to retrieve high precision information, demarcate species, and create classifiers for genera and open reading frames (ORFs) of geminivirus genomes. The use of data mining techniques such as ETL (Extract, Transform, Load) to feed our database, as well as algorithms based on machine learning for knowledge extraction, allowed us to obtain a database with quality data and suitable tools for bioinformatics analysis. The Geminivirus Data Warehouse (geminivirus.org) offers a simple and user-friendly environment for information retrieval and knowledge discovery related to geminiviruses.

  7. MitoMiner: a data warehouse for mitochondrial proteomics data.

    Science.gov (United States)

    Smith, Anthony C; Blackshaw, James A; Robinson, Alan J

    2012-01-01

    MitoMiner (http://mitominer.mrc-mbu.cam.ac.uk/) is a data warehouse for the storage and analysis of mitochondrial proteomics data gathered from publications of mass spectrometry and green fluorescent protein tagging studies. In MitoMiner, these data are integrated with data from UniProt, Gene Ontology, Online Mendelian Inheritance in Man, HomoloGene, Kyoto Encyclopaedia of Genes and Genomes and PubMed. The latest release of MitoMiner stores proteomics data sets from 46 studies covering 11 different species from eumetazoa, viridiplantae, fungi and protista. MitoMiner is implemented by using the open source InterMine data warehouse system, which provides a user interface allowing users to upload data for analysis, personal accounts to store queries and results and enables queries of any data in the data model. MitoMiner also provides lists of proteins for use in analyses, including the new MitoMiner mitochondrial proteome reference sets that specify proteins with substantial experimental evidence for mitochondrial localization. As further mitochondrial proteomics data sets from normal and diseased tissue are published, MitoMiner can be used to characterize the variability of the mitochondrial proteome between tissues and investigate how changes in the proteome may contribute to mitochondrial dysfunction and mitochondrial-associated diseases such as cancer, neurodegenerative diseases, obesity, diabetes, heart failure and the ageing process.

  8. Data Warehouse on the Web for Accelerator Fabrication And Maintenance

    International Nuclear Information System (INIS)

    Chan, A.; Crane, G.; Macgregor, I.; Meyer, S.

    2011-01-01

    A data warehouse grew out of the need for a view of accelerator information from a lab-wide or project-wide standpoint (often requiring off-site data access for the multi-lab PEP-II collaborators). A World Wide Web interface is used to link the legacy database systems of the various labs and departments related to the PEP-II accelerator. In this paper, we describe how links are made via the 'Formal Device Name' field(s) in the disparate databases. We also describe the functionality of a data warehouse in an accelerator environment. One can pick devices from the PEP-II Component List and find the actual components filling the functional slots, any calibration measurements, fabrication history, associated cables and modules, and operational maintenance records for the components. Information on inventory, drawings, publications, and purchasing history is also part of the PEP-II Database. A strategy of relying on a small team, and of linking existing databases rather than rebuilding systems, is outlined.

  9. Analysing the effectiveness of vendor-managed inventory in a single-warehouse, multiple-retailer system

    Science.gov (United States)

    Rahim, Mohd Kamarul Irwan Abdul; Aghezzaf, El-Houssaine; Limère, Veronique; Raa, Birger

    2016-06-01

    This paper considers a two-stage supply chain, consisting of a single warehouse and multiple retailers facing deterministic demands, under a vendor-managed inventory (VMI) policy. It presents a two-phase optimisation approach for coordinating the shipments in this VMI system. The first phase uses direct shipping from the supplier to all retailers to minimise the overall inventory costs. In the second phase, the retailers are clustered using a construction heuristic in order to optimise the transportation costs while satisfying some additional restrictions. A comparative analysis shows and discusses the performance improvement achieved by coordinated VMI replenishments over the system with direct shipping only.
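
    To make the second phase concrete, the sketch below groups retailers into delivery clusters with a simple greedy construction heuristic. The coordinates, the cluster-size restriction and the greedy rule are illustrative assumptions; the paper's actual heuristic and restrictions are not reproduced here.

```python
# Minimal sketch of the paper's second phase: grouping retailers into
# delivery clusters with a simple construction heuristic. Coordinates,
# the size limit and the greedy rule are illustrative assumptions only.
import math

retailers = {"R1": (0, 1), "R2": (1, 1), "R3": (8, 8), "R4": (9, 7), "R5": (0, 2)}
max_cluster_size = 2  # assumed vehicle/route restriction

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

clusters, unassigned = [], dict(retailers)
while unassigned:
    seed, seed_xy = unassigned.popitem()
    cluster = [seed]
    # Greedily add the nearest remaining retailer until the cluster is full.
    while unassigned and len(cluster) < max_cluster_size:
        nearest = min(unassigned, key=lambda r: dist(seed_xy, unassigned[r]))
        cluster.append(nearest)
        del unassigned[nearest]
    clusters.append(cluster)

print(clusters)  # -> [['R5', 'R1'], ['R4', 'R3'], ['R2']]
```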

  10. Future Trends in Process Automation

    OpenAIRE

    Jämsä-Jounela, Sirkka-Liisa

    2007-01-01

    The importance of automation in the process industries has increased dramatically in recent years. In the highly industrialized countries, process automation serves to enhance product quality, master the whole range of products, improve process safety and plant availability, efficiently utilize resources and lower emissions. In the rapidly developing countries, mass production is the main motivation for applying process automation. The greatest demand for process automation is in the chemical...

  11. Adaptive Automation Design and Implementation

    Science.gov (United States)

    2015-09-17

    with an automated system to a real-world adaptive automation system implementation. There have been plenty of adaptive automation ... of systems without increasing manpower requirements by allocating routine tasks to automated aids, improving safety through the use of automated ... between intermediate levels of automation, explicitly defining which human task a given level automates. Each model aids the creation and classification

  12. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only be turned to profit sufficiently and correctly, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio to a definite efficiency. (orig.) [de]

  13. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio to a definite efficiency. (orig.) [de]

  14. Automated intelligent rotor tine cultivation and punch planting to improve the selectivity of mechanical intra-row weed control

    DEFF Research Database (Denmark)

    Rasmussen, Jesper; Griepentrog, Hans W.; Nielsen, Jon

    2012-01-01

    in sugar beet and carrot crops showed no synergistic effects between plant establishment procedures and selectivity of post-emergence weed harrowing. Even if punch planting and automated intelligent rotor tine cultivation were not combined, the results indicated that there was no reason to believe...... that mainly work through soil burial....

  15. Information Centre and 'Cockpit' of a modern electricity supply company: the data warehouse; Informationszentrale und 'Cockpit' eines modernen EVU: das Data Warehouse

    Energy Technology Data Exchange (ETDEWEB)

    Kuebert, K. [Unterfraenkische Ueberlandzentrale (UUEZ), Luelsfeld (Germany); Gimenez, O. [Franken-Data GmbH, Erlangen (Germany)

    1999-06-14

    Electricity supply companies have a big advantage: they continuously measure consumer behaviour, thanks to network control systems, ripple control and, more recently, remote meter reading. However, they do not combine these data consistently enough to draw valuable conclusions for sales and marketing. One of the few exceptions is the Unterfraenkische Ueberlandzentrale (UUeZ; Lower Franconian Electricity Supply Company). At UUeZ, the readings from each individual load-profile meter installed at the plants of customers with special contracts are fed together into a 'cockpit'. With the help of a data warehouse solution from FrankenData GmbH, an information system was implemented that makes long-term trends and optimisation potential immediately visible. The benefits for electricity suppliers: a constant load factor and reduction of peaks, improved customer contact, and more precise information during negotiations and therefore more advantageous contracts. Customers also benefit from the new system: they receive more economical tariffs and detailed reports about their consumption behaviour, which they can then optimise by taking appropriate measures. (orig.)

  16. Perancangan Data Warehouse Nilai Mahasiswa Dengan Kimball Nine-Step Methodology [Design of a Student Grades Data Warehouse Using the Kimball Nine-Step Methodology]

    Directory of Open Access Journals (Sweden)

    Ganda Wijaya

    2017-04-01

    Student grades have many components that can be analyzed to support decision making. On this basis, the author conducted a study of student grades. The study was conducted on a database in the Bureau of Academic and Student Affairs Administration of Bina Sarana Informatika (BAAK BSI). The focus of this research is: how to model a data warehouse that can meet the management needs for student grade data in support of evaluation, planning and decision making? A data warehouse of student grades is needed in order to obtain information and reports and to perform multidimensional analysis, which in turn can assist management in making policy. Development of the system was done using the System Development Life Cycle (SDLC) with a Waterfall approach, while the data warehouse was designed using Kimball's nine-step methodology. The results take the form of a star schema and a student-grades data warehouse. The data warehouse can provide fast, accurate and continuous summary information to assist management in setting policies for the future. In general, this research also serves as an additional reference for building a data warehouse using Kimball's nine-step methodology. Keywords: Data Warehouse, Kimball Nine-Step Methodology.
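
    As a minimal sketch of the star schema such a design produces, the example below builds a toy student-grades fact table with three dimensions using Python's built-in sqlite3 module. All table and column names are hypothetical, not the schema from the paper.

```python
# Illustrative star schema for student grades, loosely following the
# Kimball-style design described above. All names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_student  (student_id INTEGER PRIMARY KEY, name TEXT, cohort TEXT);
CREATE TABLE dim_course   (course_id INTEGER PRIMARY KEY, title TEXT, credits INTEGER);
CREATE TABLE dim_semester (semester_id INTEGER PRIMARY KEY, year INTEGER, term TEXT);
CREATE TABLE fact_grade (
    student_id  INTEGER REFERENCES dim_student(student_id),
    course_id   INTEGER REFERENCES dim_course(course_id),
    semester_id INTEGER REFERENCES dim_semester(semester_id),
    grade_point REAL
);
""")
con.execute("INSERT INTO dim_student VALUES (1, 'Ani', '2016')")
con.execute("INSERT INTO dim_course VALUES (10, 'Databases', 3)")
con.execute("INSERT INTO dim_semester VALUES (100, 2017, 'odd')")
con.execute("INSERT INTO fact_grade VALUES (1, 10, 100, 3.7)")

# A typical multidimensional query: average grade per cohort and term.
for row in con.execute("""
    SELECT s.cohort, m.term, AVG(f.grade_point)
    FROM fact_grade f
    JOIN dim_student s USING (student_id)
    JOIN dim_semester m USING (semester_id)
    GROUP BY s.cohort, m.term"""):
    print(row)
```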

  17. PEMAHAMAN TEORI DATA WAREHOUSE BAGI MAHASISWA TAHUN AWAL JENJANG STRATA SATU BIDANG ILMU KOMPUTER [Understanding Data Warehouse Theory for Early-Year Undergraduate Students in Computer Science]

    Directory of Open Access Journals (Sweden)

    Harco Leslie Hendric Spits Warnars

    2015-01-01

    As computer scientists, computer science students should have an understanding of database theory as a foundation of data management. Databases are needed in virtually every real-life computing application, such as information systems, information technology, the internet, games, artificial intelligence and robotics. Inevitably, proper data handling and management will produce excellent technology implementations. Data warehousing, one of the specialization subjects offered in the final semester of a computer science study program, poses a challenge for computer science students. A survey conducted on 18 early-year students of the computer science study program at Surya University suggested the hypothesis that students who had heard of data warehouses would be interested in learning the subject, whereas students who had never heard of data warehouses would not. It is therefore important that lecturers understand how to deliver the data warehouse subject material, so that students can understand data warehouses well.

  18. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into detail, I present a real-time, software- and hardware-oriented house automation research project that was designed and implemented, capable of automating a house's electricity and providing a security system that detects unexpected behavior.

  19. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid-phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are further major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with a shorter turnaround time for an ever-increasing workload. This article discusses the various issues involved in the process.

  20. Architectural design of a data warehouse to support operational and analytical queries across disparate clinical databases.

    Science.gov (United States)

    Chelico, John D; Wilcox, Adam; Wajngurt, David

    2007-10-11

    As the clinical data warehouse of the New York Presbyterian Hospital has evolved, innovative methods of integrating new data sources and providing more effective and efficient data reporting and analysis need to be explored. We designed and implemented a new clinical data warehouse architecture to handle the integration of disparate clinical databases in the institution. By examining the way downstream systems are populated and streamlining the way data is stored, we create a virtual clinical data warehouse that is adaptable to the future needs of the organization.

  1. A Simulation Modeling Approach Method Focused on the Refrigerated Warehouses Using Design of Experiment

    Science.gov (United States)

    Cho, G. S.

    2017-09-01

    For performance optimization of refrigerated warehouses, design parameters are selected from physical parameters, such as the number of equipment units and aisles and the speeds of forklifts, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach that aims at simulation optimization, so as to meet required design specifications using Design of Experiments (DOE), and analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
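
    A full-factorial Design of Experiments over warehouse design parameters can be sketched as below; the factor names, levels and the toy response function standing in for the simulation model are assumptions for illustration only.

```python
# Sketch of a full-factorial Design of Experiments over warehouse design
# parameters. The response function is a toy stand-in for a simulation run.
from itertools import product

factors = {
    "num_forklifts":  [2, 4, 6],
    "num_aisles":     [4, 8],
    "forklift_speed": [1.0, 1.5],   # m/s
}

def simulated_throughput(num_forklifts, num_aisles, forklift_speed):
    # Placeholder for a discrete-event simulation of the warehouse.
    return num_forklifts * forklift_speed * 100 / (1 + num_aisles * 0.05)

names = list(factors)
runs = [dict(zip(names, levels)) for levels in product(*factors.values())]
best = max(runs, key=lambda r: simulated_throughput(**r))
print(len(runs), "runs; best design:", best)
```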

  2. Development of National Health Data Warehouse for Data Mining

    Directory of Open Access Journals (Sweden)

    Shahidul Islam Khan

    2015-07-01

    Health informatics is currently one of the top focuses of computer science researchers. The availability of timely and accurate data is essential for medical decision making. Health care organizations face a common problem with the large amount of data they hold in numerous systems. Researchers, health care providers and patients will not be able to utilize the knowledge stored in different repositories unless the information from these disparate sources is amalgamated. This problem can be solved by data warehousing. Data warehousing techniques share a common set of tasks, including requirements analysis, data design, architectural design, implementation and deployment. Developing a health data warehouse is complex and time-consuming, but it is also essential to delivering quality health services. This paper depicts the prospects and complexities of health data warehousing and mining, and illustrates a data warehousing model suitable for integrating data from different health care sources to discover effective knowledge.
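
    The extract-transform-load pattern at the core of such a warehouse can be sketched as follows; the two source formats and the common target shape are invented for illustration.

```python
# Schematic ETL step of the kind a health data warehouse needs: extract
# records from disparate sources, transform them to a common shape, and
# load them into one repository. The source formats here are invented.
source_a = [{"pid": "A-1", "sex": "F", "dob": "1985-02-11"}]
source_b = [{"patient_no": 7, "gender": "female", "birth_year": 1990}]

def transform_a(rec):
    return {"patient_id": rec["pid"], "sex": rec["sex"],
            "birth_year": int(rec["dob"][:4]), "source": "A"}

def transform_b(rec):
    return {"patient_id": f"B-{rec['patient_no']}",
            "sex": rec["gender"][0].upper(),
            "birth_year": rec["birth_year"], "source": "B"}

warehouse = ([transform_a(r) for r in source_a]
             + [transform_b(r) for r in source_b])
print(warehouse)
```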

  3. MouseMine: a new data warehouse for MGI.

    Science.gov (United States)

    Motenko, H; Neuhauser, S B; O'Keefe, M; Richardson, J E

    2015-08-01

    MouseMine (www.mousemine.org) is a new data warehouse for accessing mouse data from Mouse Genome Informatics (MGI). Based on the InterMine software framework, MouseMine supports powerful query, reporting, and analysis capabilities, the ability to save and combine results from different queries, easy integration into larger workflows, and a comprehensive Web Services layer. Through MouseMine, users can access a significant portion of MGI data in new and useful ways. Importantly, MouseMine is also a member of a growing community of online data resources based on InterMine, including those established by other model organism databases. Adopting common interfaces and collaborating on data representation standards are critical to fostering cross-species data analysis. This paper provides a general introduction to MouseMine, presents examples of its use, and discusses the potential for further integration into the MGI interface.
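
    The Web Services layer mentioned above can be reached from Python via the InterMine client library (assuming the third-party intermine package and the service URL below); the gene and fields queried are merely illustrative.

```python
# Sketch of querying MouseMine through its InterMine web-services layer,
# assuming the intermine Python client (pip install intermine). The gene
# symbol and view fields are illustrative choices.
from intermine.webservice import Service

service = Service("http://www.mousemine.org/mousemine/service")
query = service.new_query("Gene")
query.add_view("symbol", "name", "primaryIdentifier")
query.add_constraint("symbol", "=", "Pax6")

for row in query.rows():
    print(row["symbol"], row["name"], row["primaryIdentifier"])
```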

  4. Flow shop scheduling algorithm to optimize warehouse activities

    Directory of Open Access Journals (Sweden)

    P. Centobelli

    2016-01-01

    Successful flow-shop scheduling enables a more rapid and efficient order fulfilment process in warehouse activities. Indeed, the manner and speed of order processing, and in particular the materials-handling operations between the upper stocking area and the lower forward-picking area, must be optimized. These two activities, drops and pickings, have a considerable impact on important performance parameters for supply chain wholesaler companies. In this paper, a new flow-shop scheduling algorithm is formulated to process a greater number of orders, replacing the FIFO logic for the drop activities of a wholesaler company on a daily basis. System Dynamics modelling and simulation were used to simulate the actual scenario and the output solutions. Finally, a Student's t-test validates the modelled algorithm, confirming that it can be used for all wholesalers whose operations are based on drop and picking activities.
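
    The paper's own algorithm is not reproduced here, but a classical baseline for a two-stage flow shop (stage 1 = drops, stage 2 = pickings) is Johnson's rule, sketched below on invented processing times; it illustrates why replacing FIFO can shorten the makespan.

```python
# Johnson's rule: the classical makespan-minimising order for a two-stage
# flow shop. Processing times below are invented for illustration.
def johnson_order(jobs):
    """jobs: {name: (stage1_time, stage2_time)} -> processing order."""
    front, back = [], []
    for name, (t1, t2) in sorted(jobs.items(), key=lambda kv: min(kv[1])):
        (front if t1 <= t2 else back).append(name)
    return front + back[::-1]

def makespan(order, jobs):
    end1 = end2 = 0
    for name in order:
        t1, t2 = jobs[name]
        end1 += t1                      # stage 1 finishes here
        end2 = max(end2, end1) + t2     # stage 2 waits for stage 1
    return end2

orders = {"O1": (3, 6), "O2": (5, 2), "O3": (1, 2), "O4": (6, 6)}
fifo = list(orders)
best = johnson_order(orders)
print("FIFO makespan:   ", makespan(fifo, orders))   # 21
print("Johnson makespan:", makespan(best, orders), best)  # 18
```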

  5. Evolutionary Multiobjective Query Workload Optimization of Cloud Data Warehouses

    Science.gov (United States)

    Dokeroglu, Tansel; Sert, Seyyit Alper; Cinar, Muhammet Serkan

    2014-01-01

    With the advent of Cloud databases, query optimizers need to find Pareto-optimal solutions in terms of response time and monetary cost. Our novel approach minimizes both objectives by deploying alternative virtual resources and query plans, making use of the virtual resource elasticity of the Cloud. We propose an exact multiobjective branch-and-bound algorithm and a robust multiobjective genetic algorithm for the optimization of distributed data warehouse query workloads on the Cloud. In order to investigate the effectiveness of our approach, we incorporate the devised algorithms into a prototype system. Finally, through several experiments conducted with different workloads and virtual resource configurations, we report notable findings on alternative deployments, as well as the advantages and disadvantages of the multiobjective algorithms we propose. PMID:24892048
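
    At the heart of such multiobjective optimisation is the Pareto dominance test over (response time, monetary cost) pairs, sketched below on invented deployment figures.

```python
# Minimal sketch of the Pareto-optimality test central to this kind of
# multiobjective optimisation: keep the (response time, cost) deployments
# that no other deployment dominates. Numbers are invented.
deployments = {
    "small-vm":  (120.0, 0.10),   # (response time s, cost $/query)
    "medium-vm": (70.0, 0.22),
    "large-vm":  (45.0, 0.55),
    "wasteful":  (80.0, 0.60),    # dominated by "medium-vm" and "large-vm"
}

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

pareto = [name for name, obj in deployments.items()
          if not any(dominates(other, obj)
                     for other in deployments.values())]
print(pareto)  # -> ['small-vm', 'medium-vm', 'large-vm']
```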

  6. Presence and survival of spores of Bacillus thuringiensis varieties in a grain warehouse

    Directory of Open Access Journals (Sweden)

    Sánchez-Yáñez Juan Manuel

    2016-08-01

    The genus Bacillus thuringiensis (Bt) synthesizes spores and crystals that are toxic to agricultural insect pests. Bt is cosmopolitan, so it is possible to isolate subspecies or varieties from warehouses. The aims of this study were: (i) to isolate Bt varieties from grain in a warehouse; (ii) to evaluate Bt toxicity against Spodoptera frugiperda and Sitophilus zeamais; and (iii) to analyse the persistence of Bt spores on Zea mays grains in the warehouse, compared with the same Bt on grains exposed to solar radiation. The results showed that more than one variety of Bt spores could be recovered from the warehouse. Depending on the isolate, Bt1 or Bt2 was toxic to S. frugiperda or S. zeamais. One of these Bt isolates belongs to var. morrisoni. Spores on Z. mays grains in the warehouse survived longer, while the same spores exposed to biocidal solar radiation died.

  7. Urban freight distribution: council warehouses & freight by rail

    Directory of Open Access Journals (Sweden)

    Aleksander SŁADKOWSKI

    2014-10-01

    Rail is one of the most underused modes of freight transportation in the European Union; the majority of freight is moved by trucks and HGVs. With new regulations and socio-environmental concerns, urban logistics faces a new challenge, which can be tackled using innovative transport mechanisms and streamlined operations. This article sheds light on a system that integrates freight distribution via metro lines in the closest vicinity of the customer, the use of council warehouses, and further innovative transport mechanisms for final delivery. This system uses existing infrastructure effectively without impacting its surroundings and triggers a reduction in polluting carriers. It offers the option of immediate implementation, which would enable the EU to compete in the global freight distribution market.

  8. Towards a Modernization Process for Secure Data Warehouses

    Science.gov (United States)

    Blanco, Carlos; Pérez-Castillo, Ricardo; Hernández, Arnulfo; Fernández-Medina, Eduardo; Trujillo, Juan

    Data Warehouses (DW) manage crucial enterprise information used for the decision-making process, which has to be protected from unauthorized access. However, security constraints are not properly integrated into the complete DW development process, being traditionally considered only in the final stages. Furthermore, legacy systems need a reverse engineering process in order to accomplish re-documentation for detecting new security requirements, as well as recovery of the system's design to enable migration and reuse. Thus, we have proposed a model-driven architecture (MDA) for secure DWs which takes security issues into account from the early stages of development and provides automatic transformations between models. This paper completes this architecture by providing an architecture-driven modernization (ADM) process focused on obtaining conceptual security models from legacy OLAP systems.

  9. Data warehouse for detection of occupational diseases in OHS data.

    Science.gov (United States)

    Godderis, L; Mylle, G; Coene, M; Verbeek, C; Viaene, B; Bulterys, S; Schouteden, M

    2015-11-01

    Occupational health and safety (OHS) services collect a wide range of data during health surveillance. The aim was to build a 'data warehouse' to make OHS data available for research and to investigate sector-specific health problems. Medical data were extracted, transformed and loaded into the data warehouse. After validation, data on lifestyle, categorized medication use, ICD-9-CM encoded sickness absences and health complaints, collected between 2010 and 2014, were analysed with logistic regression to compare proportions between employment sectors, taking into account age, gender, body mass index (BMI) and year of examination. The data set comprised 585,000 employees. Average age and employment seniority were 39 ± 12 and 8 ± 9 years, respectively. BMI was 26 ± 5 kg/m². Health complaints, medication use and sickness absence increased significantly with BMI and age. The proportion of employees with health problems was highest in health care (64%), government (61%) and manufacturing (60%) and lowest in the service sector. In all sectors, 10% of workers reported locomotor health problems, apart from the service sector (8%), with similar results for medication consumption. Neuropsychological drugs were used most frequently by health care workers (8%). The transport sector contained the highest proportion of cardiological medication users (12%). Finally, 30-59% of employees reported at least one sickness absence episode. Sickness absence due to locomotor issues was highest in manufacturing (11%) and health care (10%), followed by government (9%) and construction (9%). Significant differences in indices of workers' health were observed between sectors. This information is now being used in the implementation of a sector-oriented health surveillance programme.
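
    The adjusted sector comparison described above can be sketched with a logistic regression, assuming the statsmodels package; the data below are randomly generated stand-ins, not the OHS data set.

```python
# Sketch of the adjusted comparison: logistic regression of a health
# complaint on sector, controlling for age, BMI and gender. The data are
# synthetic; the true sector effect is set to 0.4 for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(39, 12, n)
bmi = rng.normal(26, 5, n)
male = rng.integers(0, 2, n)
healthcare = rng.integers(0, 2, n)   # 1 = health care sector, 0 = other
logit_p = -6 + 0.05 * age + 0.08 * bmi + 0.4 * healthcare
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(np.column_stack([age, bmi, male, healthcare]).astype(float))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.params)   # last coefficient ~ adjusted sector effect (~0.4)
```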

  10. Implementation of Advanced Warehouses in a Hospital Environment - Case study

    Science.gov (United States)

    Costa, J.; Sameiro Carvalho, M.; Nobre, A.

    2015-05-01

    In Portugal, costs in the healthcare sector are increasing due to several factors, such as the ageing of the population, the growing demand for health care services and increasing investment in new technologies. There is thus a need to reduce costs; the effective and efficient management of logistics supply systems has enormous potential to achieve savings in health care organizations without compromising the quality of the service provided, a critical factor in this sector. This research project studied the implementation of advanced warehouses in the Hospital de Braga patient care units, based on a mix of replenishment approaches: the par-level system, the two-bin system and the consignment model. The logistics supply process is supported by information technology (IT), allowing proactive replacement of products based on the consumption records of the hospital services. The case study was developed in two patient care units, in order to study the impact of operating the three replenishment systems. Results showed that a substantial reduction in inventory holding costs can be achieved in the patient care unit warehouses, while increasing the service level and improving control of incoming and stored materials with fewer human resources. The main conclusion of this work is that multiple replenishment models can be operated according to the types of materials that healthcare organizations deal with, so that they are able to provide quality health care services at a reduced and economically sustainable cost. The adoption of adequate IT has been shown to be critical for the success of the project.
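
    The two-bin logic used in the case study can be illustrated with the toy simulation below; the bin size, demand figures and next-day delivery assumption are invented.

```python
# Toy illustration of two-bin replenishment: consume from the active bin,
# and when it empties, switch to the reserve bin and order a refill.
bin_size = 20
active, reserve = bin_size, bin_size
pending = False
orders = []

daily_demand = [6, 7, 5, 9, 4, 8, 6]
for day, demand in enumerate(daily_demand, start=1):
    if pending:                      # assume the refill arrives the next day
        reserve, pending = bin_size, False
    active -= demand
    if active <= 0:                  # active bin empty: swap bins, reorder
        active += reserve
        reserve = 0
        pending = True
        orders.append(day)

print("orders placed on days:", orders)  # -> [4, 7]
```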

  11. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. I: test of the methodology

    International Nuclear Information System (INIS)

    Berrezueta, E.; Castroviejo, R.

    2007-01-01

    Ore microscopy has traditionally been an important support for controlling ore processing, but the volume of present-day processes is beyond the reach of human operators. Automation is therefore compulsory, but its development through digital image analysis (DIA) is limited by various problems, such as the similarity in reflectance values of some important ores, their anisotropism, and the performance of instruments and methods. The results presented show that automated identification and quantification by DIA are possible through multiband (RGB) determinations with a research 3CCD video camera on a reflected-light microscope. These results were obtained by systematic measurement of selected ores accounting for most of the industrial applications. Polarized light is avoided, so the effects of anisotropism can be neglected. Quality control at various stages and statistical analysis are important, as is the application of complementary criteria (e.g. metallogenetic). The sequential methodology is described and illustrated through practical examples. (Author)

  12. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. II: metallogenetic discriminating criteria

    International Nuclear Information System (INIS)

    Castroviejo, R.; Berrezueta, E.

    2009-01-01

    Ore microscopy may furnish very important information for geometallurgists, but today's needs for automation are difficult to meet with the optical microscope unless an adequate methodology is developed. Some limitations of the routine procedure, related to risks of misidentification caused by the spectral similarity of some ores, call for complementary criteria. Defining ore deposit typologies and the corresponding assemblages guides the choice of species and limits their number. Comparison of the reflectance values of the ores in each defined mineral association shows that their automated identification is possible in most common occurrences. Since the number of species actually considered is greatly limited, performance is increased. The system is not intended to substitute for a mineralogist, but to enhance his performance enormously, while offering the industry an economical procedure to produce a wealth of information that would not be possible with traditional methods, such as point counting. (Author) 33 refs.

  13. Controlled Substance Reconciliation Accuracy Improvement Using Near Real-Time Drug Transaction Capture from Automated Dispensing Cabinets.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin; Gratch, David M; Perino, Michael; Magrann, Jerry

    2016-06-01

    Accurate accounting of controlled drug transactions by inpatient hospital pharmacies is a requirement in the United States under the Controlled Substances Act. At many hospitals, manual distribution of controlled substances from pharmacies is being replaced by automated dispensing cabinets (ADCs) at the point of care. Despite the promise of improved accountability, a high prevalence (15%) of controlled substance discrepancies between ADC records and anesthesia information management systems (AIMS) has been published, with a similar incidence (15.8%; 95% confidence interval [CI], 15.3% to 16.2%) noted at our institution. Most reconciliation errors are clerical. In this study, we describe a method to capture drug transactions in near real-time from our ADCs, compare them with documentation in our AIMS, and evaluate subsequent improvement in reconciliation accuracy. ADC-controlled substance transactions are transmitted to a hospital interface server, parsed, reformatted, and sent to a software script written in Perl. The script extracts the data and writes them to a SQL Server database. Concurrently, controlled drug totals for each patient having care are documented in the AIMS and compared with the balance of the ADC transactions (i.e., vending, transferring, wasting, and returning drug). Every minute, a reconciliation report is available to anesthesia providers over the hospital Intranet from AIMS workstations. The report lists all patients, the current provider, the balance of ADC transactions, the totals from the AIMS, the difference, and whether the case is still ongoing or had concluded. Accuracy and latency of the ADC transaction capture process were assessed via simulation and by comparison with pharmacy database records, maintained by the vendor on a central server located remotely from the hospital network. For assessment of reconciliation accuracy over time, data were collected from our AIMS from January 2012 to June 2013 (Baseline), July 2013 to April 2014
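
    The per-case comparison described above can be sketched as follows: the net of ADC transactions (vended minus wasted minus returned) is checked against the total documented in the AIMS. Drug names, quantities and the sign convention are fabricated examples, not the hospital's actual data model.

```python
# Simplified sketch of the reconciliation idea: per case, the net of ADC
# transactions should equal the dose documented in the AIMS. All records
# below are fabricated examples.
adc_transactions = [
    {"case": "C1", "drug": "fentanyl_mcg", "type": "vend",  "qty": 250},
    {"case": "C1", "drug": "fentanyl_mcg", "type": "waste", "qty": 50},
    {"case": "C2", "drug": "fentanyl_mcg", "type": "vend",  "qty": 100},
]
aims_documented = {("C1", "fentanyl_mcg"): 200, ("C2", "fentanyl_mcg"): 80}

sign = {"vend": 1, "waste": -1, "return": -1}
balance = {}
for t in adc_transactions:
    key = (t["case"], t["drug"])
    balance[key] = balance.get(key, 0) + sign[t["type"]] * t["qty"]

for key, adc_total in sorted(balance.items()):
    diff = adc_total - aims_documented.get(key, 0)
    status = "OK" if diff == 0 else f"DISCREPANCY ({diff:+d})"
    print(key, status)
# -> ('C1', 'fentanyl_mcg') OK
# -> ('C2', 'fentanyl_mcg') DISCREPANCY (+20)
```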

  14. Marketing automation

    Directory of Open Access Journals (Sweden)

    TODOR Raluca Dania

    2017-01-01

    The automation of the marketing process seems nowadays to be the only solution for facing the major changes brought about by the fast evolution of technology and the continuous increase in supply and demand. In order to achieve the desired marketing results, businesses have to employ digital marketing and communication services. These services are efficient and measurable thanks to the marketing technology used to track, score and implement each campaign. Due to technical progress, marketing fragmentation and the demand for customized products and services on the one hand, and the need for constructive dialogue with customers, immediate and flexible responses, and the necessity of measuring investments and results on the other, the classical marketing approach has changed and continues to evolve substantially.

  15. 77 FR 20353 - United States Warehouse Act; Export Food Aid Commodities Licensing Agreement

    Science.gov (United States)

    2012-04-04

    ... licensing agreement include, but are not limited to, corn soy blend, vegetable oil, and pulses such as peas, beans, and lentils. USWA licensing is a voluntary program. Warehouse operators that apply for USWA...

  16. 76 FR 13972 - United States Warehouse Act; Export Food Aid Commodities Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ..., nuts, cottonseed, and dry beans. Warehouse operators that apply voluntarily agree to be licensed... program for port and transload facility operators storing EFAC. This proposal is in response to the...

  17. Warehouse design and product assignment and allocation: A mathematical programming model

    OpenAIRE

    Geraldes, Carla A. S.; Carvalho, Maria Sameiro; Pereira, Guilherme

    2012-01-01

    Warehouses can be considered one of the most important nodes in supply chains. The dynamic nature of today's markets compels organizations to an incessant reassessment in an effort to respond to continuous challenges. Therefore warehouses must be continually re-evaluated to ensure that they are consistent with both market's demands and management's strategies. In this paper we discuss a mathematical programming model aiming to support product assignment and allocation to the functional areas ...

  18. The visit-data warehouse: enabling novel secondary use of health information exchange data.

    Science.gov (United States)

    Fleischman, William; Lowry, Tina; Shapiro, Jason

    2014-01-01

    Health Information Exchange (HIE) efforts face challenges with data quality and performance, and this becomes especially problematic when data is leveraged for uses beyond primary clinical use. We describe a secondary data infrastructure focusing on patient-encounter, nonclinical data that was built on top of a functioning HIE platform to support novel secondary data uses and prevent potentially negative impacts these uses might have otherwise had on HIE system performance. HIE efforts have generally formed for the primary clinical use of individual clinical providers searching for data on individual patients under their care, but many secondary uses have been proposed and are being piloted to support care management, quality improvement, and public health. This infrastructure review describes a module built into the Healthix HIE. Healthix, based in the New York metropolitan region, comprises 107 participating organizations with 29,946 acute-care beds in 383 facilities, and includes more than 9.2 million unique patients. The primary infrastructure is based on the InterSystems proprietary Caché data model distributed across servers in multiple locations, and uses a master patient index to link individual patients' records across multiple sites. We built a parallel platform, the "visit data warehouse," of patient encounter data (demographics, date, time, and type of visit) using a relational database model to allow accessibility using standard database tools and flexibility for developing secondary data use cases. These four secondary use cases include the following: (1) tracking encounter-based metrics in a newly established geriatric emergency department (ED), (2) creating a dashboard to provide a visual display as well as a tabular output of near-real-time de-identified encounter data from the data warehouse, (3) tracking frequent ED users as part of a regional-approach to case management intervention, and (4) improving an existing quality improvement program

  19. Logistics Cost Calculation of Implementation Warehouse Management System: A Case Study

    Directory of Open Access Journals (Sweden)

    Kučera Tomáš

    2017-01-01

    A warehouse management system can take full advantage of available resources and provide efficient warehousing services. This paper aims to show the advantages and disadvantages of the warehouse management system in a chosen enterprise focused on logistics services and transportation. The paper presents an innovative approach to warehousing and shows how a logistics enterprise can reduce its logistics costs. This approach includes reducing the costs of establishment and operation, and savings in the overall assessment of the implementation of the warehouse management system. The innovative warehouse management system is demonstrated through a case study, classified as a qualitative scientific method, in the chosen logistics enterprise. The paper is based on a review of the world literature and analyses of internal logistics processes, data and enterprise documents. The paper identifies costs related to personnel, handling equipment and material identification. Implementation of the warehouse management system will reduce the overall logistics costs of warehousing and extend the warehouse management system to other parts of the logistics chain.

  20. A Novel Optimization Method on Logistics Operation for Warehouse & Port Enterprises Based on Game Theory

    Directory of Open Access Journals (Sweden)

    Junyang Li

    2013-09-01

    Purpose: This investigation aims to deal with the competitive relationship among different warehouses & ports within the same company. Design/methodology/approach: In this paper, game theory is used to build the optimization model, and a genetic algorithm is used to solve it. Findings: Unnecessary competition will arise if there is little internal communication among different warehouses & ports in one company. This paper presents a novel optimization method for the warehouse & port logistics operation model. Originality/value: The warehouse & port logistics business combines warehousing services with terminal services that port logistics provides through the existing port infrastructure. The newly proposed method can help to optimize the logistics operation model for warehouse & port enterprises effectively. We take Sinotrans Guangdong Company as an example to illustrate the newly proposed method. Finally, based on the case study, this paper offers some responses and suggestions on logistics operation in Sinotrans Guangdong's warehouses & ports for their future development.

  1. Financing agribusiness: Insurance coverage as protection against credit risk of warehouse receipt collateral

    Directory of Open Access Journals (Sweden)

    Jovičić Daliborka

    2017-01-01

    Financing agribusiness by warehouse receipts allows agricultural producers to obtain working capital on the basis of agricultural products stored in licensed warehouses as collateral. The implementation of the system of licensed warehouses and the issuance of warehouse receipts as collateral for obtaining a bank loan is supported by the European Bank for Reconstruction and Development and has had positive results in the neighbouring countries. The precondition for financing this project was to establish a Compensation Fund providing insurance coverage for licensed warehouses against professional liability. However, in the absence of an adequate legal framework, operational risk may arise. Bearing in mind that Serbia has a tradition in the insurance industry and a number of operating insurance companies, the issue is that of the economic benefit and the method of insuring against this risk. The paper presents a detailed analysis of the operation of the Fund, capital requirements, the solvency margin, and a critical review of the Law on Public Warehouses, which regulates the rights and obligations of the Compensation Fund in the case of loss occurrence.

  2. An improved, automated whole air sampler and gas chromatography mass spectrometry analysis system for volatile organic compounds in the atmosphere

    Science.gov (United States)

    Lerner, Brian M.; Gilman, Jessica B.; Aikin, Kenneth C.; Atlas, Elliot L.; Goldan, Paul D.; Graus, Martin; Hendershot, Roger; Isaacman-VanWertz, Gabriel A.; Koss, Abigail; Kuster, William C.; Lueb, Richard A.; McLaughlin, Richard J.; Peischl, Jeff; Sueper, Donna; Ryerson, Thomas B.; Tokarek, Travis W.; Warneke, Carsten; Yuan, Bin; de Gouw, Joost A.

    2017-01-01

    Volatile organic compounds were quantified during two aircraft-based field campaigns using highly automated, whole air samplers with expedited post-flight analysis via a new custom-built, field-deployable gas chromatography-mass spectrometry instrument. During flight, air samples were pressurized with a stainless steel bellows compressor into electropolished stainless steel canisters. The air samples were analyzed using a novel gas chromatograph system designed specifically for field use which eliminates the need for liquid nitrogen. Instead, a Stirling cooler is used for cryogenic sample pre-concentration at temperatures as low as -165 °C. The analysis system was fully automated on a 20 min cycle to allow for unattended processing of an entire flight of 72 sample canisters within 30 h, thereby reducing typical sample residence times in the canisters to less than 3 days. The new analytical system is capable of quantifying a wide suite of C2 to C10 organic compounds at part-per-trillion sensitivity. This paper describes the sampling and analysis systems, along with the data analysis procedures which include a new peak-fitting software package for rapid chromatographic data reduction. Instrument sensitivities, uncertainties and system artifacts are presented for 35 trace gas species in canister samples. Comparisons of reported mixing ratios from each field campaign with measurements from other instruments are also presented.

  3. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Background: We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure, for bioinformatics research and development. Description: The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion: The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data, enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First

  4. Reduction in energy consumption and operating cost in a dried corn warehouse using logistics techniques

    Directory of Open Access Journals (Sweden)

    Korrakot Y. Tippayawong

    2013-06-01

    Corn is one of the major economic crops in Thailand. Corn postharvest operations involve various practices that consume a large amount of energy. Different energy conservation measures have been implemented, but logistics considerations are not normally employed. In this work, an attempt has been made to demonstrate that logistics techniques can offer a significant reduction in energy and cost. The main objective is to identify and demonstrate possible approaches to improving energy efficiency and reducing operating cost for a dried corn warehouse operator. Three main problems are identified: (i) relatively high fuel consumption in the internal transfer process, (ii) low quality of dried corn, and (iii) excess expenditure on outbound transportation. Solutions are proposed and implemented using logistics operations. Improvement is achieved using plant layout and shortest-path techniques, resulting in a reduction of almost 50% in energy consumption for the internal transfer process. Installation of an air distributor in the grain storage unit reduces the loss due to poor-quality dried corn from 17% to 10%. Excess expenditure on dried corn distribution is reduced by 6% with the application of a global positioning system.
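
    The shortest-path technique mentioned for the internal transfer process can be illustrated with Dijkstra's algorithm on a toy graph of warehouse locations; the nodes and distances below are invented.

```python
# Dijkstra's algorithm on an invented graph of warehouse locations,
# illustrating the shortest-path idea applied to internal transfers.
import heapq

graph = {
    "intake":  {"dryer": 40, "aisle_a": 25},
    "dryer":   {"storage": 30},
    "aisle_a": {"dryer": 10, "storage": 60},
    "storage": {},
}

def dijkstra(source, target):
    best = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > best.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt, w in graph[node].items():
            nd = d + w
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

print(dijkstra("intake", "storage"))  # 25 + 10 + 30 = 65
```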

  5. Desain Sistem Semantic Data Warehouse dengan Metode Ontology dan Rule Based untuk Mengolah Data Akademik Universitas XYZ di Bali [Design of a Semantic Data Warehouse System with Ontology and Rule-Based Methods for Processing Academic Data of XYZ University in Bali]

    Directory of Open Access Journals (Sweden)

    Made Pradnyana Ambara

    2016-06-01

    Traditional data warehouses have several weaknesses that make the resulting data unspecific and ineffective. A semantic data warehouse system is a solution to the problems of traditional data warehouses, with advantages that include specific data quality management, with a uniform data format to support good OLAP reporting, and more effective information retrieval using natural-language keywords. Modelling the semantic data warehouse system with an ontology method produces a resource description framework schema (RDFS) logic model, which is transformed into a snowflake schema. The required academic reports are generated using Kimball's nine-step method, and semantic search is implemented with a rule-based method. Testing was carried out using two methods: black-box testing and a checklist questionnaire. From the results of this research it can be concluded that the semantic data warehouse system can support academic data processing, producing quality reports to support the decision-making process.

  6. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade-induced automation in manufacturing firms, using unique data that combine a retrospective survey we have assembled with register data for 2005-2010. In particular, we establish a causal effect whereby firms that have specialized in product types for which Chinese exports to the world market have risen sharply invest more in automated capital than firms that have specialized in other product types. We also study the relationship between automation and firm performance and find that firms with large increases in the scale and scope of automation have faster productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and by increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation.

  7. DW4TR: A Data Warehouse for Translational Research.

    Science.gov (United States)

    Hu, Hai; Correll, Mick; Kvecher, Leonid; Osmond, Michelle; Clark, Jim; Bekhash, Anthony; Schwab, Gwendolyn; Gao, De; Gao, Jun; Kubatin, Vladimir; Shriver, Craig D; Hooke, Jeffrey A; Maxwell, Larry G; Kovatich, Albert J; Sheldon, Jonathan G; Liebman, Michael N; Mural, Richard J

    2011-12-01

    The linkage between the clinical and laboratory research domains is a key issue in translational research. Integration of clinicopathologic data alone is a major task given the number of data elements involved. For a translational research environment, it is critical to make these data usable at the point-of-need. Individual systems have been developed to meet the needs of particular projects though the need for a generalizable system has been recognized. Increased use of Electronic Medical Record data in translational research will demand generalizing the system for integrating clinical data to support the study of a broad range of human diseases. To ultimately satisfy these needs, we have developed a system to support multiple translational research projects. This system, the Data Warehouse for Translational Research (DW4TR), is based on a light-weight, patient-centric modularly-structured clinical data model and a specimen-centric molecular data model. The temporal relationships of the data are also part of the model. The data are accessed through an interface composed of an Aggregated Biomedical-Information Browser (ABB) and an Individual Subject Information Viewer (ISIV) which target general users. The system was developed to support a breast cancer translational research program and has been extended to support a gynecological disease program. Further extensions of the DW4TR are underway. We believe that the DW4TR will play an important role in translational research across multiple disease types.

  8. Automated Spatial Brain Normalization and Hindbrain White Matter Reference Tissue Give Improved [(18)F]-Florbetaben PET Quantitation in Alzheimer's Model Mice.

    Science.gov (United States)

    Overhoff, Felix; Brendel, Matthias; Jaworska, Anna; Korzhova, Viktoria; Delker, Andreas; Probst, Federico; Focke, Carola; Gildehaus, Franz-Josef; Carlsen, Janette; Baumann, Karlheinz; Haass, Christian; Bartenstein, Peter; Herms, Jochen; Rominger, Axel

    2016-01-01

    Preclinical PET studies of β-amyloid (Aβ) accumulation are of growing importance, but comparisons between research sites require standardized and optimized methods for quantitation. Therefore, we aimed to evaluate systematically the (1) impact of an automated algorithm for spatial brain normalization, and (2) intensity scaling methods of different reference regions for Aβ-PET in a large dataset of transgenic mice. PS2APP mice in a 6 week longitudinal setting (N = 37) and another set of PS2APP mice at a histologically assessed narrow range of Aβ burden (N = 40) were investigated by [(18)F]-florbetaben PET. Manual spatial normalization by three readers at different training levels was performed prior to application of an automated brain spatial normalization and inter-reader agreement was assessed by Fleiss Kappa (κ). For this method the impact of templates at different pathology stages was investigated. Four different reference regions on brain uptake normalization were used to calculate frontal cortical standardized uptake value ratios (SUVRCTX∕REF), relative to raw SUVCTX. Results were compared on the basis of longitudinal stability (Cohen's d), and in reference to gold standard histopathological quantitation (Pearson's R). Application of an automated brain spatial normalization resulted in nearly perfect agreement (all κ≥0.99) between different readers, with constant or improved correlation with histology. Templates based on inappropriate pathology stage resulted in up to 2.9% systematic bias for SUVRCTX∕REF. All SUVRCTX∕REF methods performed better than SUVCTX both with regard to longitudinal stability (d≥1.21 vs. d = 0.23) and histological gold standard agreement (R≥0.66 vs. R≥0.31). Voxel-wise analysis suggested a physiologically implausible longitudinal decrease by global mean scaling. The hindbrain white matter reference (R mean = 0.75) was slightly superior to the brainstem (R mean = 0.74) and the cerebellum (R mean = 0.73). Automated
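
    The quantitation described above rests on the SUVR: mean tracer uptake in a target region divided by mean uptake in a reference region such as hindbrain white matter. The sketch below computes it on a fabricated toy volume, assuming NumPy.

```python
# Sketch of the SUVR computation underlying the comparison above: mean
# tracer uptake in a target region divided by mean uptake in a reference
# region (e.g. hindbrain white matter). Voxel values are fabricated.
import numpy as np

suv = np.random.default_rng(1).normal(1.0, 0.1, size=(32, 32, 32))
cortex_mask = np.zeros(suv.shape, dtype=bool)
cortex_mask[8:16, 8:16, 8:16] = True
reference_mask = np.zeros(suv.shape, dtype=bool)
reference_mask[20:28, 20:28, 4:10] = True

suv[cortex_mask] *= 1.3   # simulate elevated cortical tracer binding

suvr = suv[cortex_mask].mean() / suv[reference_mask].mean()
print(f"SUVR(cortex/reference) = {suvr:.2f}")   # ~1.3 for this toy volume
```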

  9. Leveraging a Statewide Clinical Data Warehouse to Expand Boundaries of the Learning Health System.

    Science.gov (United States)

    Turley, Christine B; Obeid, Jihad; Larsen, Rick; Fryar, Katrina M; Lenert, Leslie; Bjorn, Arik; Lyons, Genevieve; Moskowitz, Jay; Sanderson, Iain

    2016-01-01

    Learning Health Systems (LHS) require accessible, usable health data and a culture of collaboration-a challenge for any single system, let alone disparate organizations, with macro- and micro-systems. Recently, the National Science Foundation described this important setting as a cyber-social ecosystem. In 2004, in an effort to create a platform for transforming health in South Carolina, Health Sciences South Carolina (HSSC) was established as a research collaboration of the largest health systems, academic medical centers and research intensive universities in South Carolina. With work beginning in 2010, HSSC unveiled an integrated Clinical Data Warehouse (CDW) in 2013 as a crucial anchor to a statewide LHS. This CDW integrates data from independent health systems in near-real time, and harmonizes the data for aggregation and use in research. With records from over 2.7 million unique patients spanning 9 years, this multi-institutional statewide clinical research repository allows integrated individualized patient-level data to be used for multiple population health and biomedical research purposes. In the first 21 months of operation, more than 2,800 de-identified queries occurred through i2b2, with 116 users. HSSC has developed and implemented solutions to complex issues emphasizing anti-competitiveness and participatory governance, and serves as a recognized model to organizations working to improve healthcare quality by extending the traditional borders of learning health systems.

  10. A NEW HYBRID YIN-YANG-PAIR-PARTICLE SWARM OPTIMIZATION ALGORITHM FOR UNCAPACITATED WAREHOUSE LOCATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A. A. Heidari

    2017-09-01

    Yin-Yang-pair optimization (YYPO) is one of the latest metaheuristic algorithms (MA), proposed in 2015, which draws on the philosophy of balance between conflicting concepts. The particle swarm optimizer (PSO) is one of the first population-based MA, inspired by the social behaviors of birds. Unlike PSO, YYPO is not a nature-inspired optimizer. It has low complexity, starts with only two initial positions, and can produce more points with regard to the dimension of the target problem. Owing to the unique advantages of these methodologies, and to mitigate the premature convergence and local optima (LO) stagnation problems in PSO, in this work a continuous hybrid strategy based on the behaviors of PSO and YYPO is proposed to attain suboptimal solutions of uncapacitated warehouse location (UWL) problems. This efficient hierarchical PSO-based optimizer (PSOYPO) can improve the effectiveness of PSO on spatial optimization tasks such as the family of UWL problems. The performance of the proposed PSOYPO is verified on several UWL benchmark cases. These test cases have been used in several works to evaluate the efficacy of different MA. The PSOYPO is then compared to the standard PSO, a genetic algorithm (GA), harmony search (HS), modified HS (OBCHS), and evolutionary simulated annealing (ESA). The experimental results demonstrate that the PSOYPO shows better or competitive efficacy compared to PSO and other MA.

  11. a New Hybrid Yin-Yang Swarm Optimization Algorithm for Uncapacitated Warehouse Location Problems

    Science.gov (United States)

    Heidari, A. A.; Kazemizade, O.; Hakimpour, F.

    2017-09-01

    Yin-Yang-pair optimization (YYPO) is one of the latest metaheuristic algorithms (MA), proposed in 2015, which draws on the philosophy of balance between conflicting concepts. The particle swarm optimizer (PSO) is one of the first population-based MA, inspired by the social behaviors of birds. Unlike PSO, YYPO is not a nature-inspired optimizer. It has low complexity, starts with only two initial positions, and can produce more points with regard to the dimension of the target problem. Owing to the unique advantages of these methodologies, and to mitigate the premature convergence and local optima (LO) stagnation problems in PSO, in this work a continuous hybrid strategy based on the behaviors of PSO and YYPO is proposed to attain suboptimal solutions of uncapacitated warehouse location (UWL) problems. This efficient hierarchical PSO-based optimizer (PSOYPO) can improve the effectiveness of PSO on spatial optimization tasks such as the family of UWL problems. The performance of the proposed PSOYPO is verified on several UWL benchmark cases. These test cases have been used in several works to evaluate the efficacy of different MA. The PSOYPO is then compared to the standard PSO, a genetic algorithm (GA), harmony search (HS), modified HS (OBCHS), and evolutionary simulated annealing (ESA). The experimental results demonstrate that the PSOYPO shows better or competitive efficacy compared to PSO and other MA.
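
    One half of the PSOYPO hybrid is the standard PSO velocity and position update, sketched below on a toy continuous objective (the YYPO half and the actual UWL objective, which scores warehouse open/close decisions, are omitted); the coefficients are common textbook values, not the paper's settings.

```python
# Standard PSO update on a toy objective, as one building block of a
# PSO-based hybrid. Coefficients and the objective are illustrative.
import random

def objective(x):                      # toy stand-in for a UWL cost model
    return sum(v * v for v in x)

dim, swarm_size, iters = 5, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights

pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(swarm_size)]
vel = [[0.0] * dim for _ in range(swarm_size)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)

for _ in range(iters):
    for i in range(swarm_size):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]       # update the personal best
    gbest = min(pbest, key=objective)  # update the global best

print(f"best value found: {objective(gbest):.4f}")
```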

  12. Improving the image quality of contrast-enhanced MR angiography by automated image registration: A prospective study in peripheral arterial disease of the lower extremities

    International Nuclear Information System (INIS)

    Menke, Jan

    2010-01-01

    Objective: If a patient has moved during digital subtraction angiography (DSA), manual pixel shifting can improve the image quality. This study investigated whether such image registration can also improve the quality of contrast-enhanced magnetic resonance angiography (MRA) in patients with peripheral arterial disease of the lower extremities. Materials and methods: 404 leg MRAs of patients likely to have peripheral arterial disease were included in this prospective study. The standard non-registered MRAs were compared to automatically registered MRAs (linear, affine and warp registration) using four image quality parameters, including the vessel detection probability (VDP) in maximum intensity projection (MIP) images and contrast-to-noise ratios (CNR). The different registration types were compared by analysis of variance. Results: All studied image quality parameters showed similar trends. In general, registration significantly improved leg MRA quality (P < 0.05). The 12% of lower legs with a body shift of 1 mm or more showed the highest gain in image quality when using linear registration instead of no registration, with an average VDP gain of 20-49%. Warp registration improved the image quality slightly further. Conclusion: Automated image registration can improve MRA image quality, especially in the lower legs, an effect comparable to that of pixel shifting in DSA.

  13. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  14. Study and application of data mining and data warehouse in CIMS

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Liu, Daxin

    2003-03-01

    The interest in analyzing data has grown tremendously in recent years. To analyze data, a multitude of technologies is needed, namely technologies from the fields of Data Warehouse, Data Mining, and On-line Analytical Processing (OLAP). This paper presents a new data warehouse architecture for CIMS based on CRGC-CIMS application engineering. The data source of this architecture is the database of the CRGC-CIMS system. The data are placed in a global data set by extraction, filtering, and integration, and are then translated into the data warehouse according to information requests. We address two advantages of the new model in the CRGC-CIMS application. In addition, a data warehouse contains many materialized views over the data provided by distributed heterogeneous databases for the purpose of efficiently implementing decision support, OLAP queries, or data mining, so it is important to select the right views to materialize that answer a given set of queries. In this paper, we have also designed algorithms for selecting a set of views to be materialized in a data warehouse in order to answer the most queries under a given space constraint. First, we give a cost model for selecting materialized views. Then we give algorithms that adopt a gradually recursive, bottom-up method. We give a description and realization of the algorithms. Finally, we discuss the advantages and shortcomings of our approach and future work.
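
    The view-selection step can be illustrated with a simple greedy heuristic that maximizes estimated benefit per unit of space, in the spirit of classic view-materialization algorithms; it is not the paper's bottom-up recursive algorithm, and the sizes and benefits below are invented.

        def select_views(candidates, space_budget):
            # candidates: dict view -> (size, benefit), where benefit estimates the
            # total query-cost saving if the view is materialized
            chosen, used = [], 0
            remaining = dict(candidates)
            while remaining:
                # pick the view with the best benefit per unit of space that still fits
                fitting = {v: (s, b) for v, (s, b) in remaining.items()
                           if used + s <= space_budget}
                if not fitting:
                    break
                best = max(fitting, key=lambda v: fitting[v][1] / fitting[v][0])
                size, _ = remaining.pop(best)
                chosen.append(best)
                used += size
            return chosen

        views = {"by_day": (40, 90), "by_region": (25, 60), "by_product": (50, 70)}
        print(select_views(views, space_budget=70))   # -> ['by_region', 'by_day']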

  15. Improving the automated detection of refugee/IDP dwellings using the multispectral bands of the WorldView-2 satellite

    Science.gov (United States)

    Kemper, Thomas; Gueguen, Lionel; Soille, Pierre

    2012-06-01

    The enumeration of the population remains a critical task in the management of refugee/IDP camps. Analysis of very high spatial resolution satellite data proved to be an efficient and secure approach for the estimation of dwellings and the monitoring of camps over time. In this paper we propose a new methodology for the automated extraction of features, based on differential morphological decomposition segmentation for feature extraction and interactive training-sample selection from the max-tree and min-tree structures. This feature extraction methodology is tested on a WorldView-2 scene of an IDP camp in Darfur, Sudan. Special emphasis is given to the additional available bands of the WorldView-2 sensor. The results obtained show that the interactive image information tool performs very well by tuning the feature extraction to the local conditions. The analysis of different spectral subsets shows that it is possible to obtain good results already with an RGB combination, but increasing the number of spectral bands makes the detection of dwellings more accurate. Best results were obtained using all eight bands of the WorldView-2 satellite.

  16. An improved method for the determination of trace levels of arsenic and antimony in geological materials by automated hydride generation-atomic absorption spectroscopy

    Science.gov (United States)

    Crock, J.G.; Lichte, F.E.

    1982-01-01

    An improved, automated method for the determination of arsenic and antimony in geological materials is described. After digestion of the material in sulfuric, nitric, hydrofluoric and perchloric acids, a hydrochloric acid solution of the sample is automatically mixed with reducing agents, acidified with additional hydrochloric acid, and treated with a sodium tetrahydroborate solution to form arsine and stibine. The hydrides are decomposed in a heated quartz tube in the optical path of an atomic absorption spectrometer. The absorbance peak height for arsenic or antimony is measured. Interferences that exist are minimized to the point where most geological materials including coals, soils, coal ashes, rocks and sediments can be analyzed directly without use of standard additions. The relative standard deviation of the digestion and the instrumental procedure is less than 2% at the 50 µg l-1 As or Sb level. The reagent-blank detection limit is 0.2 µg l-1 As or Sb. © 1982.

  17. Automated evaluation of ultrasonic indications

    International Nuclear Information System (INIS)

    Hansch, M.K.T.; Stegemann, D.

    1994-01-01

    Future requirements of reliability and reproducibility in quality assurance demand computer evaluation of defect indications. The ultrasonic method, with its large field of applications and a high potential for automation, provides all preconditions for fully automated inspection. The survey proposes several desirable hardware improvements, data acquisition requirements and software configurations. (orig.) [de]

  18. Experiments and Pilot Study Evaluating the Performance of Reading Miscue Detector and Automated Reading Tutor for Filipino: A Children's Speech Technology for Improving Literacy

    Directory of Open Access Journals (Sweden)

    Ronald M. Pascual

    2017-06-01

    Full Text Available The latest advances in speech processing technology have allowed the development of automated reading tutors (ART) for improving children's literacy. An ART is a computer-assisted learning system based on oral reading fluency (ORF) instruction and automated speech recognition (ASR) technology. However, the design of an ART system is language-specific and thus requires developing a system specifically for the Filipino language. In a previous work, the authors presented the development of the children's Filipino speech corpus (CFSC) for the purpose of designing an ART in Filipino. In this paper, the authors present the evaluation of the ART in Filipino, which integrates a reference verification (RV)- and word-duration-analysis-based reading miscue detector (RMD), a user interface, and a feedback and instruction set. The authors also present the performance evaluation of the RMD in offline tests, and the effectiveness of the ART as shown by the results of the intervention program, a month-long pilot study that involved the use of the ART by a small group of students. Offline test results show that the RMD's performance (i.e., FA rate ≈ 3% and MDerr rate ≈ 5%) is on par with state-of-the-art RMDs reported in the literature. The results of the ART intervention experiment showed that the students, on average, improved their words-correct-per-minute (WCPM) rate by 4.66 times, their ORF-16 scores by 6.0 times, and their reading comprehension exam scores by 4.4 times after using the ART.

  19. Automation in organizations: Eternal conflict

    Science.gov (United States)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  20. Improving the correlation of structural FEA models by the application of automated high density robotized laser Doppler vibrometry

    Science.gov (United States)

    Chowanietz, Maximilian; Bhangaonkar, Avinash; Semken, Michael; Cockrill, Martin

    2016-06-01

    Sound has had an intricate relation with the wellbeing of humans since time immemorial. It has the ability to enhance the quality of life immensely when present as music; at the same time, it can degrade its quality when manifested as noise. Hence, understanding its sources and the processes by which it is produced gains acute significance. Although various theories exist with respect to evolution of bells, it is indisputable that they carry millennia of cultural significance, and at least a few centuries of perfection with respect to design, casting and tuning. Despite the science behind its design, the nuances pertaining to founding and tuning have largely been empirical, and conveyed from one generation to the next. Post-production assessment for bells remains largely person-centric and traditional. However, progressive bell manufacturers have started adopting methods such as finite element analysis (FEA) for informing and optimising their future model designs. To establish confidence in the FEA process it is necessary to correlate the virtual model against a physical example. This is achieved by performing an experimental modal analysis (EMA) and comparing the results with those from FEA. Typically to collect the data for an EMA, the vibratory response of the structure is measured with the application of accelerometers. This technique has limitations; principally these are the observer effect and limited geometric resolution. In this paper, 3-dimensional laser Doppler vibrometry (LDV) has been used to measure the vibratory response with no observer effect due to the non-contact nature of the technique; resulting in higher accuracy measurements as the input to the correlation process. The laser heads were mounted on an industrial robot that enables large objects to be measured and extensive data sets to be captured quickly through an automated process. This approach gives previously unobtainable geometric resolution resulting in a higher confidence EMA. This is
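
    Correlating an FEA model against an experimental modal analysis is commonly quantified with the modal assurance criterion (MAC), which compares mode-shape vectors from test and model; the paper does not spell out its correlation metric, so the sketch below is a generic MAC computation on synthetic mode shapes.

        import numpy as np

        def mac(phi_test, phi_fea):
            # modal assurance criterion between two mode-shape vectors (0..1;
            # values near 1 indicate well-correlated modes)
            num = abs(np.vdot(phi_test, phi_fea)) ** 2
            return num / (np.vdot(phi_test, phi_test).real * np.vdot(phi_fea, phi_fea).real)

        # two similar mode shapes sampled at the LDV measurement points (synthetic)
        x = np.linspace(0, np.pi, 50)
        phi_a = np.sin(x)                                                   # 'measured' shape
        phi_b = np.sin(x) + 0.05 * np.random.default_rng(1).normal(size=50) # 'FEA' shape
        print(round(mac(phi_a, phi_b), 3))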

  1. A robotic system for automation of logistics functions on the Space Station

    Science.gov (United States)

    Martin, J. C.; Purves, R. B.; Hosier, R. N.; Krein, B. A.

    1988-01-01

    Spacecraft inventory management is currently performed by the crew, and as systems become more complex, increased crew time will be required to perform routine logistics activities. If future spacecraft are to function effectively as research labs and production facilities, crew time must be used efficiently as a limited resource for performing mission functions. The use of automation and robotics technology, such as automated warehouse and materials handling functions, can free the crew from many logistics tasks and provide more efficient use of crew time. Design criteria for a Space Station Automated Logistics Inventory Management System are addressed through the design and demonstration of a mobile, two-armed terrestrial robot. The system functionally represents a zero-gravity automated inventory management system and the problems associated with operating in such an environment. Features of the system include automated storage and retrieval, item recognition, two-armed robotic manipulation, and software control of all inventory item transitions and queries.

  2. Implementasi Data Warehouse dan Data Mining: Studi Kasus Analisis Peminatan Studi Siswa

    Directory of Open Access Journals (Sweden)

    Eka Miranda

    2011-06-01

    Full Text Available This paper discusses the implementation of data mining and its role in supporting decision making related to students' selection of a specialization program. Currently, the university uses a database to store transaction records, which cannot be used directly to support analysis and decision making. Based on these issues, a data warehouse was designed to store large amounts of data, with the potential to gain new perspectives on data distribution and the ability to answer ad hoc questions and perform data analysis. The method used consists of: analysis of records related to students' academic achievement, data warehouse design, and data mining. The paper's results are a data warehouse and data mining design and its implementation with classification techniques and association rules. From these results, the students' tendencies and background patterns in choosing a specialization can be seen, to help them make decisions.
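
    The association-rule measures behind such an analysis reduce to simple counting; the sketch below computes support and confidence for one candidate rule on invented student records.

        # support/confidence for one candidate rule {math_grade=A} -> {major=CS}
        records = [{"math_grade": "A", "major": "CS"}, {"math_grade": "A", "major": "CS"},
                   {"math_grade": "B", "major": "IS"}, {"math_grade": "A", "major": "IS"},
                   {"math_grade": "C", "major": "IS"}]
        antecedent = [r for r in records if r["math_grade"] == "A"]
        both = [r for r in antecedent if r["major"] == "CS"]
        print("support =", len(both) / len(records))        # 0.4
        print("confidence =", len(both) / len(antecedent))  # ~0.67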

  3. Data Delivery and Mapping Over the Web: National Water-Quality Assessment Data Warehouse

    Science.gov (United States)

    Bell, Richard W.; Williamson, Alex K.

    2006-01-01

    The U.S. Geological Survey began its National Water-Quality Assessment (NAWQA) Program in 1991, systematically collecting chemical, biological, and physical water-quality data from study units (basins) across the Nation. In 1999, the NAWQA Program developed a data warehouse to better facilitate national and regional analysis of data from 36 study units started in 1991 and 1994. Data from 15 study units started in 1997 were added to the warehouse in 2001. The warehouse currently contains and links the following data: -- Chemical concentrations in water, sediment, and aquatic-organism tissues and related quality-control data from the USGS National Water Information System (NWIS), -- Biological data for stream-habitat and ecological-community data on fish, algae, and benthic invertebrates, -- Site, well, and basin information associated with thousands of descriptive variables derived from spatial analysis, like land use, soil, and population density, and -- Daily streamflow and temperature information from NWIS for selected sampling sites.

  4. Roadmap to a Comprehensive Clinical Data Warehouse for Precision Medicine Applications in Oncology.

    Science.gov (United States)

    Foran, David J; Chen, Wenjin; Chu, Huiqi; Sadimin, Evita; Loh, Doreen; Riedlinger, Gregory; Goodell, Lauri A; Ganesan, Shridar; Hirshfield, Kim; Rodriguez, Lorna; DiPaola, Robert S

    2017-01-01

    Leading institutions throughout the country have established Precision Medicine programs to support personalized treatment of patients. A cornerstone for these programs is the establishment of enterprise-wide Clinical Data Warehouses. Working shoulder-to-shoulder, a team of physicians, systems biologists, engineers, and scientists at Rutgers Cancer Institute of New Jersey have designed, developed, and implemented the Warehouse with information originating from data sources, including Electronic Medical Records, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology and Pathology archives, and Next Generation Sequencing services. Innovative solutions were implemented to detect and extract unstructured clinical information that was embedded in paper/text documents, including synoptic pathology reports. Supporting important precision medicine use cases, the growing Warehouse enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information of patient tumors individually or as part of large cohorts to identify changes and patterns that may influence treatment decisions and potential outcomes.

  5. From capturing nursing knowledge to retrieval of data from a data warehouse.

    Science.gov (United States)

    Thoroddsen, Asta; Guðjónsdóttir, Hanna K; Guðjónsdóttir, Elisabet

    2014-01-01

    The purpose of the project was to capture nursing data and knowledge and represent it for use and re-use by retrieval from a data warehouse, which contains both clinical and financial hospital data. Today nurses at LUH use standardized nursing terminologies to document information related to patients and nursing care in the EHR at all times. Pre-defined order sets for nursing care have been developed using best practice where available, and tacit nursing knowledge has been captured, coded with standardized nursing terminologies, and made explicit for dissemination in the EHR. All patient-nursing data are permanently stored in a data repository. Core nursing data elements have been selected for transfer to and storage in the data warehouse; patient-nursing data are now captured and stored, can be related to other data elements from the warehouse, and can be retrieved for use and re-use.

  6. Data warehouse for assessing animal health, welfare, risk management and -communication.

    Science.gov (United States)

    Nielsen, Annette Cleveland

    2011-01-01

    The objective of this paper is to give an overview of existing databases in Denmark and describe some of the most important of these in relation to the establishment of the Danish Veterinary and Food Administration's veterinary data warehouse. The purpose of the data warehouse and possible uses of the data are described. Finally, sharing of data and validity of data are discussed. There are databases in other countries describing animal husbandry and veterinary antimicrobial consumption, but Denmark will be the first country relating all data concerning animal husbandry, health and welfare in Danish production animals to each other in a data warehouse. Moreover, creating access to these data for researchers and authorities will hopefully result in easier and more substantial risk-based control, risk management and risk communication by the authorities, and access to data for researchers for epidemiological studies in animal health and welfare.

  7. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    Science.gov (United States)

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
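
    A hand-written Python analogue of what the generated translation-library accessors might look like, with a "get" method that derives a missing attribute through a transformation; all names are invented, and in the real system such code is generated automatically from metadata.

        class Gene:
            # toy analogue of a generated translation-library class
            def __init__(self, symbol=None, sequence=None, length=None):
                self._data = {"symbol": symbol, "sequence": sequence, "length": length}

            def get(self, attr):
                value = self._data.get(attr)
                if value is None and attr == "length" and self._data["sequence"]:
                    # derive the missing attribute via a transformation method
                    value = self._derive_length()
                    self.set(attr, value)
                return value

            def set(self, attr, value):
                self._data[attr] = value

            def _derive_length(self):
                return len(self._data["sequence"])

        g = Gene(symbol="BRCA1", sequence="ATGGATTTATCTGCT")
        print(g.get("length"))   # derived on demand -> 15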

  8. The Impact of E-Commerce Development on the Warehouse Space Market in Poland

    Directory of Open Access Journals (Sweden)

    Dembińska Izabela

    2016-12-01

    Full Text Available The article discusses the impact of the e-commerce sector on the warehouse space market. On the basis of available reports, the development of e-commerce in Poland is characterized, showing the dynamics and the type of change. The needs of the e-commerce sector in the field of logistics, particularly in the area of storage, are presented, and how representatives of the warehouse space market are prepared to support companies in the e-commerce sector is also discussed. The considerations are illustrated by the changes occurring on the warehouse space market in Poland as a result of the development of e-commerce.

  9. Real Time Business Analytics for Buying or Selling Transaction on Commodity Warehouse Receipt System

    Science.gov (United States)

    Djatna, Taufik; Teniwut, Wellem A.; Hairiyah, Nina; Marimin

    2017-10-01

    The smooth flow of information on buying and selling is essential for a commodity warehouse receipt system, such as that for dried seaweed, and for its stakeholders to carry out operational transactions. Buying or selling through a commodity warehouse receipt system is a risky process due to fluctuations in dynamic commodity prices. An integrated system for determining real-time conditions was needed to support transaction decision making by the owner or a prospective buyer. The primary motivation of this study is to propose computational methods to trace market tendency for either buying or selling processes. The empirical results reveal that gain-ratio feature selection with k-NN outperforms other forecasting models, implying that the proposed approach is a promising alternative for exploring the market tendency of warehouse receipt documents, with an accuracy rate of 95.03%.
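
    The gain-ratio feature selection mentioned in the abstract can be sketched as follows for discretized market indicators; the k-NN classification step (majority vote among the k nearest labelled cases) is standard and omitted for brevity. Feature names and data are invented.

        import math
        from collections import Counter

        def entropy(labels):
            n = len(labels)
            return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

        def gain_ratio(feature_values, labels):
            # information gain of a discrete feature, normalized by its split info
            n = len(labels)
            base = entropy(labels)
            cond, split = 0.0, 0.0
            for value, count in Counter(feature_values).items():
                subset = [l for f, l in zip(feature_values, labels) if f == value]
                p = count / n
                cond += p * entropy(subset)
                split -= p * math.log2(p)
            return (base - cond) / split if split > 0 else 0.0

        # toy market-tendency data: feature columns are discretized indicators
        rows = [("up", "high", "buy"), ("up", "low", "buy"),
                ("down", "high", "sell"), ("down", "low", "sell"),
                ("up", "high", "buy"), ("down", "low", "sell")]
        labels = [r[2] for r in rows]
        for i, name in enumerate(["price_trend", "volume"]):
            print(name, round(gain_ratio([r[i] for r in rows], labels), 3))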

  10. To the question about the layout of the racks in the warehouse

    Directory of Open Access Journals (Sweden)

    Ilesaliev D.I.

    2017-03-01

    Full Text Available Warehouses located at points of transshipment of cargo from one mode of transport to another play a significant role in the transformation of cargo for the most effective further transportation of goods. The location of racks and longitudinal passages is important in the operation of a transshipment warehouse. Typically, racks and longitudinal passages are perpendicular to each other; the article proposes a radical change with the "Euclidean advantage". This is another way of designing warehouses for efficient handling of packaged cargo in the supply chain. The purpose is to reduce the mileage of one loader cycle from the loading and unloading areas to the storage areas.

  11. Databases Are Not Toasters: A Framework for Comparing Data Warehouse Appliances

    Science.gov (United States)

    Trajman, Omer; Crolotte, Alain; Steinhoff, David; Nambiar, Raghunath Othayoth; Poess, Meikel

    The success of Business Intelligence (BI) applications depends on two factors, the ability to analyze data ever more quickly and the ability to handle ever increasing volumes of data. Data Warehouse (DW) and Data Mart (DM) installations that support BI applications have historically been built using traditional architectures either designed from the ground up or based on customized reference system designs. The advent of Data Warehouse Appliances (DA) brings packaged software and hardware solutions that address performance and scalability requirements for certain market segments. The differences between DAs and custom installations make direct comparisons between them impractical and suggest the need for a targeted DA benchmark. In this paper we review data warehouse appliances by surveying thirteen products offered today. We assess the common characteristics among them and propose a classification for DA offerings. We hope our results will help define a useful benchmark for DAs.

  12. Heuristics for multi-item two-echelon spare parts inventory control problem with batch ordering in the central warehouse

    NARCIS (Netherlands)

    Topan, E.; Bayindir, Z.P.; Tan, T.

    2010-01-01

    We consider a multi-item two-echelon inventory system in which the central warehouse operates under a (Q, R) policy, and each local warehouse implements an (S − 1, S) policy. The objective is to find the policy parameters minimizing expected system-wide inventory holding and fixed ordering costs subject
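
    A minimal sketch of one ingredient of such policies: the expected on-hand stock and backorders of a local warehouse operating under an (S − 1, S) base-stock rule with Poisson lead-time demand. The paper's actual model (batch ordering at the central warehouse, multi-item interactions) is considerably richer, and the numbers below are invented.

        from math import exp, factorial

        def poisson_pmf(k, mu):
            return exp(-mu) * mu ** k / factorial(k)

        def base_stock_metrics(S, mu, tail=200):
            # expected on-hand stock E[(S-D)+] and backorders E[(D-S)+]
            # for base-stock level S and Poisson lead-time demand D of mean mu
            backorders = sum((d - S) * poisson_pmf(d, mu) for d in range(S + 1, tail))
            on_hand = sum((S - d) * poisson_pmf(d, mu) for d in range(0, S + 1))
            return on_hand, backorders

        # trade off holding vs. backorders for a local warehouse, mu = 2.0
        for S in range(1, 6):
            oh, bo = base_stock_metrics(S, 2.0)
            print(S, round(oh, 3), round(bo, 3))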

  13. Quantitative performance of E-Scribe warehouse in detecting quality issues with digital annotated ECG data from healthy subjects.

    Science.gov (United States)

    Sarapa, Nenad; Mortara, Justin L; Brown, Barry D; Isola, Lamberto; Badilini, Fabio

    2008-05-01

    The US Food and Drug Administration recommends submission of digital electrocardiograms in the standard HL7 XML format into the electrocardiogram warehouse to support preapproval review of new drug applications. The Food and Drug Administration scrutinizes electrocardiogram quality by viewing the annotated waveforms and scoring electrocardiogram quality by the warehouse algorithms. Part of the Food and Drug Administration warehouse is commercially available to sponsors as the E-Scribe Warehouse. The authors tested the performance of E-Scribe Warehouse algorithms by quantifying electrocardiogram acquisition quality, adherence to QT annotation protocol, and T-wave signal strength in 2 data sets: "reference" (104 digital electrocardiograms from a phase I study with sotalol in 26 healthy subjects with QT annotations by computer-assisted manual adjustment) and "test" (the same electrocardiograms with an intentionally introduced predefined number of quality issues). The E-Scribe Warehouse correctly detected differences between the 2 sets expected from the number and pattern of errors in the "test" set (except for 1 subject with QT misannotated in different leads of serial electrocardiograms) and confirmed the absence of differences where none was expected. E-Scribe Warehouse scores below the threshold value identified individual electrocardiograms with questionable T-wave signal strength. The E-Scribe Warehouse showed satisfactory performance in detecting electrocardiogram quality issues that may impair reliability of QTc assessment in clinical trials in healthy subjects.

  14. Rancang Bangun Data Warehouse Untuk Analisis Kinerja Penjualan Pada Industri Dengan Model Spa-Dw

    Directory of Open Access Journals (Sweden)

    Randy Oktrima Putra

    2014-02-01

    Full Text Available A company, especially one active in commerce (profit oriented), needs to analyze its sales performance; by doing so, the company can improve it. One method for analyzing sales performance is to collect historical data related to sales and then process those data to produce information that shows the company's sales performance. A data warehouse is a set of data that is subject oriented, time variant, integrated, and nonvolatile, and that supports company management in decision making. The design of the data warehouse starts with collecting data related to sales, such as product, customer, sales area, and sales transactions. After the data are collected, the next steps are data extraction and transformation. Data extraction is the process of selecting the data that will be loaded into the data warehouse; data transformation applies changes to the extracted data to make them more consistent. After transformation, the data are loaded into the data warehouse. Data in the data warehouse are processed by OLAP (On-Line Analytical Processing) to produce information in the form of chart and query reports. The chart reports are sales charts based on cement type, sales area, and plant, monthly and yearly sales charts, and a chart based on customer feedback. The query reports cover sales by cement type, sales area, plant, and customer. Keywords: Data warehouse; OLAP; Sales performance analysis; Ready mix market

  15. Warehouse Plan for the Multi-Canister Overpacks (MCO) and Baskets

    International Nuclear Information System (INIS)

    MARTIN, M.K.

    2000-01-01

    The Multi-Canister Overpacks (MCO) will contain spent nuclear fuel (SNF) removed from the K East and West Basins. The SNF will be placed in fuel storage baskets that will be stacked inside the MCOs. Approximately 400 MCOs and 2170 baskets will be fabricated for this purpose. These MCOs, loaded with SNF, will be placed in interim storage in the Canister Storage Building (CSB) located in the 200 Area of the Hanford Site. The MCOs consist of different components/sub-assemblies that will be manufactured by one or more vendors. All component/sub-assemblies will be shipped to the Hanford Site Central Stores Warehouse, 2355 Stevens Drive, Building 1163 in the 1100 Area, for inspection and storage until these components are required at the CSB and K Basins. The MCO fuel storage baskets will be manufactured in the MCO basket fabrication shop located in Building 328 of the Hanford Site 300 Area. The MCO baskets will be inspected at the fabrication shop before shipment to the Central Stores Warehouse for storage. The MCO components and baskets will be stored as received from the manufacturer with specified protective coatings, wrappings, and packaging intact to maintain mechanical integrity of the components and to prevent corrosion. The components and baskets will be shipped as needed from the warehouse to the CSB and K Basins. This warehouse plan includes the requirements for receipt of MCO components and baskets from the manufacturers and storage at the Hanford Site Central Stores Warehouse. Transportation of the MCO components and baskets from the warehouse, unwrapping, and assembly of the MCOs are the responsibility of SNF Operations and are not included in this plan.

  16. PERANCANGAN DAN IMPLEMENTASI DATA WAREHOUSE METEOROLOGI, KLIMATOLOGI, GEOFISIKA DAN BENCANA ALAM

    Directory of Open Access Journals (Sweden)

    Agus Safril

    2014-05-01

    Full Text Available BMKG holds data originating from several historical (legacy) database systems, stored both in database information systems and in worksheets. These legacy data are often left unused when a new database system is developed. So that the legacy data can still be used, the old and new data must be integrated. The data warehouse is the concept used to integrate these data into BMKG's unified database storage system. Data integration is carried out by extracting the required data items from the data sources, which are the information systems of the meteorology, climatology, and geophysics groups. The integration process starts with extraction, followed by transformation into a uniform format suitable for analysis, and finally loading into the data warehouse. The prototype data warehouse covers data input through extraction of both legacy and new data using data-acquisition software. The output consists of data reports with reporting periods according to user needs.

  17. The Challenges of Data Quality Evaluation in a Joint Data Warehouse.

    Science.gov (United States)

    Bae, Charles J; Griffith, Sandra; Fan, Youran; Dunphy, Cheryl; Thompson, Nicolas; Urchek, John; Parchman, Alandra; Katzan, Irene L

    2015-01-01

    The use of clinically derived data from electronic health records (EHRs) and other electronic clinical systems can greatly facilitate clinical research as well as operational and quality initiatives. One approach for making these data available is to incorporate data from different sources into a joint data warehouse. When using such a data warehouse, it is important to understand the quality of the data. The primary objective of this study was to determine the completeness and concordance of common types of clinical data available in the Knowledge Program (KP) joint data warehouse, which contains feeds from several electronic systems including the EHR. A manual review was performed of specific data elements for 250 patients from an EHR, and these were compared with corresponding elements in the KP data warehouse. Completeness and concordance were calculated for five categories of data including demographics, vital signs, laboratory results, diagnoses, and medications. In general, data elements for demographics, vital signs, diagnoses, and laboratory results were present in more cases in the source EHR compared to the KP. When data elements were available in both sources, there was a high concordance. In contrast, the KP data warehouse documented a higher prevalence of deaths and medications compared to the EHR. Several factors contributed to the discrepancies between data in the KP and the EHR, including the start date and frequency of data feed updates into the KP, the inability to transfer data located in nonstructured formats (e.g., free text or scanned documents), and incomplete and missing data variables in the source EHR. When evaluating the quality of a data warehouse with multiple data sources, assessing completeness and concordance between the data set and source data may be better than designating one to be a gold standard. This will allow the user to optimize the method and timing of data transfer in order to capture data with better accuracy.
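
    How completeness and concordance might be computed for one data element is sketched below with pandas; definitions vary, the study's exact rules are not reproduced, and the values are invented.

        import pandas as pd

        # toy chart-review data: one row per patient, EHR value vs. warehouse value
        df = pd.DataFrame({
            "ehr_sbp":       [120, 135, None, 110, 142],
            "warehouse_sbp": [120, None, None, 110, 140],
        })

        completeness_ehr = df["ehr_sbp"].notna().mean()
        completeness_dw = df["warehouse_sbp"].notna().mean()

        both = df.dropna()                       # cases present in both sources
        concordance = (both["ehr_sbp"] == both["warehouse_sbp"]).mean()

        print(f"EHR completeness: {completeness_ehr:.0%}")
        print(f"Warehouse completeness: {completeness_dw:.0%}")
        print(f"Concordance when both present: {concordance:.0%}")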

  18. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    Science.gov (United States)

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.

  19. Managing data quality in an existing medical data warehouse using business intelligence technologies.

    Science.gov (United States)

    Eaton, Scott; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti

    2008-11-06

    The Ohio State University Medical Center (OSUMC) Information Warehouse (IW) is a comprehensive data warehousing facility that provides data integration, management, mining, training, and development services to a diversity of customers across the clinical, education, and research sectors of the OSUMC. Providing accurate and complete data is a must for these purposes. In order to monitor the data quality of targeted data sets, an online scorecard has been developed to allow visualization of the critical measures of data quality in the Information Warehouse.

  20. Designing ETL Tools to Feed a Data Warehouse Based on Electronic Healthcare Record Infrastructure.

    Science.gov (United States)

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    The aim of this paper is to propose a methodology to design Extract, Transform and Load (ETL) tools in a clinical data warehouse architecture based on the Electronic Healthcare Record (EHR). This approach takes advantage of the use of this infrastructure as one of the main sources of information to feed the data warehouse, also taking into account that clinical documents produced by heterogeneous legacy systems are structured using the HL7 CDA standard. This paper describes the main activities to be performed to map the information collected in the different types of documents to the dimensional model primitives.
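
    A toy sketch of the extract-and-map step: pulling fields from a CDA-like XML document and shaping them into dimensional-model primitives. Real CDA documents use HL7 namespaces and far richer structure, and the element names here are simplified assumptions.

        import xml.etree.ElementTree as ET

        # toy CDA-like fragment (real CDA uses HL7 namespaces and templates)
        doc = """<ClinicalDocument>
          <recordTarget><patient><id>123</id><gender>F</gender></patient></recordTarget>
          <observation><code>8480-6</code><value unit="mmHg">120</value>
            <effectiveTime>20150302</effectiveTime></observation>
        </ClinicalDocument>"""

        root = ET.fromstring(doc)

        # map document content onto dimension and fact rows
        dim_patient = {"patient_id": root.findtext("recordTarget/patient/id"),
                       "gender": root.findtext("recordTarget/patient/gender")}
        obs = root.find("observation")
        fact_observation = {"patient_id": dim_patient["patient_id"],
                            "code": obs.findtext("code"),
                            "value": float(obs.findtext("value")),
                            "unit": obs.find("value").get("unit"),
                            "date_key": obs.findtext("effectiveTime")}
        print(dim_patient, fact_observation)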

  1. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    Science.gov (United States)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a cube-per-order index policy-based heuristic, the standard tabu search (TS), greedy TS, and dynamic neighbourhood-based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.
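
    A generic tabu-search skeleton of the kind such approaches build on, shown on a toy one-dimensional layout; it implements neither the paper's multi-level model with adjacency constraints nor its greedy and dynamic-neighbourhood variants.

        def tabu_search(init, neighbors, cost, iters=100, tenure=7):
            # generic tabu-search skeleton; moves are (i, j) swaps of a layout list
            current = init[:]
            best, best_cost = current[:], cost(current)
            tabu = {}                            # move -> iteration until which it is tabu
            for it in range(iters):
                candidates = []
                for move in neighbors(current):
                    if tabu.get(move, -1) >= it:
                        continue                 # skip tabu moves (no aspiration rule here)
                    nxt = current[:]
                    i, j = move
                    nxt[i], nxt[j] = nxt[j], nxt[i]
                    candidates.append((cost(nxt), move, nxt))
                if not candidates:
                    break
                c, move, nxt = min(candidates)   # best admissible neighbour
                current = nxt
                tabu[move] = it + tenure
                if c < best_cost:
                    best, best_cost = nxt[:], c
            return best, best_cost

        # toy layout: place item types 0..4 on a line; cost grows with distance of
        # high-demand items (low index) from the dock at position 0
        demand = [5, 4, 3, 2, 1]
        cost = lambda layout: sum(pos * demand[item] for pos, item in enumerate(layout))
        moves = lambda layout: [(i, j) for i in range(len(layout))
                                for j in range(i + 1, len(layout))]
        print(tabu_search([4, 2, 0, 3, 1], moves, cost))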

  2. An Advanced Data Warehouse for Integrating Large Sets of GPS Data

    DEFF Research Database (Denmark)

    Andersen, Ove; Krogh, Benjamin Bjerre; Thomsen, Christian

    2014-01-01

    GPS data recorded from driving vehicles is available from many sources and is a very good data foundation for answering traffic related queries. However, most approaches so far have not considered combining GPS data from many sources into a single data warehouse. Further, the integration of GPS data with fuel consumption data (from the so-called CAN bus in the vehicles) and weather data has not been done. In this paper, we propose a data warehouse design for handling GPS data, fuel consumption data, and weather data. The design is fully implemented in a running system using the Postgre...

  3. TA-60 Warehouse and Salvage SWPPP Rev 2 Jan 2017-Final

    Energy Technology Data Exchange (ETDEWEB)

    Burgin, Jillian Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-07

    The Stormwater Pollution Prevention Team (PPT) for the TA-60-0002 Salvage and Warehouse Area consists of operations and management personnel from the facility, Multi-Sector General Permitting (MSGP) stormwater personnel from the Environmental Compliance Programs (EPC-CP) organization, and Deployed Environmental Professionals. The EPC-CP representative is responsible for Laboratory compliance under the National Pollutant Discharge Elimination System (NPDES) permit regulations. The team members are selected on the basis of their familiarity with the activities at the facility and the potential impacts of those activities on stormwater runoff. The Warehouse and Salvage Yard are a single-shift operation; therefore, a member of the PPT is always present during operations.

  4. Implementación de un piloto del componente comercial del data warehouse de Etapatelecom

    OpenAIRE

    Vélez Iñiguez, Roberto José

    2008-01-01

    The system to be developed is framed within a Data Warehouse architecture, whose objective is to extract information from the transactional systems available at Etapatelecom for use in a decision-oriented database (Data Warehouse) for the company's Commercial Area. The procedure begins with an analysis of the requirements of the strategic users of Etapatelecom's Commercial Area, in which the business indicators (Measures) and the Dimensi...

  5. Diseño, elaboración y explotación de un data warehouse para una institución sanitaria

    OpenAIRE

    Castillo Hernández, Iván

    2014-01-01

    Design, construction, and exploitation of a data warehouse for a healthcare institution. Bachelor thesis for the Computer Science program on data warehousing.

  6. Evaluation of an Automated Keywording System.

    Science.gov (United States)

    Malone, Linda C.; And Others

    1990-01-01

    Discussion of automated indexing techniques focuses on ways to statistically document improvements in the development of an automated keywording system over time. The system developed by the Joint Chiefs of Staff to automate the storage, categorization, and retrieval of information from military exercises is explained, and performance measures are…

  7. Stereo vision based automated grasp planning

    International Nuclear Information System (INIS)

    Wilhelmsen, K.; Huber, L.; Silva, D.; Grasz, E.; Cadapan, L.

    1995-02-01

    The Department of Energy has a need for treating existing nuclear waste. Hazardous waste stored in old warehouses needs to be sorted and treated to meet environmental regulations. Lawrence Livermore National Laboratory is currently experimenting with automated manipulation of unknown objects for sorting, treating, and detailed inspection. To accomplish these tasks, three existing technologies were expanded to meet the increasing requirements. First, a binocular vision range sensor was combined with a surface modeling system to make virtual images of unknown objects. Then, using the surface model information, stable grasps of unknown-shaped objects were planned algorithmically utilizing a limited set of robotic grippers. This paper is an expansion of previous work and discusses the grasp planning algorithm.

  8. Rapid detection of enterovirus in cerebrospinal fluid by a fully-automated PCR assay is associated with improved management of aseptic meningitis in adult patients.

    Science.gov (United States)

    Giulieri, Stefano G; Chapuis-Taillard, Caroline; Manuel, Oriol; Hugli, Olivier; Pinget, Christophe; Wasserfallen, Jean-Blaise; Sahli, Roland; Jaton, Katia; Marchetti, Oscar; Meylan, Pascal

    2015-01-01

    Enterovirus (EV) is the most frequent cause of aseptic meningitis (AM). Lack of microbiological documentation results in unnecessary antimicrobial therapy and hospitalization. To assess the impact of rapid EV detection in cerebrospinal fluid (CSF) by a fully-automated PCR (GeneXpert EV assay, GXEA) on the management of AM. Observational study in adult patients with AM. Three groups were analyzed according to EV documentation in CSF: group A = no PCR or negative PCR (n=17), group B = positive real-time PCR (n = 20), and group C = positive GXEA (n = 22). Clinical, laboratory and health-care costs data were compared. Clinical characteristics were similar in the 3 groups. Median turn-around time of EV PCR decreased from 60 h (IQR (interquartile range) 44-87) in group B to 5h (IQR 4-11) in group C (p<0.0001). Median duration of antibiotics was 1 (IQR 0-6), 1 (0-1.9), and 0.5 days (single dose) in groups A, B, and C, respectively (p < 0.001). Median length of hospitalization was 4 days (2.5-7.5), 2 (1-3.7), and 0.5 (0.3-0.7), respectively (p < 0.001). Median hospitalization costs were $5458 (2676-6274) in group A, $2796 (2062-5726) in group B, and $921 (765-1230) in group C (p < 0.0001). Rapid EV detection in CSF by a fully-automated PCR improves management of AM by significantly reducing antibiotic use, hospitalization length and costs. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Evaluation of a Broad-Spectrum Partially Automated Adverse Event Surveillance System: A Potential Tool for Patient Safety Improvement in Hospitals With Limited Resources.

    Science.gov (United States)

    Saikali, Melody; Tanios, Alain; Saab, Antoine

    2017-11-21

    The aim of the study was to evaluate the sensitivity and resource efficiency of a partially automated adverse event (AE) surveillance system for routine patient safety efforts in hospitals with limited resources. Twenty-eight automated triggers from the hospital information system's clinical and administrative databases identified cases that were then filtered by exclusion criteria per trigger and then reviewed by an interdisciplinary team. The system, developed and implemented using in-house resources, was applied for 45 days of surveillance, for all hospital inpatient admissions (N = 1107). Each trigger was evaluated for its positive predictive value (PPV). Furthermore, the sensitivity of the surveillance system (overall and by AE category) was estimated relative to incidence ranges in the literature. The surveillance system identified a total of 123 AEs among 283 reviewed medical records, yielding an overall PPV of 52%. The tool showed variable levels of sensitivity across and within AE categories when compared with the literature, with a relatively low overall sensitivity estimated between 21% and 44%. Adverse events were detected in 23 of the 36 AE categories defined by an established harm classification system. Furthermore, none of the detected AEs were voluntarily reported. The surveillance system showed variable sensitivity levels across a broad range of AE categories with an acceptable PPV, overcoming certain limitations associated with other harm detection methods. The number of cases captured was substantial, and none had been previously detected or voluntarily reported. For hospitals with limited resources, this methodology provides valuable safety information from which interventions for quality improvement can be formulated.
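
    The per-trigger evaluation reduces to counting confirmed events among flagged cases; the sketch below computes positive predictive values with invented trigger names and counts.

        # per-trigger positive predictive value from review counts (invented numbers)
        triggers = {               # trigger -> (cases flagged, confirmed adverse events)
            "naloxone_given":  (12, 9),
            "return_to_OR":    (20, 7),
            "abrupt_med_stop": (45, 18),
        }
        for name, (flagged, confirmed) in triggers.items():
            print(f"{name}: PPV = {confirmed / flagged:.0%}")
        overall = (sum(c for _, c in triggers.values())
                   / sum(f for f, _ in triggers.values()))
        print(f"overall PPV = {overall:.0%}")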

  10. Development and implementation of a clinical and business intelligence system for the Florida health data warehouse.

    Science.gov (United States)

    AlHazme, Raed H; Rana, Arif M; De Lucca, Michael

    2014-01-01

    To develop and implement a Clinical and Business Intelligence (CBI) system for the Florida Health Data Warehouse (FHDW) in order to bridge the gap between Florida's healthcare stakeholders and the health data archived in FHDW. A gap analysis study was conducted to evaluate the technological divide between the relevant users and FHDW health data, which is maintained by the Broward Regional Health Planning Council (BRHPC). The study revealed a gap between the health care data and the decision makers that utilize the FHDW data. To bridge the gap, a CBI system was proposed, developed and implemented by BRHPC as a viable solution to address this issue, using the System Development Life Cycle methodology. The CBI system was successfully implemented and yielded a number of positive outcomes. In addition to significantly shortening the time required to analyze the health data for decision-making processes, the solution also provided end-users with the ability to automatically track public health parameters. A large amount of data is collected and stored by various health care organizations at the local, state, and national levels. If utilized properly, such data can go a long way in optimizing health care services. CBI systems provide health care organizations with valuable insights for improving patient care, tracking trends for medical research, and controlling costs. The CBI system has been found quite effective in bridging the gap between Florida's healthcare stakeholders and FHDW health data. Consequently, the solution has improved the planning and coordination of health care services for the state of Florida.

  11. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  12. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by computer-based sample workflow control (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed in cooperation with Roche Diagnostics (Barcelona, Spain) for this purpose. This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: workload increase (64%) with reduction in the cost per test (43%), significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, drastic reduction of preanalytical errors (from 11.7 to 0.4% of the tubes) and better total response time for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment, while achieving a significant headcount reduction (15% in our lab).

  13. Closing the Loop: Automated Data-Driven Cognitive Model Discoveries Lead to Improved Instruction and Learning Gains

    Science.gov (United States)

    Liu, Ran; Koedinger, Kenneth R.

    2017-01-01

    As the use of educational technology becomes more ubiquitous, an enormous amount of learning process data is being produced. Educational data mining seeks to analyze and model these data, with the ultimate goal of improving learning outcomes. The most firmly grounded and rigorous evaluation of an educational data mining discovery is whether it…

  14. Lean automation development : applying lean principles to the automation development process

    OpenAIRE

    Granlund, Anna; Wiktorsson, Magnus; Grahn, Sten; Friedler, Niklas

    2014-01-01

    A broad empirical study indicates that automation development shows potential for improvement. In the paper, 13 lean product development principles are contrasted with the automation development process, and it is suggested why and how these principles can facilitate, support and improve the automation development process. The paper summarises what characterises a lean automation development process and what consequences it entails. Main differences compared to current pr...

  15. Automated error-tolerant macromolecular structure determination from multidimensional nuclear Overhauser enhancement spectra and chemical shift assignments: improved robustness and performance of the PASD algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kuszewski, John J.; Thottungal, Robin Augustine [National Institutes of Health, Imaging Sciences Laboratory, Center for Information Technology (United States); Clore, G. Marius [National Institutes of Health, Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases (United States)], E-mail: mariusc@mail.nih.gov; Schwieters, Charles D. [National Institutes of Health, Imaging Sciences Laboratory, Center for Information Technology (United States)], E-mail: Charles.Schwieters@nih.gov

    2008-08-15

    We report substantial improvements to the previously introduced automated NOE assignment and structure determination protocol known as PASD (Kuszewski et al. (2004) J Am Chem Soc 126:6258-6273). The improved protocol includes extensive analysis of input spectral data to create a low-resolution contact map of residues expected to be close in space. This map is used to obtain reasonable initial guesses of NOE assignment likelihoods, which are refined during subsequent structure calculations. Information in the contact map about which residues are predicted to not be close in space is applied via conservative repulsive distance restraints which are used in early phases of the structure calculations. In comparison with the previous protocol, the new protocol requires significantly less computation time. We show results of running the new PASD protocol on six proteins and demonstrate that useful assignment and structural information is extracted on proteins of more than 220 residues. We show that useful assignment information can be obtained even in the case in which a unique structure cannot be determined.

  16. Classification of Automated Search Traffic

    Science.gov (United States)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site's rank, augmenting online games, or possibly to maliciously alter click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, and behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
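
    A minimal sketch of the classification step on features in the spirit of the paper's physical-model features (query rate, click behaviour, query diversity); the feature set, data, and single logistic-regression classifier are illustrative assumptions, not the paper's actual classifiers.

        from sklearn.linear_model import LogisticRegression

        # toy sessions: [queries per minute, click rate, distinct query fraction]
        X = [[0.5, 0.6, 0.3], [0.8, 0.5, 0.4], [30.0, 0.0, 0.9],
             [0.3, 0.7, 0.2], [45.0, 0.02, 0.95], [25.0, 0.01, 0.88]]
        y = [0, 0, 1, 0, 1, 1]          # 0 = human, 1 = automated

        clf = LogisticRegression().fit(X, y)
        print(clf.predict([[0.6, 0.55, 0.35], [40.0, 0.0, 0.9]]))  # -> [0 1]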

  17. DOD Major Automated Information Systems: Improvements Can Be Made in Reporting Critical Changes and Clarifying Leadership Responsibility

    Science.gov (United States)

    2016-03-01

    Report to Congressional Committees, March 2016, GAO-16-336, United States Government Accountability Office. DOD MAJOR AUTOMATED INFORMATION SYSTEMS: Improvements Can Be Made in Reporting Critical Changes and Clarifying Leadership Responsibility. Why GAO Did This Study: The National... milestone decisions for MAIS programs. AT&L can delegate decision authority for MAIS programs to a component head, who may further delegate the authority.

  18. Contrast-enhanced magnetic resonance angiography in carotid artery disease: does automated image registration improve image quality?

    International Nuclear Information System (INIS)

    Menke, Jan; Larsen, Joerg

    2009-01-01

    Contrast-enhanced magnetic resonance angiography (MRA) is a noninvasive imaging alternative to digital subtraction angiography (DSA) for patients with carotid artery disease. In DSA, image quality can be improved by shifting the mask image if the patient has moved during angiography. This study investigated whether such image registration may also help to improve the image quality of carotid MRA. Data from 370 carotid MRA examinations of patients likely to have carotid artery disease were prospectively collected. The standard non-registered MRAs were compared with automatically registered MRAs (linear, affine and warp registration) using three image quality parameters: the vessel detection probability (VDP) in maximum intensity projection (MIP) images, the contrast-to-noise ratio (CNR) in MIP images, and the contrast-to-noise ratio in three-dimensional image volumes. A body shift of less than 1 mm occurred in 96.2% of cases. Analysis of variance revealed no significant influence of image registration and body shift on image quality (p > 0.05). In conclusion, standard contrast-enhanced carotid MRA usually requires no image registration to improve image quality and is generally robust against any naturally occurring body shift. (orig.)
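
    For reference, a minimal sketch of the contrast-to-noise ratio used as an image quality parameter above; the mask-based region selection is illustrative and is not the study's measurement protocol.

        # CNR = (mean vessel signal - mean background signal) / background noise
        import numpy as np

        def cnr(image, vessel_mask, background_mask):
            signal = image[vessel_mask].mean()
            background = image[background_mask].mean()
            noise = image[background_mask].std()
            return (signal - background) / noise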

  19. Protocol for a national blood transfusion data warehouse from donor to recipient

    Science.gov (United States)

    van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W

    2016-01-01

    Introduction Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Study design and methods Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Applications Expected scientific contributions are illustrated for 4 applications: determine risk factors, predict blood use, benchmark blood use and optimise process efficiency. For each application, examples of research questions and planned analyses are given. Conclusions The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, recipient studies on blood usage and benchmarking and donor–recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. PMID:27491665

  20. Choice of optimal replacement of equipment in the warehouse of the company

    Science.gov (United States)

    Baeva, Silvia; Tomov, Kiril

    2016-12-01

    The aim of this paper is to find an optimal replacement schedule for equipment in the warehouse of the company. Real data for the machines are used and processed with appropriate software. The problem is modelled mathematically as a shortest-path problem from graph theory, and the solution is obtained by Bellman's principle.
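
    A minimal sketch of this classical formulation, assuming a cost function is available: nodes are years, an edge (i, j) carries the cost of buying in year i and keeping the machine until replacement in year j, and the cheapest plan is the shortest path from year 0 to year n, found by Bellman's principle of dynamic programming.

        def optimal_replacement(n_years, cost):
            """cost(i, j): purchase + operating - salvage for keeping a machine from year i to j."""
            best = [0.0] + [float("inf")] * n_years   # best[j]: cheapest cost to reach year j
            pred = [0] * (n_years + 1)
            for j in range(1, n_years + 1):
                for i in range(j):
                    c = best[i] + cost(i, j)
                    if c < best[j]:
                        best[j], pred[j] = c, i
            plan, j = [], n_years
            while j > 0:                              # recover the replacement years
                plan.append((pred[j], j))
                j = pred[j]
            return best[n_years], plan[::-1]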

  1. An order batching algorithm for wave picking in a parallel-aisle warehouse

    NARCIS (Netherlands)

    Gademann, A.J.R.M.; Berg, van den J.P.; Hoff, van der H.H.

    2001-01-01

    In this paper we address the problem of batching orders in a parallel-aisle warehouse, with the objective to minimize the maximum lead time of any of the batches. This is a typical objective for a wave picking operation. Many heuristics have been suggested to solve order batching problems. We

  2. Using Fuzzy Linguistic Representations to Provide Explanatory Semantics for Data Warehouses

    NARCIS (Netherlands)

    Feng, L.; Dillon, Tharam S.

    A data warehouse integrates large amounts of extracted and summarized data from multiple sources for direct querying and analysis. While it provides decision makers with easy access to such historical and aggregate data, the real meaning of the data has been ignored. For example, "whether a total

  3. 19 CFR 19.35 - Establishment of duty-free stores (Class 9 warehouses).

    Science.gov (United States)

    2010-04-01

    ... SECURITY; DEPARTMENT OF THE TREASURY CUSTOMS WAREHOUSES, CONTAINER STATIONS AND CONTROL OF MERCHANDISE... and/or internal revenue taxes (where applicable) have not been paid. Except insofar as the provisions... paragraph (b) of this section means an area in close proximity to an actual exit for departing from the...

  4. Data Warehouse for Professional Skills Required on the IT Labor Market

    Directory of Open Access Journals (Sweden)

    Cristian GEORGESCU

    2012-11-01

    Full Text Available This paper presents research on how well the professional level of informatics graduates matches the specific requirements of the IT labor market. It uses data warehouse techniques and models to allow a comparative analysis between the competencies supplied and the skills demanded on the IT labor market.

  5. Microsoft Enterprise Consortium: A Resource for Teaching Data Warehouse, Business Intelligence and Database Management Systems

    Science.gov (United States)

    Kreie, Jennifer; Hashemi, Shohreh

    2012-01-01

    Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…

  6. [Establishment of data warehouse of needling and moxibustion literature based on data mining].

    Science.gov (United States)

    Wang, Jian-Ling; Li, Ren-Ling; Jia, Chun-Sheng

    2012-02-01

    In order to explore the specificity of efficacy and the valuable rules of clinical application of needling and moxibustion methods in a large quantity of information from the literature, a data warehouse needs to be established. On the basis of the original databases of red-hot needle therapy and hydro-acupuncture therapy, and the newly established databases of acupoint catgut embedding therapy, acupoint application therapy, etc., and in accordance with the characteristics of different types of needling-moxibustion literature information, databases on different subjects were established first. These subject databases constitute a general "literature data warehouse on needling-moxibustion methods" composed of multiple subjects and multiple dimensions, so as to discover useful regularities about clinical treatment and trials collected in the literature by using data mining techniques. In the present paper, the authors introduce the design of the data warehouse, the determination of subjects, the establishment of subject relations, the application of the administration platform, and the application of data. This data warehouse will provide a standard data representation mode, enlarge data attributes and create extensive data links among literature information in the network, and may bring considerable convenience and benefits in clinical decision making and scientific research on needling-moxibustion techniques.

  7. Locating spare part warehouses using the concept of gradual coverage - A case study

    DEFF Research Database (Denmark)

    Hansen, Klaus Reinholdt Nyhuus; Grunow, Martin

    2009-01-01

    for MAN Diesel SE is presented, where gradual coverage has been used for locating warehouses for spare parts. In particular it is described, how coverage decay functions are found, which identifies customers’ reaction to the offered ’speed of delivery’ and ’total order cost’. With these functions, demand...

  8. Determining The Optimal Order Picking Batch Size In Single Aisle Warehouses

    NARCIS (Netherlands)

    T. Le-Duc (Tho); M.B.M. de Koster (René)

    2002-01-01

    This work aims at investigating the influence of picking batch size on the average time in system of orders in a one-aisle warehouse, under the assumption that order arrivals follow a Poisson process and items are uniformly distributed over the aisle's length. We model this problem as an

  9. Calculation of the dose received at different points in the warehouse of uranium oxide enriched to 4%

    International Nuclear Information System (INIS)

    Alonso V, G.

    1990-06-01

    In order to verify that the dose received both inside and outside the warehouse of uranium dioxide enriched to 4% does not represent a risk to personnel, the warehouse is modelled and the corresponding calculations are made for the extreme case of dose at contact. (Author)

  10. Protocol for a national blood transfusion data warehouse from donor to recipient.

    Science.gov (United States)

    van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W

    2016-08-04

    Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Expected scientific contributions are illustrated for 4 applications: determine risk factors, predict blood use, benchmark blood use and optimise process efficiency. For each application, examples of research questions and planned analyses are given. The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, recipient studies on blood usage and benchmarking and donor-recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion.
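
    A hedged sketch of the donor-to-recipient linkage described above, using pandas; the column names are invented for the example and are not the DTD project's actual schema.

        import pandas as pd

        # Blood bank side: donor and product attributes (illustrative schema).
        products = pd.DataFrame({
            "product_id":   [101, 102],
            "donor_age":    [34, 52],
            "blood_group":  ["O+", "A-"],
            "storage_days": [12, 30],
        })

        # Hospital EHR side: transfusions administered to recipients.
        transfusions = pd.DataFrame({
            "product_id": [101, 102],
            "patient_id": ["P1", "P2"],
            "diagnosis":  ["anaemia", "surgery"],
        })

        # Link donor/product data to recipient data via the product identifier.
        linked = transfusions.merge(products, on="product_id", how="left")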

  11. BI Reporting, Data Warehouse Systems, and Beyond. CDS Spotlight Report. Research Bulletin

    Science.gov (United States)

    Lang, Leah; Pirani, Judith A.

    2014-01-01

    This Spotlight focuses on data from the 2013 Core Data Service [CDS] to better understand how higher education institutions approach business intelligence (BI) reporting and data warehouse systems (see the Sidebar for definitions). Information provided for this Spotlight was derived from Module 8 of CDS, which contains several questions regarding…

  12. 76 FR 13973 - United States Warehouse Act; Processed Agricultural Products Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ... security of goods in the care and custody of the licensee. The personnel conducting the examinations will..., Warehouse Operations Program Manager, FSA, United States Department of Agriculture, Mail Stop 0553, 1400... continuing compliance with the standards of approval and operation. FSA will conduct examinations of licensed...

  13. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical and unc...

  14. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2005-01-01

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical and unc...

  15. Estimation of warehouse throughput in freight transport demand model for the Netherlands

    NARCIS (Netherlands)

    Davydenko, I.; Tavasszy, L.

    2013-01-01

    This paper presents an extension of the classical four-step freight modeling framework with a logistics chain model. Modeling logistics at the regional level establishes a link between trade flow and transport flow, allows the warehouse and distribution center locations and throughput volumes to be

  16. Managing warehouse efficiency and worker discomfort through enhanced storage assignment decisions

    NARCIS (Netherlands)

    Larco, José Antonio; De Koster, René; Roodbergen, Kees Jan; Dul, Jan

    2017-01-01

    Humans are at the heart of crucial processes in warehouses. Besides the common economic goal of minimising cycle times, we therefore add in this paper the human well-being goal of minimising workers' discomfort in the context of order picking. We propose a methodology for identifying the most

  17. 78 FR 77662 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-12-24

    ...,500 square feet and would include a 360,000 square feet active bulk warehouse and a 5,500 square feet... parking lot (approximately 295,000 square feet) and new laydown area (approximately 240,000 square feet.... The laydown area would be constructed in an unimproved, irregularly shaped open lot that is...

  18. Warehouse location and freight attraction in the greater El Paso region.

    Science.gov (United States)

    2013-12-01

    This project analyzes the current and future warehouse and distribution center locations in the El Paso-Juarez region on the U.S.-Mexico border. This research has developed a comprehensive database to aid in the decision support process for ide...

  19. Application of genetic algorithms for load management in refrigerated warehouses with wind power penetration

    DEFF Research Database (Denmark)

    Zong, Yi; Cronin, Tom; Gehrke, Oliver

    2009-01-01

    Wind energy is produced at random times, whereas the energy consumption pattern shows distinct demand peaks during day-time and low levels during the night. The use of a refrigerated warehouse as a giant battery for wind energy is a new possibility that is being studied for wind energy integratio...

  20. 19 CFR 19.4 - CBP and proprietor responsibility and supervision over warehouses.

    Science.gov (United States)

    2010-04-01

    ... or their authorized agent. (4) Records maintenance—(i) Maintenance. The proprietor shall: (A... proprietor shall maintain the warehouse facility in a safe and sanitary condition and establish procedures... seals. (7) Storage conditions. Merchandise in the bonded area shall be stored in a safe and sanitary...

  1. Combining Data Warehouse and Data Mining Techniques for Web Log Analysis

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach; Jespersen, Søren; Thorhauge, Jesper

    2008-01-01

    a number of approaches that combine data warehousing and data mining techniques in order to analyze Web logs. After introducing the well-known click and session data warehouse (DW) schemas, the chapter presents the subsession schema, which allows fast queries on sequences...

  2. Modelling of Data Warehouse on Food Distribution Center and Reserves in the Ministry of Agriculture

    Directory of Open Access Journals (Sweden)

    Edi Purnomo Putra

    2015-09-01

    Full Text Available The purpose of this study is to carry out database planning that supports a prototype data warehouse model in the Ministry of Agriculture, especially for the Distribution Center and Reserves in the fields of distribution, reserves and prices. With the data warehouse prototype, data analysis and decision making by top management will be easier and more accurate. The research methods used were data collection and design methods. The data warehouse was designed using Kimball's nine-step methodology. The database was designed using an ERD (Entity Relationship Diagram) and an activity diagram. The data used for the analysis were obtained from an interview with the head of Distribution, Reserve and Food Price. The results obtained through the analysis were incorporated into the data warehouse prototype, which has been designed to support decision-making. To conclude, the data warehouse prototype facilitates data analysis, the searching of historical data and decision-making by top management.
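
    As a hedged illustration of the kind of dimensional design Kimball's methodology leads to, the sketch below creates a simple star schema for the distribution/reserve/price domain; all table and column names are assumptions for the example, not the ministry's actual design.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE dim_time      (time_id INTEGER PRIMARY KEY, year INT, month INT);
        CREATE TABLE dim_region    (region_id INTEGER PRIMARY KEY, province TEXT);
        CREATE TABLE dim_commodity (commodity_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE fact_distribution (          -- grain: commodity x region x month
            time_id      INTEGER REFERENCES dim_time(time_id),
            region_id    INTEGER REFERENCES dim_region(region_id),
            commodity_id INTEGER REFERENCES dim_commodity(commodity_id),
            volume_tons  REAL,
            reserve_tons REAL,
            price        REAL
        );
        """)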

  3. CRISP methodology for data warehouse implementation

    Directory of Open Access Journals (Sweden)

    Octavio José Salcedo

    2010-06-01

    Full Text Available Currently, the generation of clear, concise reports based on accurate corporate information is a fundamental element of decision making. From this need the data warehouse arises as an essential resource for the process, founded primarily on the OLAP concept and on EIS and DSS tools for producing reports. The processes carried out to construct the data warehouse mainly involve the extraction, processing and handling of information, followed by the definition of the metadata that in turn define the data warehouse as an integrated system. The trend in BI points towards the dissemination of information both to management and to all who need it, across the associated dimensions and levels, in order to obtain consolidated or detailed reports that facilitate the synthesis of the business processes that directly affect decision making, which is ultimately the purpose of the data warehouse. Carrying out this implementation requires an appropriate methodology, so that the project is designed according to international standards, which are the foundation for obtaining excellent results in project implementation.

  4. Data Warehouse Development Using a Data-Driven Approach to Support Human Resources Management

    Directory of Open Access Journals (Sweden)

    Mujiono Mujiono

    2016-01-01

    Full Text Available The basis of bureaucratic reform is the reform of human resources management. One supporting factor is the development of an employee database. Supporting human resources management requires, among other things, a data warehouse and business intelligence tools. The data warehouse is a concept of integrated, reliable data storage that provides support for all data analysis needs. In this study a data warehouse was developed using the data-driven approach, with source data coming from SIMPEG, SAPK and electronic presence records. The data warehouse was designed using the nine-step methodology and Unified Modeling Language (UML) notation. Extract-transform-load (ETL) is performed using Pentaho Data Integration by applying transformation maps. Furthermore, to help human resources management, the system is built to perform online analytical processing (OLAP) to deliver web-based information. This study produced a BI application development framework with a Model-View-Controller (MVC) architecture, in which the OLAP operations are built using dynamic query generation, PivotTable, and HighChart to present information about PNS, CPNS, Retirement, Kenpa and Presence.

  5. ThaleMine: A Warehouse for Arabidopsis Data Integration and Discovery.

    Science.gov (United States)

    Krishnakumar, Vivek; Contrino, Sergio; Cheng, Chia-Yi; Belyaeva, Irina; Ferlanti, Erik S; Miller, Jason R; Vaughn, Matthew W; Micklem, Gos; Town, Christopher D; Chan, Agnes P

    2017-01-01

    ThaleMine (https://apps.araport.org/thalemine/) is a comprehensive data warehouse that integrates a wide array of genomic information of the model plant Arabidopsis thaliana. The data collection currently includes the latest structural and functional annotation from the Araport11 update, the Col-0 genome sequence, RNA-seq and array expression, co-expression, protein interactions, homologs, pathways, publications, alleles, germplasm and phenotypes. The data are collected from a wide variety of public resources. Users can browse gene-specific data through Gene Report pages, identify and create gene lists based on experiments or indexed keywords, and run GO enrichment analysis to investigate the biological significance of selected gene sets. Developed by the Arabidopsis Information Portal project (Araport, https://www.araport.org/), ThaleMine uses the InterMine software framework, which builds well-structured data, and provides powerful data query and analysis functionality. The warehoused data can be accessed by users via graphical interfaces, as well as programmatically via web-services. Here we describe recent developments in ThaleMine including new features and extensions, and discuss future improvements. InterMine has been broadly adopted by the model organism research community including nematode, rat, mouse, zebrafish, budding yeast, the modENCODE project, as well as being used for human data. ThaleMine is the first InterMine developed for a plant model. As additional new plant InterMines are developed by the legume and other plant research communities, the potential of cross-organism integrative data analysis will be further enabled.
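
    As a hedged sketch of the programmatic access mentioned above, the InterMine Python client supports queries of the following shape against the ThaleMine service; the gene symbol and view fields are illustrative choices, not a prescribed ThaleMine query.

        from intermine.webservice import Service

        service = Service("https://apps.araport.org/thalemine/service")
        query = service.new_query("Gene")
        query.add_view("primaryIdentifier", "symbol", "briefDescription")
        query.add_constraint("symbol", "=", "FLC")   # example gene symbol
        for row in query.rows():
            print(row["primaryIdentifier"], row["symbol"])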

  6. Using Electronic Health Records to Build an Ophthalmologic Data Warehouse and Visualize Patients' Data.

    Science.gov (United States)

    Kortüm, Karsten U; Müller, Michael; Kern, Christoph; Babenko, Alexander; Mayer, Wolfgang J; Kampik, Anselm; Kreutzer, Thomas C; Priglinger, Siegfried; Hirneiss, Christoph

    2017-06-01

    To develop a near-real-time data warehouse (DW) in an academic ophthalmologic center to enable scientific use of the increasing digital data from electronic medical records (EMR) and diagnostic devices. Database development. Specific macular clinic user interfaces within the institutional hospital information system were created. Orders for imaging modalities were sent by an EMR-linked picture-archiving and communications system to the respective devices. All data of 325 767 patients since 2002 were gathered in a DW running on an SQL database. A data discovery tool was developed. An exemplary search for patients with age-related macular degeneration, performed cataract surgery, and at least 10 intravitreal (excluding bevacizumab) injections was conducted. Data related to those patients (3 142 204 diagnoses [including diagnoses from other fields of medicine], 720 721 procedures [eg, surgery], and 45 416 intravitreal injections) were stored, including 81 274 optical coherence tomography measurements. A web-based browsing tool was successfully developed for data visualization and for filtering data by several linked criteria, for example, a minimum number of intravitreal injections of a specific drug and a visual acuity interval. The exemplary search identified 450 patients with 516 eyes meeting all criteria. A DW was successfully implemented in an ophthalmologic academic environment to support and facilitate research by using increasing EMR and measurement data. The identification of eligible patients for studies was simplified. In the future, software for decision support can be developed based on the DW and its structured data. The improved classification of diseases and semiautomatic validation of data via machine learning are warranted.
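
    A hedged sketch of the exemplary search above as a warehouse query (an AMD diagnosis, performed cataract surgery, and at least 10 intravitreal injections excluding bevacizumab); the table and column names are invented for illustration and are not the DW's actual schema.

        # Illustrative cohort query held as a Python string for execution
        # against the warehouse's SQL database.
        COHORT_SQL = """
        SELECT d.patient_id
        FROM   diagnoses  d
        JOIN   procedures p ON p.patient_id = d.patient_id
        JOIN   injections i ON i.patient_id = d.patient_id
        WHERE  d.icd10 LIKE 'H35.3%'          -- age-related macular degeneration
          AND  p.code = 'cataract_surgery'
          AND  i.drug <> 'bevacizumab'
        GROUP BY d.patient_id
        HAVING COUNT(i.injection_id) >= 10;
        """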

  7. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space

    Directory of Open Access Journals (Sweden)

    Bisheng Yang

    2016-12-01

    Full Text Available Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application as this method can balance between the user's requirements and economic costs. The previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. The scale space is a sound theory for multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, the proposed method generates a scale space by iteratively using the improved morphological reconstruction with the increase of scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the proposed method robustly extracts building points by using features based on the TRG. Finally, the proposed method reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts the buildings with details (e.g., door eaves and roof furniture) and illustrates good performance in distinguishing buildings from vegetation or other objects, while automatically reconstructing building LoDs from the finest building points.
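
    In the spirit of the morphological scale space described above (though not the paper's exact operator), a minimal sketch on a rasterised height image: opening-by-reconstruction with a growing structuring element yields progressively coarser levels while preserving object shapes. The scale sequence is an illustrative assumption.

        from skimage.morphology import erosion, disk, reconstruction

        def morphological_scale_space(height, scales=(1, 2, 4, 8)):
            levels = []
            current = height.astype(float)
            for s in scales:
                seed = erosion(current, disk(s))          # suppress detail at scale s
                current = reconstruction(seed, current)   # rebuild retained objects
                levels.append(current.copy())
            return levels   # increasingly coarse representations of the scene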

  8. Automating Finance

    Science.gov (United States)

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  9. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  10. Innovative IT system for material management in warehouses

    Science.gov (United States)

    Papoutsidakis, Michael; Sigala, Maria; Simeonaki, Eleni; Tseles, Dimitrios

    2017-09-01

    Nowadays, through the rapid development of technology in all areas, there is a constant effort to introduce technological solutions into everyday life, with emphasis on materials management information systems (Enterprise Resource Planning). During the last few years the variety of these systems has increased, for small businesses and SMEs as well as for larger companies and industries. In the field of material management and main management operations with automated processes, ERP applications have only recently begun to make their appearance. This paper presents the development of a system that automates the material storage process through specific roles that manage materials using an integrated barcode scanner. In addition, we analyse and describe the operation and modules of other systems that have been created for the same use. The aim of this project is to create an innovative, flexible prototype application that provides solutions at low cost and is user friendly. The expected result is that the application can be used on smart devices in an Android environment and on computers without an external barcode scanner, making the application accessible to the buyer at low cost.

  11. In-class use of clickers and clicker tests improve learning and enable instant feedback and retests via automated grading

    Science.gov (United States)

    Burnham, Nancy A.; Kadam, Snehalata V.; DeSilva, Erin

    2017-11-01

    An audience response system (‘clickers’) was gradually incorporated into introductory physics courses at Worcester Polytechnic Institute during the years 2011-14. Clickers were used in lectures, as a means of preparing for labs, and for collection of exam data and grading. Average student grades were 13.5% greater, as measured by comparing exam results with a previous year. Student acceptance of clickers was high, ranging from 66% to 95%, and grading time for exams was markedly reduced, from a full day to a few hours for approximately 150 students. The streamlined grading allowed for a second test on the same material for the students who failed the first one. These improvements have the immediate effects of engagement, learning, and efficiency, and ideally, they will also provide an environment in which more students will succeed in college and their careers.

  12. Automated grain extraction and classification by combining improved region growing segmentation and shape descriptors in electromagnetic mill classification system

    Science.gov (United States)

    Budzan, Sebastian

    2018-04-01

    In this paper, an automatic method of grain detection and classification is presented. As input, it uses a single digital image obtained from the milling process of copper ore with a high-quality digital camera. The grinding process is extremely energy- and cost-consuming, so the granularity evaluation should be performed with high efficiency and low time consumption. The method proposed in this paper is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with a proposed adaptive thresholding based on the calculation of the Relative Standard Deviation (RSD). In the next step, the detection results are improved using information about the shape of the detected grains obtained from a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method has been evaluated using samples of nominal granularity, including a comparison with other methods.
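
    A hedged sketch of the first stage, assuming grains appear darker than the background: the relative standard deviation (RSD = standard deviation / mean) drives an adaptive threshold, and connected-component labelling stands in for the paper's seeded region growing. The scaling factor k and the thresholding rule are assumptions made for the example.

        import numpy as np
        from scipy import ndimage

        def detect_grains(gray, k=1.0):
            rsd = gray.std() / gray.mean()              # relative standard deviation
            threshold = gray.mean() * (1.0 - k * rsd)   # assumed adaptive rule
            mask = gray < threshold                     # grains darker than background
            labels, n_grains = ndimage.label(mask)      # connected grain regions
            return labels, n_grains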

  13. Selecting automation for the clinical chemistry laboratory.

    Science.gov (United States)

    Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr

    2007-07-01

    Laboratory automation proposes to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. To outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and we give conclusions pertaining to this experience. Including the formation of a committee, workflow analysis, submitting a request for proposal, site visits, and making a final decision, the process of selecting chemistry automation took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.

  14. Cone beam CT with zonal filters for simultaneous dose reduction, improved target contrast and automated set-up in radiotherapy

    International Nuclear Information System (INIS)

    Moore, C J; Marchant, T E; Amer, A M

    2006-01-01

    Cone beam CT (CBCT) using a zonal filter is introduced. The aims are reduced concomitant imaging dose to the patient, simultaneous control of body scatter for improved image quality in the tumour target zone and preserved set-up detail for radiotherapy. Aluminium transmission diaphragms added to the CBCT x-ray tube of the Elekta Synergy™ linear accelerator produced an unattenuated beam for a central 'target zone' and a partially attenuated beam for an outer 'set-up zone'. Imaging doses and contrast noise ratios (CNR) were measured in a test phantom for transmission diaphragms 12 and 24 mm thick, for 5 and 10 cm long target zones. The effect on automatic registration of zonal CBCT to conventional CT was assessed relative to full-field and lead-collimated images of an anthropomorphic phantom. Doses along the axis of rotation were reduced by up to 50% in both target and set-up zones, and weighted dose (two thirds surface dose plus one third central dose) was reduced by 10-20% for a 10 cm long target zone. CNR increased by up to 15% in zonally filtered CBCT images compared to full-field images. Automatic image registration remained as robust as that with full-field images and was superior to CBCT coned down using lead-collimation. Zonal CBCT significantly reduces imaging dose and is expected to benefit radiotherapy through improved target contrast, required to assess target coverage, and wide-field edge detail, needed for robust automatic measurement of patient set-up error.
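
    The weighted dose quoted above combines surface and central dose as two thirds plus one third; a trivial worked form of that definition (variable names hypothetical):

        def weighted_dose(surface_dose, central_dose):
            return (2.0 / 3.0) * surface_dose + (1.0 / 3.0) * central_dose

        # e.g. fractional reduction between full-field and zonal scans:
        # 1 - weighted_dose(s_zonal, c_zonal) / weighted_dose(s_full, c_full)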

  15. xMSanalyzer: automated pipeline for improved feature detection and downstream analysis of large-scale, non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Uppal Karan

    2013-01-01

    Full Text Available Abstract Background: Detection of low-abundance metabolites is important for de novo mapping of metabolic pathways related to diet, microbiome or environmental exposures. Multiple algorithms are available to extract m/z features from liquid chromatography-mass spectral data in a conservative manner, which tends to preclude detection of low-abundance chemicals and chemicals found in small subsets of samples. The present study provides software to enhance such algorithms for feature detection, quality assessment, and annotation. Results: xMSanalyzer is a set of utilities for automated processing of metabolomics data. The utilities can be classified into four main modules to: (1) improve feature detection for replicate analyses by systematic re-extraction with multiple parameter settings and data merger to optimize the balance between sensitivity and reliability, (2) evaluate sample quality and feature consistency, (3) detect feature overlap between datasets, and (4) characterize high-resolution m/z matches to small molecule metabolites and biological pathways using multiple chemical databases. The package was tested with plasma samples and shown to more than double the number of features extracted while improving quantitative reliability of detection. MS/MS analysis of a random subset of peaks that were exclusively detected using xMSanalyzer confirmed that the optimization scheme improves detection of real metabolites. Conclusions: xMSanalyzer is a package of utilities for data extraction, quality control assessment, detection of overlapping and unique metabolites in multiple datasets, and batch annotation of metabolites. The program was designed to integrate with existing packages such as apLCMS and XCMS, but the framework can also be used to enhance data extraction for other LC/MS data software.
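
    A hedged sketch of the re-extraction-and-merge idea in module (1): features detected under two parameter settings are united, collapsing m/z values that agree within a tolerance. The 10 ppm tolerance is an assumed value for illustration, not xMSanalyzer's default.

        import numpy as np

        def merge_features(mz_a, mz_b, ppm_tol=10.0):
            """Union of two m/z feature lists, collapsing matches within ppm_tol."""
            merged = list(mz_a)
            for mz in mz_b:
                if not merged:
                    merged.append(mz)
                    continue
                ppm = np.abs((np.array(merged) - mz) / mz) * 1e6
                if ppm.min() > ppm_tol:   # no existing feature within tolerance
                    merged.append(mz)
            return sorted(merged)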

  16. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    Science.gov (United States)

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of data, 2) the database model and architecture, describing how data will be presented to end users and how new data are integrated, 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.

  17. An Improved Dispatching Method (a-HPDB for Automated Material Handling System with Active Rolling Belt for 450 mm Wafer Fabrication

    Directory of Open Access Journals (Sweden)

    Chia-Nan Wang

    2017-07-01

    Full Text Available The semiconductor industry is facing the transition from 300 mm to 450 mm wafer fabrication. Due to the increased size and weight, 450 mm wafers will pose unprecedented challenges for semiconductor wafer fabrication. To better handle and transport 450 mm wafers, an advanced Automated Material Handling System (AMHS) is definitely required. Though conveyor-based AMHS is expected to be suitable for 450 mm wafer fabrication, it still faces two main problems: traffic jams and lot prioritization. To address these two problems, in this research we have proposed an improved dispatching method, termed the Heuristic Preemptive Dispatching method using the Activated Roller Belt (a-HPDB). We have developed effective rules for the a-HPDB based on the Activated Roller Belt (ARB). In addition, we have conducted experiments to investigate its effectiveness. Compared with HPDB and R-HPD, two dispatching rules proposed in previous studies, our experimental results showed that the a-HPDB had better performance in terms of average lot delivery time (ALDT). For hot lots and normal lots, the a-HPDB had advantages of 4.14% and 8.92% over HPDB, and advantages of 4.89% and 8.52% over R-HPD, respectively.

  18. The Microsoft Data Warehouse Toolkit With SQL Server 2008 R2 and the Microsoft Business Intelligence Toolset

    CERN Document Server

    Mundy, Joy; Kimball, Ralph

    2011-01-01

    Best practices and invaluable advice from world-renowned data warehouse experts. In this book, leading data warehouse experts from the Kimball Group share best practices for using the upcoming "Business Intelligence release" of SQL Server, referred to as SQL Server 2008 R2. In this new edition, the authors explain how SQL Server 2008 R2 provides a collection of powerful new tools that extend the power of its BI toolset to Excel and SharePoint users and they show how to use SQL Server to build a successful data warehouse that supports the business intelligence requirements that are common to most

  19. Criticality calculation of the nuclear material warehouse of the ININ; Calculo de criticidad del almacen del material nuclear del ININ

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, T.; Angeles, A.; Flores C, J., E-mail: teodoro.garcia@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2013-10-15

    In this work, the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ) were determined for both normal conditions and the event of an accident. The warehouse contains standard fuel elements LEU 8.5/20, a control rod with a follower of standard LEU 8.5/20 fuel, fuel elements LEU 30/20, and the SUR-100 reactor fuel. To check the subcritical state of the warehouse, the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)

  20. Digital coal mine integrated automation system based on Controlnet

    Energy Technology Data Exchange (ETDEWEB)

    Jin-yun Chen; Shen Zhang; Wei-ran Zuo [China University of Mining and Technology, Xuzhou (China). School of Chemical Engineering and Technology

    2007-06-15

    A three-layer model for digital communication in a mine is proposed. Two basic platforms are discussed: a uniform transmission network and a uniform data warehouse. An actual, ControlNet based, transmission network platform suitable for the Jining No.3 coal mine in China is presented. This network is an information superhighway intended to integrate all existing and new automation subsystems. Its standard interface can be used with future subsystems. The network, data structure and management decision-making all employ this uniform hardware and software. This effectively avoids the problems of system and information islands seen in traditional mine-automation systems. The construction of the network provides a stable foundation for digital communication in the Jining No.3 coal mine. 9 refs., 5 figs.