WorldWideScience

Sample records for automated database mediation

  1. Toward designing for trust in database automation

    Energy Technology Data Exchange (ETDEWEB)

    Duez, P. P.; Jamieson, G. A. [Cognitive Engineering Laboratory, Univ. of Toronto, 5 King's College Rd., Toronto, Ont. M5S 3G8 (Canada)]

    2006-07-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process.
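
    A compact way to picture what the paper means by describing automation at several levels of abstraction is to lay the levels out as data. The sketch below is our own illustration, not the authors' analysis: the five conventional AH levels applied to a hypothetical automated index-tuning tool for a relational database.

```python
# Illustrative only: the five conventional AH levels applied to a hypothetical
# automated index-tuning tool (not the paper's actual model).
from dataclasses import dataclass

@dataclass
class AHLevel:
    name: str
    items: list  # descriptions of the automation at this level

hierarchy = [
    AHLevel("Functional Purpose", ["Keep query response times within service targets"]),
    AHLevel("Abstract Function", ["Trade storage and write cost against read latency"]),
    AHLevel("Generalized Function", ["Monitor workload; propose and apply index changes"]),
    AHLevel("Physical Function", ["Query-plan analyzer; index builder; statistics collector"]),
    AHLevel("Physical Form", ["B-tree indexes on disk; background maintenance threads"]),
]

# Purpose-, process-, and performance-related information map onto the more
# abstract and more concrete ends of this hierarchy respectively.
for level in hierarchy:
    print(f"{level.name}: {'; '.join(level.items)}")
```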

  2. Toward designing for trust in database automation

    International Nuclear Information System (INIS)

    Duez, P. P.; Jamieson, G. A.

    2006-01-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process.

  3. Automation bias: a systematic review of frequency, effect mediators, and mitigators.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2012-01-01

    Automation bias (AB)--the tendency to over-rely on automation--has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject-specific and free-text terms around the themes of automation, human-automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, experience with decision support systems (DSS), and task-specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which put pressure on cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.

  4. Automating Relational Database Design for Microcomputer Users.

    Science.gov (United States)

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  5. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1983-01-01

    Database management techniques are applied to automating the setup of operating parameters of a heavy-ion accelerator used in nuclear physics experiments. Data files consist of ion-beam attributes, the interconnection assignments of the numerous power supplies and magnetic elements that steer the ions' path through the system, the data values that represent the electrical currents supplied by the power supplies, as well as the positions of motors and status of mechanical actuators. The database is relational and permits searching on ranges of any subset of the ion-beam attributes. A file selected from the database is used by the control software to replicate the ion beam conditions by adjusting the physical elements in a continuous manner.
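
    The search behaviour described (matching on ranges of any subset of beam attributes) maps naturally onto a relational query with optional predicates. A minimal sketch follows; the table and attribute names are invented, not the ORIC schema.

```python
# Sketch only: a relational store of beam setups searchable on ranges of any
# subset of beam attributes. Table and attribute names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE beam_file (
    id INTEGER PRIMARY KEY, ion TEXT, energy_mev REAL, charge_state INTEGER)""")
conn.executemany(
    "INSERT INTO beam_file (ion, energy_mev, charge_state) VALUES (?, ?, ?)",
    [("O-16", 100.0, 5), ("Ne-20", 120.5, 6), ("O-16", 140.0, 6)])

def search(ion=None, energy_range=None, charge_state=None):
    """Match stored beam files on whichever attributes the caller constrains."""
    clauses, params = [], []
    if ion is not None:
        clauses.append("ion = ?"); params.append(ion)
    if energy_range is not None:
        clauses.append("energy_mev BETWEEN ? AND ?"); params.extend(energy_range)
    if charge_state is not None:
        clauses.append("charge_state = ?"); params.append(charge_state)
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return conn.execute("SELECT * FROM beam_file" + where, params).fetchall()

# The selected row would then be handed to control software that slews power
# supplies and motors to the stored values.
print(search(ion="O-16", energy_range=(90, 130)))
```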

  6. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    Science.gov (United States)

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
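
    As a rough illustration of the download, extract-features, store-locally cycle described here (the endpoint URL, JSON field names, and schema below are placeholders, not the actual ABI API or the authors' code):

```python
# Sketch of the download -> extract features -> store locally cycle. The URL,
# JSON fields, and schema are placeholders, not the ABI API or the authors' code.
import json
import sqlite3
import urllib.request

db = sqlite3.connect("cell_types_local.db")
db.execute("""CREATE TABLE IF NOT EXISTS cell
              (id INTEGER PRIMARY KEY, raw_json TEXT, mean_spike_rate REAL)""")

def fetch_cell(cell_id: int) -> dict:
    url = f"https://example.org/api/cells/{cell_id}"  # placeholder endpoint
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def mean_spike_rate(record: dict) -> float:
    sweeps = record.get("sweeps", [])
    spikes = sum(s.get("spike_count", 0) for s in sweeps)
    seconds = sum(s.get("duration_s", 0.0) for s in sweeps)
    return spikes / seconds if seconds else 0.0

def ingest(cell_id: int) -> None:
    record = fetch_cell(cell_id)
    db.execute("INSERT OR REPLACE INTO cell VALUES (?, ?, ?)",
               (cell_id, json.dumps(record), mean_spike_rate(record)))
    db.commit()
```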

  7. Automated chromatographic laccase-mediator-system activity assay.

    Science.gov (United States)

    Anders, Nico; Schelden, Maximilian; Roth, Simon; Spiess, Antje C

    2017-08-01

    To study the interaction of laccases, mediators, and substrates in laccase-mediator systems (LMS), an on-line measurement was developed using high performance anion exchange chromatography equipped with a CarboPac™ PA 100 column coupled to pulsed amperometric detection (HPAEC-PAD). The developed method was optimized for overall chromatographic run time (45 to 120 min) and automated sample drawing. As an example, the Trametes versicolor laccase induced oxidation of 1-(3,4-dimethoxyphenyl)-2-(2-methoxyphenoxy)-1,3-dihydroxypropane (adlerol) using 1-hydroxybenzotriazole (HBT) as mediator was measured and analyzed on-line. Since the Au electrode of the PAD detects only hydroxyl group containing substances with a limit of detection in the milligram/liter range, not all products are measurable. Therefore, this method was applied for the quantification of adlerol, and, based on adlerol conversion, for the quantification of the LMS activity at a specific T. versicolor laccase/HBT ratio. The automated chromatographic activity assay allowed for a defined reaction start of all laccase-mediator-system reaction mixtures, and the LMS reaction progress was automatically monitored for 48 h. The automation enabled integrated monitoring overnight and over the weekend, and minimized manual errors such as those arising from pipetting of solutions. The activity of the LMS based on adlerol consumption was determined to be 0.47 U/mg protein for a laccase/mediator ratio of 1.75 U laccase/g HBT. In the future, the automated method will allow for a fast screening of combinations of laccases, mediators, and substrates which are efficient for lignin modification. In particular, it allows for a fast and easy quantification of the oxidizing activity of an LMS on a lignin-related substrate which is not covered by typical colorimetric laccase assays.
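
    The final activity figure can be reproduced from a monitored substrate time course with simple initial-rate arithmetic (1 U = 1 µmol/min). The numbers below are invented for illustration and merely land near the reported value:

```python
# Invented numbers, real arithmetic: estimate LMS activity (1 U = 1 umol/min)
# from the HPAEC-PAD adlerol time course by a linear initial-rate fit.
import numpy as np

t_min = np.array([0.0, 60.0, 120.0, 240.0, 480.0])   # sampling times
adlerol_umol = np.array([10.0, 9.4, 8.8, 7.7, 5.6])  # substrate remaining

slope = np.polyfit(t_min, adlerol_umol, 1)[0]        # umol/min, negative
rate_u = -slope                                      # consumption rate in U
protein_mg = 0.02                                    # laccase protein in assay
print(f"LMS activity ~ {rate_u / protein_mg:.2f} U/mg protein")  # ~0.46 here
```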

  8. 78 FR 2363 - Notification of Deletion of a System of Records; Automated Trust Funds Database

    Science.gov (United States)

    2013-01-11

    ... [Docket No. APHIS-2012-0041] Notification of Deletion of a System of Records; Automated Trust Funds Database AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice of deletion of a system... establishing the Automated Trust Funds (ATF) database system of records. The Federal Information Security...

  9. Functional integration of automated system databases by means of artificial intelligence

    Science.gov (United States)

    Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2017-08-01

    The paper presents approaches for functional integration of automated system databases by means of artificial intelligence. The peculiarities of exploiting databases in systems that use a fuzzy implementation of functions were analyzed. Requirements for the normalization of such databases were defined. The question of data equivalence under conditions of uncertainty, and of collisions arising when databases are functionally integrated, is considered, and a model to reveal their possible occurrence is devised. The paper also presents a method for evaluating the normalization of the integrated databases.

  10. Architecture of Automated Database Tuning Using SGA Parameters

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2012-05-01

    Full Text Available Business data always grows, from kilobytes to megabytes, gigabytes, terabytes, petabytes, and beyond. There is no way to avoid this increasing rate of data while a business is still running. Because of this, database tuning becomes a critical part of an information system. Tuning a database in a cost-effective manner is a growing challenge. The total cost of ownership (TCO) of information technology needs to be significantly reduced by minimizing people costs. In fact, mistakes in the operation and administration of information systems are the single most common reason for system outage and unacceptable performance [3]. One way of addressing the challenge of total cost of ownership is to make information systems more self-managing. A particularly difficult piece of the ambitious vision of making database systems self-managing is the automation of database performance tuning. In this paper, we explain the progress made thus far on this important problem. Specifically, we propose an architecture and algorithm for it.
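
    As a concrete flavour of SGA-level self-tuning (a sketch only: it assumes the cx_Oracle driver, SYSDBA access, and an spfile-managed instance; v$sga_target_advice is a standard Oracle advisory view, but the decision policy below is invented rather than the paper's algorithm):

```python
# Sketch only: one self-tuning step driven by Oracle's SGA advisory view.
# Assumes cx_Oracle, SYSDBA access, and an spfile-managed instance.
import cx_Oracle

conn = cx_Oracle.connect("sys/password@//dbhost:1521/ORCL",
                         mode=cx_Oracle.SYSDBA)
cur = conn.cursor()
cur.execute("""SELECT sga_size, sga_size_factor, estd_db_time
               FROM v$sga_target_advice ORDER BY sga_size""")
rows = cur.fetchall()

current_time = next(t for _, factor, t in rows if factor == 1)  # current SGA
best_size_mb, _, best_time = min(rows, key=lambda r: r[2])

if best_time < 0.9 * current_time:  # >10% predicted DB-time saving (invented rule)
    cur.execute(f"ALTER SYSTEM SET sga_target = {best_size_mb}M SCOPE = BOTH")
```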

  11. AutoLabDB: a substantial open source database schema to support a high-throughput automated laboratory.

    Science.gov (United States)

    Sparkes, Andrew; Clare, Amanda

    2012-05-15

    Modern automated laboratories need substantial data management solutions to both store and make accessible the details of the experiments they perform. To be useful, a modern Laboratory Information Management System (LIMS) should be flexible and easily extensible to support evolving laboratory requirements, and should be based on the solid foundations of a robust, well-designed database. We have developed such a database schema to support an automated laboratory that performs experiments in systems biology and high-throughput screening. We describe the design of the database schema (AutoLabDB), detailing the main features and describing why we believe it will be relevant to LIMS manufacturers or custom builders. This database has been developed to support two large automated Robot Scientist systems over the last 5 years, where it has been used as the basis of an LIMS that helps to manage both the laboratory and all the experiment data produced.
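
    The flavour of such a schema can be given in a few lines. The tables below are illustrative only (the published AutoLabDB schema is far larger): experiments own plates, plates own wells, and measurements attach to wells.

```python
# Illustrative LIMS core only, not the published AutoLabDB schema.
import sqlite3

SCHEMA = """
CREATE TABLE experiment  (id INTEGER PRIMARY KEY, title TEXT, started_at TEXT);
CREATE TABLE plate       (id INTEGER PRIMARY KEY, barcode TEXT UNIQUE,
                          experiment_id INTEGER REFERENCES experiment(id));
CREATE TABLE well        (id INTEGER PRIMARY KEY, row TEXT, col INTEGER,
                          plate_id INTEGER REFERENCES plate(id));
CREATE TABLE measurement (id INTEGER PRIMARY KEY, quantity TEXT, value REAL,
                          unit TEXT, well_id INTEGER REFERENCES well(id));
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```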

  12. Selecting a Relational Database Management System for Library Automation Systems.

    Science.gov (United States)

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  13. Automated knowledge base development from CAD/CAE databases

    Science.gov (United States)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  14. Current Comparative Table (CCT) automates customized searches of dynamic biological databases.

    Science.gov (United States)

    Landsteiner, Benjamin R; Olson, Michael R; Rutherford, Robert

    2005-07-01

    The Current Comparative Table (CCT) software program enables working biologists to automate customized bioinformatics searches, typically of remote sequence or HMM (hidden Markov model) databases. CCT currently supports BLAST, hmmpfam and other programs useful for gene and ortholog identification. The software is web based, has a BioPerl core and can be used remotely via a browser or locally on Mac OS X or Linux machines. CCT is particularly useful to scientists who study large sets of molecules in today's evolving information landscape because it color-codes all result files by age and highlights even tiny changes in sequence or annotation. By empowering non-bioinformaticians to automate custom searches and examine current results in context at a glance, CCT allows a remote database submission in the evening to influence the next morning's bench experiment. A demonstration of CCT is available at http://orb.public.stolaf.edu/CCTdemo and the open source software is freely available from http://sourceforge.net/projects/orb-cct.
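
    The core loop (rerun a remote search, then surface even tiny changes against the cached result) can be sketched briefly. CCT itself has a BioPerl core; the Python below uses Biopython's remote BLAST call with a placeholder sequence, and the caching and diffing around it are our own illustration:

```python
# Sketch: rerun a remote BLAST search and surface any change against the cached
# result. NCBIWWW.qblast is Biopython's remote-BLAST call; the placeholder
# sequence and the caching/diffing are illustrative (CCT itself is BioPerl).
import difflib
import pathlib
from Bio.Blast import NCBIWWW

result = NCBIWWW.qblast("blastp", "nr", "MKVLAAGIVSK").read()  # placeholder seq

cache = pathlib.Path("last_result.xml")
old = cache.read_text() if cache.exists() else ""
for line in difflib.unified_diff(old.splitlines(), result.splitlines(),
                                 lineterm=""):
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
        print(line)  # even a one-line annotation change shows up here
cache.write_text(result)
```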

  15. Automated testing of arrhythmia monitors using annotated databases.

    Science.gov (United States)

    Elghazzawi, Z; Murray, W; Porter, M; Ezekiel, E; Goodall, M; Staats, S; Geheb, F

    1992-01-01

    Arrhythmia-algorithm performance is typically tested using the AHA and MIT/BIH databases. The tools for this test are simulation software programs. While these simulations provide rapid results, they neglect hardware and software effects in the monitor. To provide a more accurate measure of performance in the actual monitor, a system has been developed for automated arrhythmia testing. The testing system incorporates an IBM-compatible personal computer, a digital-to-analog converter, an RS232 board, a patient-simulator interface to the monitor, and a multi-tasking software package for data conversion and communication with the monitor. This system "plays" patient data files into the monitor and saves beat classifications in detection files. Tests were performed using the MIT/BIH and AHA databases. Statistics were generated by comparing the detection files with the annotation files. These statistics were marginally different from those that resulted from the simulation. Differences were then examined. As expected, the differences were related to monitor hardware effects.
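
    The statistics step amounts to matching detections to annotations beat by beat within a tolerance window. A sketch, with an invented two-column file format (sample index, beat label) and the customary 150 ms matching window:

```python
# Sketch of the statistics step: match detections to annotations beat by beat.
# The file format is invented; 54 samples ~ 150 ms at the MIT/BIH 360 Hz rate.
def load_beats(path):
    with open(path) as f:
        return sorted(int(line.split()[0]) for line in f if line.strip())

def compare(ann, det, tol=54):
    matched = sum(1 for a in ann if any(abs(a - d) <= tol for d in det))
    sensitivity = matched / len(ann) if ann else 0.0
    ppv = matched / len(det) if det else 0.0
    return sensitivity, ppv

ann = load_beats("record100.ann")  # database annotation file
det = load_beats("record100.det")  # detections captured from the monitor
print("Se = %.3f, +P = %.3f" % compare(ann, det))
```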

  16. An automated database case definition for serious bleeding related to oral anticoagulant use.

    Science.gov (United States)

    Cunningham, Andrew; Stein, C Michael; Chung, Cecilia P; Daugherty, James R; Smalley, Walter E; Ray, Wayne A

    2011-06-01

    Bleeding complications are a serious adverse effect of medications that prevent abnormal blood clotting. To facilitate epidemiologic investigations of bleeding complications, we developed and validated an automated database case definition for bleeding-related hospitalizations. The case definition utilized information from an in-progress retrospective cohort study of warfarin-related bleeding in Tennessee Medicaid enrollees 30 years of age or older. It identified inpatient stays during the study period of January 1990 to December 2005 with diagnoses and/or procedures that indicated a current episode of bleeding. The definition was validated by medical record review for a sample of 236 hospitalizations. We reviewed 186 hospitalizations that had medical records with sufficient information for adjudication. Of these, 165 (89%, 95%CI: 83-92%) were clinically confirmed bleeding-related hospitalizations. An additional 19 hospitalizations (10%, 7-15%) were adjudicated as possibly bleeding-related. Of the 165 clinically confirmed bleeding-related hospitalizations, the automated database and clinical definitions had concordant anatomical sites (gastrointestinal, cerebral, genitourinary, other) for 163 (99%, 96-100%). For those hospitalizations with sufficient information to distinguish between upper/lower gastrointestinal bleeding, the concordance was 89% (76-96%) for upper gastrointestinal sites and 91% (77-97%) for lower gastrointestinal sites. A case definition for bleeding-related hospitalizations suitable for automated databases had a positive predictive value of between 89% and 99% and could distinguish specific bleeding sites. Copyright © 2011 John Wiley & Sons, Ltd.
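
    In database terms, such a case definition is essentially a diagnosis-code filter over inpatient claims. A sketch with an illustrative table layout; the two code prefixes shown (431 intracerebral hemorrhage, 578 gastrointestinal hemorrhage) are real ICD-9 families, but the study's full definition is much richer:

```python
# Sketch of a case definition as a diagnosis-code filter over inpatient claims.
# Table layout is invented; the full study code list is far more extensive.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stay    (stay_id INTEGER, person_id INTEGER, admit_date TEXT);
CREATE TABLE stay_dx (stay_id INTEGER, icd9 TEXT);
""")

BLEEDING_PREFIXES = ("431", "578")
where = " OR ".join("d.icd9 LIKE ? || '%'" for _ in BLEEDING_PREFIXES)
cases = conn.execute(f"""
    SELECT DISTINCT s.stay_id, s.person_id, s.admit_date
    FROM stay s JOIN stay_dx d ON d.stay_id = s.stay_id
    WHERE {where}""", BLEEDING_PREFIXES).fetchall()
```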

  17. Archiving, ordering and searching: search engines, algorithms, databases and deep mediatization

    DEFF Research Database (Denmark)

    Andersen, Jack

    2018-01-01

    This article argues that search engines, algorithms, and databases can be considered as a way of understanding deep mediatization (Couldry & Hepp, 2016). They are embedded in a variety of social and cultural practices and as such they change our communicative actions to be shaped by their logic. Having reviewed recent trends in mediatization research, the argument is discussed and unfolded in-between the material and social constructivist-phenomenological interpretations of mediatization. In conclusion, it is discussed how deep this form of mediatization can be taken to be.

  18. OAP- OFFICE AUTOMATION PILOT GRAPHICS DATABASE SYSTEM

    Science.gov (United States)

    Ackerson, T.

    1994-01-01

    The Office Automation Pilot (OAP) Graphics Database system offers the IBM PC user assistance in producing a wide variety of graphs and charts. OAP uses a convenient database system, called a chartbase, for creating and maintaining data associated with the charts, and twelve different graphics packages are available to the OAP user. Each of the graphics capabilities is accessed in a similar manner. The user chooses creation, revision, or chartbase/slide show maintenance options from an initial menu. The user may then enter or modify data displayed on a graphic chart. The cursor moves through the chart in a "circular" fashion to facilitate data entries and changes. Various "help" functions and on-screen instructions are available to aid the user. The user data is used to generate the graphics portion of the chart. Completed charts may be displayed in monotone or color, printed, plotted, or stored in the chartbase on the IBM PC. Once completed, the charts may be put in a vector format and plotted for color viewgraphs. The twelve graphics capabilities are divided into three groups: Forms, Structured Charts, and Block Diagrams. There are eight Forms available: 1) Bar/Line Charts, 2) Pie Charts, 3) Milestone Charts, 4) Resources Charts, 5) Earned Value Analysis Charts, 6) Progress/Effort Charts, 7) Travel/Training Charts, and 8) Trend Analysis Charts. There are three Structured Charts available: 1) Bullet Charts, 2) Organization Charts, and 3) Work Breakdown Structure (WBS) Charts. The Block Diagram available is an N x N Chart. Each graphics capability supports a chartbase. The OAP graphics database system provides the IBM PC user with an effective means of managing data which is best interpreted as a graphic display. The OAP graphics database system is written in IBM PASCAL 2.0 and assembler for interactive execution on an IBM PC or XT with at least 384K of memory, and a color graphics adapter and monitor. Printed charts require an Epson, IBM, OKIDATA, or HP Laser

  19. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Various changes happening at any level of the computing infrastructure (OS parameters and packages, kernel versions, database parameters and patches, or even schema changes) can all potentially harm production services. This presentation shows how automatic, regular testing of Oracle databases can be achieved in such an agile environment.

  20. Personal Databases: Of Filing Cabinets and Idiosyncrasy [and] Library Automation: An Overview of the Market.

    Science.gov (United States)

    Molholt, Pat; McDonald, David R.

    1989-01-01

    The first of two articles describes how a team effort by computing centers and academic libraries could aid faculty in the organization of their personal databases. The second provides an overview of the academic library automation market, identifying vendors active in the market and trends of recent years. (CLB)

  1. Automated granularity to integrate digital information: the "Antarctic Treaty Searchable Database" case study

    Directory of Open Access Journals (Sweden)

    Paul Arthur Berkman

    2006-06-01

    Full Text Available Access to information is necessary, but not sufficient in our digital era. The challenge is to objectively integrate digital resources based on user-defined objectives for the purpose of discovering information relationships that facilitate interpretations and decision making. The Antarctic Treaty Searchable Database (http://aspire.nvi.net), which is in its sixth edition, provides an example of digital integration based on the automated generation of information granules that can be dynamically combined to reveal objective relationships within and between digital information resources. This case study further demonstrates that automated granularity and dynamic integration can be accomplished simply by utilizing the inherent structure of the digital information resources. Such information integration is relevant to library and archival programs that require long-term preservation of authentic digital resources.

  2. Database-driven web interface automating gyrokinetic simulations for validation

    Science.gov (United States)

    Ernst, D. R.

    2010-11-01

    We are developing a web interface to connect plasma microturbulence simulation codes with experimental data. The website automates the preparation of gyrokinetic simulations utilizing plasma profile and magnetic equilibrium data from TRANSP analysis of experiments, read from MDSPLUS over the internet. This database-driven tool saves user sessions, allowing searches of previous simulations, which can be restored to repeat the same analysis for a new discharge. The website includes a multi-tab, multi-frame, publication-quality Java plotter, Webgraph, developed as part of this project. Input files can be uploaded as templates and edited with context-sensitive help. The website creates inputs for GS2 and GYRO using a well-tested and verified back-end, in use for several years for the GS2 code [D. R. Ernst et al., Phys. Plasmas 11(5) 2637 (2004)]. A centralized web site has the advantage that users receive bug fixes instantaneously, while avoiding the duplicated effort of local compilations. Possible extensions to the database to manage run outputs, toward prototyping for the Fusion Simulation Project, are envisioned. Much of the web development utilized support from the DoE National Undergraduate Fellowship program [e.g., A. Suarez and D. R. Ernst, http://meetings.aps.org/link/BAPS.2005.DPP.GP1.57].

  3. Automated tools for cross-referencing large databases. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Clapp, N E; Green, P L; Bell, D [and others]

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It has been concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. The ability to provide these improvements to the local office staff would require the provision of additional computers, software, and electronic data links within each of the offices and the establishment of approved methods of accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain if existing laws would allow such data transfers, and if not, what changed or new laws would be required. The benefits, in both cost and efficiency, to the state of Tennessee of having electronically-enhanced welfare system administration and control are expected to result in a rapid return of investment.

  4. Automation of plasma-process fultext bibliography databases. An on-line data-collection, data-mining and data-input system

    International Nuclear Information System (INIS)

    Suzuki, Manabu; Pichl, Lukas; Murakami, Izumi; Kato, Takako; Sasaki, Akira

    2006-01-01

    Searching for relevant data, information retrieval, data extraction and data input are time- and resource-consuming activities in most data centers. Here we develop a Linux system that automates this process for bibliography, abstract, and fulltext databases. The present system is an open-source, free-software, low-cost solution that connects the target and provider databases in cyberspace through various web publishing formats. The abstract/fulltext relevance assessment is interfaced to external software modules. (author)

  5. Managing expectations: assessment of chemistry databases generated by automated extraction of chemical structures from patents.

    Science.gov (United States)

    Senger, Stefan; Bartek, Luca; Papadatos, George; Gaulton, Anna

    2015-12-01

    First public disclosure of new chemical entities often takes place in patents, which makes them an important source of information. However, with an ever increasing number of patent applications, manual processing and curation on such a large scale becomes even more challenging. An alternative approach better suited for this large corpus of documents is the automated extraction of chemical structures. A number of patent chemistry databases generated by using the latter approach are now available but little is known that can help to manage expectations when using them. This study aims to address this by comparing two such freely available sources, SureChEMBL and IBM SIIP (IBM Strategic Intellectual Property Insight Platform), with manually curated commercial databases. When looking at the percentage of chemical structures successfully extracted from a set of patents, using SciFinder as our reference, 59 and 51 % were also found in our comparison in SureChEMBL and IBM SIIP, respectively. When performing this comparison with compounds as starting point, i.e. establishing if for a list of compounds the databases provide the links between chemical structures and patents they appear in, we obtained similar results. SureChEMBL and IBM SIIP found 62 and 59 %, respectively, of the compound-patent pairs obtained from Reaxys. In our comparison of automatically generated vs. manually curated patent chemistry databases, the former successfully provided approximately 60 % of links between chemical structure and patents. It needs to be stressed that only a very limited number of patents and compound-patent pairs were used for our comparison. Nevertheless, our results will hopefully help to manage expectations of users of patent chemistry databases of this type and provide a useful framework for more studies like ours as well as guide future developments of the workflows used for the automated extraction of chemical structures from patents. The challenges we have encountered

  6. Automation of a Beckman liquid scintillation counter for data capture and data-base management

    International Nuclear Information System (INIS)

    Neil, W.; Irwin, T.J.; Yang, J.J.

    1988-01-01

    A software package for the automation of a Beckman LS9000 liquid scintillation counter is presented. The package provides effective on-line data capture (with a Perkin Elmer 3230 32-bit minicomputer), data-base management, audit trail and archiving facilities. Key features of the package are rapid and flexible data entry, background subtraction, half-life correction, ability to queue several sample sets pending scintillation counting, and formatted report generation. A brief discussion is given on the development of customized data processing programs. (author)
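
    The half-life correction mentioned is standard decay arithmetic: the background-subtracted count rate is scaled by exp(ln 2 · t / T½) to refer it back to a reference time. A standalone version (function name and example values are ours, not the package's):

```python
# Standalone decay correction (names and example values ours): subtract
# background, then scale by exp(ln2 * t / T_half) back to the reference time.
import math

def decay_corrected_rate(cpm_measured, cpm_background,
                         half_life_min, elapsed_min):
    """Net count rate referred back to t = 0."""
    net = cpm_measured - cpm_background
    return net * math.exp(math.log(2.0) * elapsed_min / half_life_min)

# Example: an 18F sample (T_half ~ 109.8 min) counted 30 min after the
# reference time needs a ~21% correction; long-lived 3H needs essentially none.
print(decay_corrected_rate(5200.0, 180.0, half_life_min=109.8, elapsed_min=30.0))
```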

  7. Distress and worry as mediators in the relationship between psychosocial risks and upper body musculoskeletal complaints in highly automated manufacturing.

    Science.gov (United States)

    Wixted, Fiona; Shevlin, Mark; O'Sullivan, Leonard W

    2018-03-15

    As a result of changes in manufacturing including an upward trend in automation and the advent of the fourth industrial revolution, the requirement for supervisory monitoring and consequently, cognitive demand has increased in automated manufacturing. The incidence of musculoskeletal disorders has also increased in the manufacturing sector. A model was developed based on survey data to test if distress and worry mediate the relationship between psychosocial factors (job control, cognitive demand, social isolation and skill discretion), stress states and symptoms of upper body musculoskeletal disorders in highly automated manufacturing companies (n = 235). These constructs facilitated the development of a statistically significant model (RMSEA 0.057, TLI 0.924, CFI 0.935). Cognitive demand was shown to be related to higher distress in employees, and distress to a higher incidence of self-reported shoulder and lower back symptoms. The mediation model incorporating stress states (distress, worry) as mediators is a novel approach in linking psychosocial risks to musculoskeletal disorders. Practitioners' summary: With little requirement for physical work in many modern automated manufacturing workplaces, there is often minimal management focus on Work-Related Musculoskeletal Disorders (WRMSDs) as important occupational health problems. Our model provides evidence that psychosocial factors are important risk factors in symptoms of WRMSD and should be managed.

  8. MannDB – A microbial database of automated protein sequence analyses and evidence integration for protein characterization

    Directory of Open Access Journals (Sweden)

    Kuczmarski Thomas A

    2006-10-01

    Full Text Available Abstract Background MannDB was created to meet a need for rapid, comprehensive automated protein sequence analyses to support selection of proteins suitable as targets for driving the development of reagents for pathogen or protein toxin detection. Because a large number of open-source tools were needed, it was necessary to produce a software system to scale the computations for whole-proteome analysis. Thus, we built a fully automated system for executing software tools and for storage, integration, and display of automated protein sequence analysis and annotation data. Description MannDB is a relational database that organizes data resulting from fully automated, high-throughput protein-sequence analyses using open-source tools. Types of analyses provided include predictions of cleavage, chemical properties, classification, features, functional assignment, post-translational modifications, motifs, antigenicity, and secondary structure. Proteomes (lists of hypothetical and known proteins) are downloaded and parsed from Genbank and then inserted into MannDB, and annotations from SwissProt are downloaded when identifiers are found in the Genbank entry or when identical sequences are identified. Currently 36 open-source tools are run against MannDB protein sequences either on local systems or by means of batch submission to external servers. In addition, BLAST against protein entries in MvirDB, our database of microbial virulence factors, is performed. A web client browser enables viewing of computational results and downloaded annotations, and a query tool enables structured and free-text search capabilities. When available, links to external databases, including MvirDB, are provided. MannDB contains whole-proteome analyses for at least one representative organism from each category of biological threat organism listed by APHIS, CDC, HHS, NIAID, USDA, USFDA, and WHO. Conclusion MannDB comprises a large number of genomes and comprehensive protein

  9. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system, substation feeder, and customer functions, potential benefits, automation costs, planning and engineering considerations, automation trends, databases, system operation, computer modeling of system, and distribution management systems

  10. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of from 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of operating parameters required to produce an ion beam. However, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters.

  11. Developing an automated database for monitoring ultrasound- and computed tomography-guided procedure complications and diagnostic yield.

    Science.gov (United States)

    Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M

    2014-04-01

    Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.

  12. Toward automating the database design process

    International Nuclear Information System (INIS)

    Asprey, P.L.

    1979-01-01

    One organization's approach to designing complex, interrelated databases is described. The problems encountered and the techniques developed are discussed. A set of software tools to aid the designer and to produce an initial database design directly is presented. 5 figures

  13. The Co-regulation Data Harvester: Automating gene annotation starting from a transcriptome database

    Science.gov (United States)

    Tsypin, Lev M.; Turkewitz, Aaron P.

    Identifying co-regulated genes provides a useful approach for defining pathway-specific machinery in an organism. To be efficient, this approach relies on thorough genome annotation, a process much slower than genome sequencing per se. Tetrahymena thermophila, a unicellular eukaryote, has been a useful model organism and has a fully sequenced but sparsely annotated genome. One important resource for studying this organism has been an online transcriptomic database. We have developed an automated approach to gene annotation in the context of transcriptome data in T. thermophila, called the Co-regulation Data Harvester (CDH). Beginning with a gene of interest, the CDH identifies co-regulated genes by accessing the Tetrahymena transcriptome database. It then identifies their closely related genes (orthologs) in other organisms by using reciprocal BLAST searches. Finally, it collates the annotations of those orthologs' functions, which provides the user with information to help predict the cellular role of the initial query. The CDH, which is freely available, represents a powerful new tool for analyzing cell biological pathways in Tetrahymena. Moreover, to the extent that genes and pathways are conserved between organisms, the inferences obtained via the CDH should be relevant, and can be explored, in many other systems.

  14. Behavior change through automated e-mails: mediation analysis of self-help strategy use for depressive symptoms.

    Science.gov (United States)

    Morgan, Amy J; Mackinnon, Andrew J; Jorm, Anthony F

    2013-02-01

    To evaluate whether automated e-mails promoting effective self-help strategies for depressive symptoms were effective in changing self-help behavior, and whether this improved depression outcomes. 568 adults with sub-threshold depression participated in a randomized controlled trial and provided complete data. A series of 12 e-mails promoting the use of evidence-based self-help strategies was compared with e-mails providing non-directive depression information. Depression symptoms were assessed with the Patient Health Questionnaire depression scale (PHQ-9) and use of self-help strategies was assessed at baseline and post-intervention. We hypothesized that those receiving the self-help e-mails would increase their use of evidence-based self-help and this would be associated with improvements in depression. Mediation analyses were conducted using a non-parametric bootstrapping procedure. Total use of the self-help strategies promoted in the e-mails significantly mediated the effect of the intervention on depressive symptoms (B = -0.75, SE = 0.16, 95% CI: -1.06 to -0.48). The direct effect of the intervention on depressive symptoms was much smaller and not significant when the mediation path was included. The majority of the individual strategies also had a significant indirect effect on depressive symptoms. In adults with sub-threshold depression, automated e-mails based on behavior change principles can successfully increase use of self-help strategies, leading to a reduction in depressive symptoms. Copyright © 2012 Elsevier Ltd. All rights reserved.
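
    The non-parametric bootstrap of an indirect effect can be sketched generically: estimate the a (intervention → mediator) and b (mediator → outcome, controlling for intervention) paths, take their product, and bootstrap the whole procedure. The data and the simple OLS estimator below are simulated stand-ins, not the study's analysis:

```python
# Generic bootstrap of an indirect (mediated) effect; simulated data and a
# simple OLS product-of-coefficients estimator, not the study's code.
import numpy as np

rng = np.random.default_rng(0)
n = 568
x = rng.integers(0, 2, n).astype(float)   # intervention vs control e-mails
m = 0.5 * x + rng.normal(size=n)          # self-help strategy use
y = -0.75 * m + rng.normal(size=n)        # depressive symptoms (PHQ-9-like)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # x -> mediator path
    design = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # m -> y, controlling x
    return a * b

boot = [indirect_effect(x[i], m[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(x, m, y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```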

  15. Automated Liquibase Generator And ValidatorALGV

    Directory of Open Access Journals (Sweden)

    Manik Jain

    2015-08-01

    Full Text Available Abstract This paper presents an automation tool, ALGV (Automated Liquibase Generator and Validator), for the automated generation and verification of Liquibase scripts. Liquibase is one of the most efficient ways of applying and persisting changes to a database schema. Since its invention by Nathan Voxland [1], it has become the de facto standard for database change management. The advantages of using Liquibase scripts over traditional SQL queries range from version control to reusing the same scripts over multiple database platforms. Irrespective of its advantages, manual creation of Liquibase scripts takes a lot of effort and is sometimes error-prone. ALGV helps to reduce the time-consuming script generation, manual typing effort, possible error occurrence, and manual verification process and time by 75%. Automating the Liquibase generation process also helps to remove the burden of recollecting the specific tags to be used for a particular change. Moreover, developers can concentrate on the business logic and business data rather than wasting their efforts in writing files.
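
    The generation step ALGV automates can be illustrated by emitting a Liquibase changeSet from a simple column specification. The changeSet/createTable/column tags are standard Liquibase XML; the spec format and the generator function are our invention, not ALGV's implementation:

```python
# Sketch: emit a Liquibase changeSet for a table from a simple column spec.
# The XML tags are standard Liquibase; this generator is illustrative only.
import xml.etree.ElementTree as ET

def changeset_xml(table, columns, author="algv", cs_id="1"):
    root = ET.Element("databaseChangeLog")
    cs = ET.SubElement(root, "changeSet", id=cs_id, author=author)
    tbl = ET.SubElement(cs, "createTable", tableName=table)
    for name, sql_type in columns:
        ET.SubElement(tbl, "column", name=name, type=sql_type)
    return ET.tostring(root, encoding="unicode")

print(changeset_xml("customer", [("id", "BIGINT"), ("name", "VARCHAR(255)")]))
```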

  16. Term Relevance Feedback and Mediated Database Searching: Implications for Information Retrieval Practice and Systems Design.

    Science.gov (United States)

    Spink, Amanda

    1995-01-01

    This study uses the human approach to examine the sources and effectiveness of search terms selected during 40 mediated interactive database searches and focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. (Author/JKP)

  17. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    Science.gov (United States)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development

  18. Redefining the Practice of Peer Review Through Intelligent Automation Part 1: Creation of a Standardized Methodology and Referenceable Database.

    Science.gov (United States)

    Reiner, Bruce I

    2017-10-01

    Conventional peer review practice is compromised by a number of well-documented biases, which in turn limit standard of care analysis, which is fundamental to determination of medical malpractice. In addition to these intrinsic biases, other existing deficiencies exist in current peer review including the lack of standardization, objectivity, retrospective practice, and automation. An alternative model to address these deficiencies would be one which is completely blinded to the peer reviewer, requires independent reporting from both parties, utilizes automated data mining techniques for neutral and objective report analysis, and provides data reconciliation for resolution of finding-specific report differences. If properly implemented, this peer review model could result in creation of a standardized referenceable peer review database which could further assist in customizable education, technology refinement, and implementation of real-time context and user-specific decision support.

  19. Automated Groundwater Screening

    International Nuclear Information System (INIS)

    Taylor, Glenn A.; Collard, Leonard B.

    2005-01-01

    The Automated Intruder Analysis has been extended to include an Automated Ground Water Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides which have significant daughters. The extension has the promise of reducing the number of radionuclides which must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening, a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing rigorous calculation of the screening factor, thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application.

  20. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  1. Automating data citation: the eagle-i experience.

    Science.gov (United States)

    Alawini, Abdussalam; Chen, Leshang; Davidson, Susan B; Da Silva, Natan Portilho; Silvello, Gianmaria

    2017-06-01

    Data citation is of growing concern for owners of curated databases, who wish to give credit to the contributors and curators responsible for portions of the dataset and enable the data retrieved by a query to be later examined. While several databases specify how data should be cited, they leave it to users to manually construct the citations and do not generate them automatically. We report our experiences in automating data citation for an RDF dataset called eagle-i, and discuss how to generalize this to a citation framework that can work across a variety of different types of databases (e.g. relational, XML, and RDF). We also describe how a database administrator would use this framework to automate citation for a particular dataset.

  2. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, and distributed database systems.

  3. Positive predictive value of automated database records for diabetic ketoacidosis (DKA) in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study

    Directory of Open Access Journals (Sweden)

    Bobo William V

    2011-11-01

    Full Text Available Abstract Background Diabetic ketoacidosis (DKA) is a potentially life-threatening complication of treatment with some atypical antipsychotic drugs in children and youth. Because drug-associated DKA is rare, large automated health outcomes databases may be a valuable data source for conducting pharmacoepidemiologic studies of DKA associated with exposure to individual antipsychotic drugs. However, no validated computer case definition of DKA exists. We sought to assess the positive predictive value (PPV) of a computer case definition to detect incident cases of DKA, using automated records of Tennessee Medicaid as the data source and medical record confirmation as a "gold standard." Methods The computer case definition of DKA was developed from a retrospective cohort study of antipsychotic-related type 2 diabetes mellitus (1996-2007) in Tennessee Medicaid enrollees, aged 6-24 years. Thirty potential cases with any DKA diagnosis (ICD-9 250.1, ICD-10 E1x.1) were identified from inpatient encounter claims. Medical records were reviewed to determine if they met the clinical definition of DKA. Results Of 30 potential cases, 27 (90%) were successfully abstracted and adjudicated. Of these, 24 cases were confirmed by medical record review (PPV 88.9%, 95% CI 71.9 to 96.1%). Three non-confirmed cases presented acutely with severe hyperglycemia, but had no evidence of acidosis. Conclusions Diabetic ketoacidosis in children and youth can be identified in a computerized Medicaid database using our case definition, which could be useful for automated database studies in which drug-associated DKA is the outcome of interest.
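
    The reported interval is easy to check: 24 confirmed of 27 adjudicated cases gives the PPV, and a Wilson score interval reproduces the published 71.9-96.1% almost exactly:

```python
# PPV and a Wilson score interval from the paper's counts (24 confirmed / 27
# adjudicated); the function is generic textbook statistics.
import math

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

k, n = 24, 27
lo, hi = wilson_ci(k, n)
print(f"PPV = {k / n:.1%}, 95% CI {lo:.1%} to {hi:.1%}")  # 88.9%, 71.9% to 96.1%
```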

  4. Pollution Prevention Successes Database (P2SDb) user guide

    International Nuclear Information System (INIS)

    1995-07-01

    When Pollution Prevention Opportunity Assessments (P2OAs) were launched at the Hanford Site during the summer of 1994, the first comment received from those using them expressed the desire for a method to report assessments electronically. As a temporary measure, macros were developed for use on word processing systems, but a more formal database was obviously needed. Additionally, increased DOE and Washington state reporting requirements for pollution prevention suggested that a database system would streamline the reporting process. The Pollution Prevention Group of Westinghouse Hanford Company (WHC) contracted with the Data Automation Engineering Department from ICF Kaiser Hanford Company (ICFKH) to develop the system. The scope was to develop a database that will track P2OAs conducted by the facilities and contractors at the Hanford Site. It will also track pollution prevention accomplishments that are not the result of P2OAs and document a portion of the Process Waste Assessments conducted in the past. To accommodate the above criteria, yet complete the system in a timely manner, the Pollution Prevention Successes Database (P2SDb) is being implemented in three phases. The first phase will automate the worksheets to provide both input and output of the data associated with the worksheets. The second phase will automate standard summary reports and ad hoc reports. The third phase will provide automated searching of the database to facilitate the sharing of pollution prevention experiences among various users. This User's Guide addresses only the Phase 1 system.

  5. Toward an interactive article: integrating journals and biological databases

    Directory of Open Access Journals (Sweden)

    Marygold Steven J

    2011-05-01

    Full Text Available Abstract Background Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise from one term being used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is a crucial goal to making text markup a successful venture. Results We have established a journal article mark-up pipeline that links GENETICS journal articles and the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand, ensuring an accurate link. This pipeline has been extended to other databases, namely Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases. Conclusions Our semi-automated pipeline hyperlinks articles published in GENETICS to
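
    The lexicon-first design described above can be illustrated with a minimal Python sketch (toy lexicon, invented identifiers, and an illustrative URL form, not the production pipeline): unambiguous entities are hyperlinked automatically, while terms mapping to more than one database object are queued for manual QC:

      # Sketch: lexicon-based entity markup with an ambiguity queue for manual QC.
      lexicon = {
          "unc-22": ["WBGene_toy_0"],                # unambiguous gene (toy ID)
          "dpy": ["WBGene_toy_1", "WBGene_toy_2"],   # ambiguous term (toy IDs)
      }

      def mark_up(text):
          linked, needs_qc = [], []
          for token in text.split():
              ids = lexicon.get(token.strip(".,;"))
              if ids is None:
                  linked.append(token)               # not a known entity
              elif len(ids) == 1:                    # auto-link; URL form is illustrative
                  linked.append(f'<a href="https://wormbase.org/get?id={ids[0]}">{token}</a>')
              else:
                  needs_qc.append((token, ids))      # curator resolves by hand
                  linked.append(token)
          return " ".join(linked), needs_qc

      html, queue = mark_up("Mutations in unc-22 suppress the dpy phenotype.")
      print(queue)   # -> [('dpy', ['WBGene_toy_1', 'WBGene_toy_2'])]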

  6. Managing Automation: A Process, Not a Project.

    Science.gov (United States)

    Hoffmann, Ellen

    1988-01-01

    Discussion of issues in management of library automation includes: (1) hardware, including systems growth and contracts; (2) software changes, vendor relations, local systems, and microcomputer software; (3) item and authority databases; (4) automation and library staff, organizational structure, and managing change; and (5) environmental issues,…

  7. "First generation" automated DNA sequencing technology.

    Science.gov (United States)

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.

  8. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction

  9. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  10. Relational Database Design in Information Science Education.

    Science.gov (United States)

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  11. RADARS, a bioinformatics solution that automates proteome mass spectral analysis, optimises protein identification, and archives data in a relational database.

    Science.gov (United States)

    Field, Helen I; Fenyö, David; Beavis, Ronald C

    2002-01-01

    RADARS, a rapid, automated, data archiving and retrieval software system for high-throughput proteomic mass spectral data processing and storage, is described. The majority of mass spectrometer data files are compatible with RADARS, for consistent processing. The system automatically takes unprocessed data files, identifies proteins via in silico database searching, then stores the processed data and search results in a relational database suitable for customized reporting. The system is robust, used in 24/7 operation, accessible to multiple users of an intranet through a web browser, may be monitored by Virtual Private Network, and is secure. RADARS is scalable for use on one or many computers, and is suited to multiple processor systems. It can incorporate any local database in FASTA format, and can search protein and DNA databases online. A key feature is a suite of visualisation tools (many available gratis), allowing facile manipulation of spectra, by-hand annotation, reanalysis, and access to all procedures. We also describe the use of Sonar MS/MS, a novel, rapid search engine requiring 40 MB RAM per process for searches against a genomic or EST database translated in all six reading frames. RADARS reduces the cost of analysis by its efficient algorithms: Sonar MS/MS can identify proteins without accurate knowledge of the parent ion mass and without protein tags. Statistical scoring methods provide close-to-expert accuracy and bring robust data analysis to the non-expert user.
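
    A highly simplified sketch of the acquire-search-archive loop described above, with SQLite standing in for the relational database and a stub in place of the real search engine (the file extension and scores are invented):

      # Sketch: pick up unprocessed data files, run an identification search,
      # and archive the results in a relational database.
      import sqlite3, pathlib

      db = sqlite3.connect("radars_sketch.db")
      db.execute("""CREATE TABLE IF NOT EXISTS results
                    (spectrum_file TEXT, protein_id TEXT, score REAL)""")

      def search_database(spectrum_path):
          # Stand-in for the in silico search (e.g. against a FASTA database);
          # a real engine would return ranked protein identifications.
          return [("P12345_hypothetical", 42.0)]

      for path in pathlib.Path("incoming_spectra").glob("*.mzdata"):
          for protein, score in search_database(path):
              db.execute("INSERT INTO results VALUES (?, ?, ?)",
                         (str(path), protein, score))
      db.commit()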

  12. Automated reporting system using structured data entry: Application to prostate US

    International Nuclear Information System (INIS)

    Kim, Bo Hyun; Paik, Chul Hwa; Lee, Won Yong

    2001-01-01

    To improve efficiency in producing and searching the radiological reports of prostate US in daily practice and clinical research, we developed an automated reporting system using structured data entry. The report database was established with appropriate fields, and a structured data entry form for prostate US was created. The rules for automated transformation of the entered data into a text report were decided, and two programmers coded the programs according to these rules. We have successfully developed an automated reporting system for prostate US using structured data entry. Patients' demographic information, the order information, and the contents of the main body and conclusion of the radiological report were included as individual fields in the database. The report contents were input by selecting the corresponding fields in a structured data entry form, which was then transformed into a text report. The automated reporting system using structured data entry is an efficient way to establish a radiological report database and could be successfully applied to prostate US. If its utility can be extended to other US examinations, it will become a useful tool for both radiological reporting and database management.
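
    The core transformation, structured fields in and narrative text out, can be sketched in a few lines of Python (field names and sentence templates are invented for illustration):

      # Sketch: structured data entry fields rendered into a text report.
      entry = {
          "volume_ml": 38.2,
          "echogenicity": "heterogeneous",
          "nodule_present": True,
          "nodule_zone": "left peripheral zone",
      }

      def render_report(e):
          lines = [f"The prostate measures {e['volume_ml']:.1f} mL "
                   f"with {e['echogenicity']} echotexture."]
          if e["nodule_present"]:
              lines.append(f"A hypoechoic nodule is seen in the {e['nodule_zone']}.")
          lines.append("Conclusion: " + ("nodule, correlate with PSA."
                       if e["nodule_present"] else "no focal lesion."))
          return " ".join(lines)

      print(render_report(entry))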

  13. CAR2 - Czech Database of Car Speech

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1999-12-01

    Full Text Available This paper presents a new Czech-language two-channel (stereo) speech database recorded in a car environment. The created database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. A noise analysis of the car background environment was performed.

  14. CAR2 - Czech Database of Car Speech

    OpenAIRE

    Pollak, P.; Vopicka, J.; Hanzl, V.; Sovka, Pavel

    1999-01-01

    This paper presents a new Czech-language two-channel (stereo) speech database recorded in a car environment. The created database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. A noise analysis of the car background environment was performed.

  15. Methodological problems and ways of creation of the aircraft equipment test automated management system

    Directory of Open Access Journals (Sweden)

    Vladimir Michailovich Vetoshkin

    2017-01-01

    Full Text Available The development of new and the modernization of existing aviation equipment specimens of different classes are accompanied and completed by the complex process of ground and flight tests. This phase of the aviation equipment life cycle is implemented by means of organizational and technical systems - running centers. The latter include various proving grounds, measuring complexes and systems, aircraft, ships, security and flight control offices, information processing laboratories and many other elements. The results of a system analysis of the development challenges of automated control systems for aviation equipment test operations are presented. The automated control systems are in essence an automated data bank. The key role of the development of a flight test automated control system in the process of creating automated control systems for aviation equipment test operations is substantiated. The way of integrating mobile modular measuring complexes and the need for national methodologies and technological standards for database system design concepts are grounded. The database system, as a central element in this scheme, provides collection, storing and updating of values of the elements described above at the pace and required frequency of the controlled object state monitoring. It is the database system that provides the supervisory unit with actual data corresponding to specific moments of time, concerning the state processes and assessments of the progress and results of flight experiments, creating the necessary environment for managing and testing aviation equipment as a whole. The basis for development of subsystems of automated control systems for aviation equipment test operations is the conceptual design process of the respective database system, the implementation effectiveness of which largely determines the level of success and ability to develop the systems being created. The introduced conclusions and suggestions can be used in the

  16. Building a genome database using an object-oriented approach.

    Science.gov (United States)

    Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud

    2002-01-01

    GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increase automation of the database population process, thereby reducing manual intervention. As a first step, we used Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, and an expert solution was devised and represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.
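
    The automation programs themselves were written in Java from UML templates; the flavor of one codified curation rule can be sketched in Python (the checks shown are invented examples of the kinds of annotation errors and omissions the abstract mentions):

      # Sketch: a codified curation rule applied to records fetched from NCBI.
      def check_record(record):
          """Return a list of problems found in one sequence record (toy rule set)."""
          problems = []
          if record.get("organelle") not in ("mitochondrion", "chloroplast"):
              problems.append("organelle missing or not mitochondrion/chloroplast")
          if not record.get("taxonomy"):
              problems.append("taxonomic lineage absent")
          if len(record.get("sequence", "")) == 0:
              problems.append("empty sequence")
          return problems

      record = {"organelle": "mitochondrion", "taxonomy": [], "sequence": "ATGC"}
      for p in check_record(record):
          print("flag for expert review:", p)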

  17. The NASA Fireball Network Database

    Science.gov (United States)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  18. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  19. Set of information technologies and their role in automation of agricultural production

    Directory of Open Access Journals (Sweden)

    V. V. Al’t

    2018-01-01

    Full Text Available Modern agro-industrial enterprises are characterized by a high level of automation of technological processes; their level of technological development corresponds to the fifth and sixth technological revolutions. Automatic and automated technologies in crop and livestock production use data from internet technologies, Global Positioning Satellite surveys and observations, and the automated operation of machine and tractor aggregates. A model nucleus and a set of information models of agricultural objects were designed on the basis of the systematization of information streams. The analysis of the results of simulating biological objects, cenoses, ecosystems, agrocenoses and agroecosystems showed that the most acceptable type of model is the systemically determined dynamic model of the potentially effective type. An internet-oriented database of the innovative developments of institutes of an agrarian profile was designed. It contains information about varieties, machines, mechanization means, electrification and technologies in crop production, livestock production, forage production, feed processing, crop protection, biotechnologies, mechanization, veterinary science and agricultural production processing. The database is positioned as a subject-oriented, retrieval database in web space. The list of indices to which the created architecture of the database corresponds is defined. More than 20 various databases for agricultural purposes, used in educational process and production, have been created. These databases are useful to agricultural producers as well as organizers of agricultural production, scientists, teachers and students. Information on the key indicators of innovative products and on the institutes that developed the innovative solutions is provided in the database.

  20. Planning for the automation of the University of Dar es Salaam Library

    African Journals Online (AJOL)

    The paper examines the planning process for the automation of the University of Dar es Salaam Library. The planning phases described include the preparation phase, planning for implementation and database construction. The major issues during the preparation phase are the discussion on the context of automation, ...

  1. High-throughput STR analysis for DNA database using direct PCR.

    Science.gov (United States)

    Sim, Jeong Eun; Park, Su Jeong; Lee, Han Chul; Kim, Se-Yong; Kim, Jong Yeol; Lee, Seung Hwan

    2013-07-01

    Since the Korean criminal DNA database was launched in 2010, we have focused on establishing an automated DNA database profiling system that analyzes short tandem repeat loci in a high-throughput and cost-effective manner. We established a DNA database profiling system without DNA purification using a direct PCR buffer system. The quality of the direct PCR procedure was compared with that of the conventional PCR system under their respective optimized conditions. The results revealed not only perfect concordance but also an excellent PCR success rate, good electropherogram quality, and an optimal intra/inter-locus peak height ratio. In particular, the proportion of samples requiring DNA extraction due to direct PCR failure could be minimized to <3%. In conclusion, the newly developed direct PCR system can be adopted for automated DNA database profiling systems to replace or supplement conventional PCR systems in a time- and cost-saving manner. © 2013 American Academy of Forensic Sciences Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.

  2. Evaluating Air Force Civil Engineer's Current Automated Information Systems

    National Research Council Canada - National Science Library

    Phillips, Edward

    2002-01-01

    ...) to the Automated Civil Engineer System (ACES). This research focused on users' perceptions of both database and data importance to determine if significant differences existed between various user sub-groups...

  3. Automated bar coding of air samples at Hanford (ABCASH)

    International Nuclear Information System (INIS)

    Troyer, G.L.; Brayton, D.D.; McNeece, S.G.

    1992-10-01

    This article describes the basis, main features, and benefits of an automated system for tracking and reporting radioactive air particulate samples. The system was developed due to a recognized need for improving the quality and integrity of air sample data related to personnel and environmental protection. The capture, storage, and retrieval of air sample data are described. The automation of the associated data capture and input eliminates a large potential for human error. The system utilizes personal computers, handheld computers, a commercial personal computer database package, commercial programming languages, and complete documentation to satisfy the system's automation objective.

  4. High-throughput mouse genotyping using robotics automation.

    Science.gov (United States)

    Linask, Kaari L; Lo, Cecilia W

    2005-02-01

    The use of mouse models is rapidly expanding in biomedical research. This has dictated the need for the rapid genotyping of mutant mouse colonies for more efficient utilization of animal holding space. We have established a high-throughput protocol for mouse genotyping using two robotics workstations: a liquid-handling robot to assemble PCR and a microfluidics electrophoresis robot for PCR product analysis. This dual-robotics setup incurs lower start-up costs than a fully automated system while still minimizing human intervention. Essential to this automation scheme is the construction of a database containing customized scripts for programming the robotics workstations. Using these scripts and the robotics systems, multiple combinations of genotyping reactions can be assembled simultaneously, allowing even complex genotyping data to be generated rapidly with consistency and accuracy. A detailed protocol, database, scripts, and additional background information are available at http://dir.nhlbi.nih.gov/labs/ldb-chd/autogene/.

  5. Automation trust and attention allocation in multitasking workspace.

    Science.gov (United States)

    Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke

    2018-07-01

    Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.

  6. A Projection of Automated Book Production Control

    Directory of Open Access Journals (Sweden)

    Mario Barisic

    2006-12-01

    Full Text Available The paper elaborates on the recommendation to systematically introduce XML technologies as a standard and integral factor in publishing and graphic arts business activities, and as a further improvement of the existing PostScript graphic production platform. Procedures are proposed for applying norm setting to production processes through related, hierarchically organized databases connected under XML technology, as well as a book production norm-setting system. A proposal for the automation of work processes in the domain of printing business control is elaborated under the CIP4-JDF automation system. Operation results are used as guidelines for setting the elements of automated business operations in the book production domain, with integrated elements of new technologies, compatible with global trends.

  7. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  8. Automated Classification of Seedlings Using Computer Vision

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Christiansen, Peter

    The objective of this project is to investigate the possibilities of recognizing plant species at multiple growth stages based on RGB images. Plants and leaves are initially segmented from a database through a partly automated procedure providing samples of 2438 plants and 4767 leaves distributed...

  9. Robust Automated Image Co-Registration of Optical Multi-Sensor Time Series Data: Database Generation for Multi-Temporal Landslide Detection

    Directory of Open Access Journals (Sweden)

    Robert Behling

    2014-03-01

    Full Text Available Reliable multi-temporal landslide detection over longer periods of time requires multi-sensor time series data characterized by high internal geometric stability, as well as high relative and absolute accuracy. For this purpose, a new methodology for fully automated co-registration has been developed allowing efficient and robust spatial alignment of standard orthorectified data products originating from a multitude of optical satellite remote sensing data of varying spatial resolution. Correlation-based co-registration uses world-wide available terrain corrected Landsat Level 1T time series data as the spatial reference, ensuring global applicability. The developed approach has been applied to a multi-sensor time series of 592 remote sensing datasets covering an approximately 12,000 km2 area in Southern Kyrgyzstan (Central Asia) strongly affected by landslides. The database contains images acquired during the last 26 years by Landsat (ETM), ASTER, SPOT and RapidEye sensors. Analysis of the spatial shifts obtained from co-registration has revealed sensor-specific alignments ranging between 5 m and more than 400 m. Overall accuracy assessment of these alignments has resulted in a high relative image-to-image accuracy of 17 m (RMSE) and a high absolute accuracy of 23 m (RMSE) for the whole co-registered database, making it suitable for multi-temporal landslide detection at a regional scale in Southern Kyrgyzstan.
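
    The correlation-based shift estimation at the heart of such co-registration can be sketched with scikit-image's phase correlation routine (a generic stand-in under an assumed 30 m pixel size, not the authors' implementation):

      # Sketch: estimate a scene's misregistration against a Landsat reference
      # chip by phase correlation, then express the shift in metres.
      import numpy as np
      from skimage.registration import phase_cross_correlation

      rng = np.random.default_rng(0)
      reference = rng.random((512, 512))                       # stand-in Landsat L1T chip
      moving = np.roll(reference, shift=(3, -5), axis=(0, 1))  # simulated offset scene

      shift, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
      pixel_size_m = 30.0                                      # assumed Landsat-class GSD
      print("shift (rows, cols):", shift, "->", shift * pixel_size_m, "m")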

  10. Mass Spectra-Based Framework for Automated Structural Elucidation of Metabolome Data to Explore Phytochemical Diversity

    Science.gov (United States)

    Matsuda, Fumio; Nakabayashi, Ryo; Sawada, Yuji; Suzuki, Makoto; Hirai, Masami Y.; Kanaya, Shigehiko; Saito, Kazuki

    2011-01-01

    A novel framework for automated elucidation of metabolite structures in liquid chromatography–mass spectrometer metabolome data was constructed by integrating databases. High-resolution tandem mass spectra data automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied for the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method. PMID:22645535

  11. Mass spectra-based framework for automated structural elucidation of metabolome data to explore phytochemical diversity

    Directory of Open Access Journals (Sweden)

    Fumio Matsuda

    2011-08-01

    Full Text Available A novel framework for automated elucidation of metabolite structures in liquid chromatography-mass spectrometer (LC-MS) metabolome data was constructed by integrating databases. High-resolution tandem mass spectra data automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied for the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method.
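
    The database-lookup step in such a framework, matching an observed accurate mass against compound records within a tolerance, can be sketched as follows (toy compound list; a ppm window is a common, assumed choice):

      # Sketch: annotate metabolite signals by accurate-mass lookup (toy database).
      COMPOUNDS = [
          ("kaempferol", 286.0477),        # monoisotopic masses, Da
          ("quercetin", 302.0427),
          ("sinapoyl malate", 340.0794),
      ]

      def annotate(observed_mass, tol_ppm=5.0):
          hits = []
          for name, mass in COMPOUNDS:
              if abs(observed_mass - mass) / mass * 1e6 <= tol_ppm:
                  hits.append(name)
          return hits or ["unknown: flag for MS/MS interpretation"]

      print(annotate(302.0429))   # -> ['quercetin']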

  12. Automated lung nodule classification following automated nodule detection on CT: A serial approach

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Altman, Michael B.; Wilkie, Joel; Sone, Shusuke; Li, Feng; Doi, Kunio; Roy, Arunabha S.

    2003-01-01

    We have evaluated the performance of an automated classifier applied to the task of differentiating malignant and benign lung nodules in low-dose helical computed tomography (CT) scans acquired as part of a lung cancer screening program. The nodules classified in this manner were initially identified by our automated lung nodule detection method, so that the output of automated lung nodule detection was used as input to automated lung nodule classification. This study begins to narrow the distinction between the 'detection task' and the 'classification task'. Automated lung nodule detection is based on two- and three-dimensional analyses of the CT image data. Gray-level-thresholding techniques are used to identify initial lung nodule candidates, for which morphological and gray-level features are computed. A rule-based approach is applied to reduce the number of nodule candidates that correspond to non-nodules, and the features of remaining candidates are merged through linear discriminant analysis to obtain final detection results. Automated lung nodule classification merges the features of the lung nodule candidates identified by the detection algorithm that correspond to actual nodules through another linear discriminant classifier to distinguish between malignant and benign nodules. The automated classification method was applied to the computerized detection results obtained from a database of 393 low-dose thoracic CT scans containing 470 confirmed lung nodules (69 malignant and 401 benign nodules). Receiver operating characteristic (ROC) analysis was used to evaluate the ability of the classifier to differentiate between nodule candidates that correspond to malignant nodules and nodule candidates that correspond to benign lesions. The area under the ROC curve for this classification task attained a value of 0.79 during a leave-one-out evaluation
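
    The final merging step, combining candidate features with a linear discriminant and scoring the result by ROC analysis, looks roughly like this in scikit-learn (synthetic features and labels; the study's actual features and leave-one-out protocol are not reproduced):

      # Sketch: linear discriminant analysis over nodule features, scored by ROC AUC.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(470, 6))      # morphological / gray-level features (synthetic)
      y = rng.integers(0, 2, size=470)   # 1 = malignant, 0 = benign (synthetic)

      scores = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                                 cv=10, method="decision_function")
      print("AUC:", roc_auc_score(y, scores))   # ~0.5 here, since features are random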

  13. Completion of autobuilt protein models using a database of protein fragments

    International Nuclear Information System (INIS)

    Cowtan, Kevin

    2012-01-01

    Two developments in the process of automated protein model building in the Buccaneer software are described: the use of a database of protein fragments in improving the model completeness and the assembly of disconnected chain fragments into complete molecules. A general-purpose library for protein fragments of arbitrary size is described, with a highly optimized search method allowing the use of a larger database than in previous work. The problem of assembling an autobuilt model into complete chains is discussed. This involves the assembly of disconnected chain fragments into complete molecules and the use of the database of protein fragments in improving the model completeness. Assembly of fragments into molecules is a standard step in existing model-building software, but the methods have not received detailed discussion in the literature

  14. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  15. Using artificial intelligence to automate remittance processing.

    Science.gov (United States)

    Adams, W T; Snow, G M; Helmick, P M

    1998-06-01

    The consolidated business office of the Allegheny Health Education Research Foundation (AHERF), a large integrated healthcare system based in Pittsburgh, Pennsylvania, sought to improve its cash-related business office activities by implementing an automated remittance processing system that uses artificial intelligence. The goal was to create a completely automated system whereby all monies it processed would be tracked, automatically posted, analyzed, monitored, controlled, and reconciled through a central database. Using a phased approach, the automated payment system has become the central repository for all of the remittances for seven of the hospitals in the AHERF system and has allowed for the complete integration of these hospitals' existing billing systems, document imaging system, and intranet, as well as the new automated payment posting, and electronic cash tracking and reconciling systems. For such new technology, which is designed to bring about major change, factors contributing to the project's success were adequate planning, clearly articulated objectives, marketing, end-user acceptance, and post-implementation plan revision.

  16. Challenges in converting an interviewer-administered food probe database to self-administration in the National Cancer Institute Automated Self-administered 24-Hour Recall (ASA24).

    Science.gov (United States)

    Zimmerman, Thea Palmer; Hull, Stephen G; McNutt, Suzanne; Mittl, Beth; Islam, Noemi; Guenther, Patricia M; Thompson, Frances E; Potischman, Nancy A; Subar, Amy F

    2009-12-01

    The National Cancer Institute (NCI) is developing an automated, self-administered 24-hour dietary recall (ASA24) application to collect and code dietary intake data. The goal of the ASA24 development is to create a web-based dietary interview based on the US Department of Agriculture (USDA) Automated Multiple Pass Method (AMPM) instrument currently used in the National Health and Nutrition Examination Survey (NHANES). The ASA24 food list, detail probes, and portion probes were drawn from the AMPM instrument; portion-size pictures from Baylor College of Medicine's Food Intake Recording Software System (FIRSSt) were added; and the food code/portion code assignments were linked to the USDA Food and Nutrient Database for Dietary Studies (FNDDS). The requirements that the interview be self-administered and fully auto-coded presented several challenges as the AMPM probes and responses were linked with the FNDDS food codes and portion pictures. This linking was accomplished through a "food pathway," or the sequence of steps that leads from a respondent's initial food selection, through the AMPM probes and portion pictures, to the point at which a food code and gram weight portion size are assigned. The ASA24 interview database that accomplishes this contains more than 1,100 food probes and more than 2 million food pathways and will include about 10,000 pictures of individual foods depicting up to 8 portion sizes per food. The ASA24 will make the administration of multiple days of recalls in large-scale studies economical and feasible.
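
    A food pathway can be pictured as a walk through a tree of probes that ends at a food code and a gram weight; a toy Python sketch with invented codes and probe answers:

      # Sketch: one food pathway from initial food pick to food code + gram weight.
      PATHWAYS = {
          ("coffee", "with milk", "medium cup"): ("11100000_toy", 240.0),
          ("coffee", "black", "medium cup"): ("11111000_toy", 237.0),
      }

      def resolve(food, detail_answer, portion_answer):
          """Follow the pathway defined by the respondent's answers."""
          food_code, grams = PATHWAYS[(food, detail_answer, portion_answer)]
          return {"food_code": food_code, "gram_weight": grams}

      print(resolve("coffee", "with milk", "medium cup"))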

  17. Automated sampling and data processing derived from biomimetic membranes

    DEFF Research Database (Denmark)

    Perry, Mark; Vissing, Thomas; Boesen, P.

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet (TM)) for efficient data management

  18. Shot Automation for the National Ignition Facility

    International Nuclear Information System (INIS)

    Lagin, L J; Bettenhausen, R C; Beeler, R G; Bowers, G A; Carey, R.; Casavant, D.D.; Cline, B.D.; Demaret, R.D.; Domyancic, D.M.; Elko, S.D.; Fisher, J.M.; Hermann, M.R.; Krammen, J.E.; Kohut, T.R.; Marshall, C.D.; Mathisen, D.G.; Ludwigsen, A.P.; Patterson, Jr. R.W.; Sanchez, R.J.; Stout, E.A.; Van Arsdall, P.J.; Van Wonterghem, B.M.

    2005-01-01

    A shot automation framework has been developed and deployed during the past year to automate shots performed on the National Ignition Facility (NIF) using the Integrated Computer Control System. This framework automates a 4-8 hour shot sequence that includes inputting shot goals from a physics model, setup of the laser and diagnostics, automatic alignment of laser beams and verification of status. This sequence consists of a set of preparatory verification shots leading to amplified system shots using a 4-minute countdown, triggering during the last 2 seconds using a high-precision timing system, followed by post-shot analysis and archiving. The framework provides for flexible, model-driven execution of scriptable automation steps called macro steps. The framework is driven by high-level shot director software that provides a restricted set of shot life cycle state transitions to 25 collaboration supervisors that automate 8-beam laser bundles and a common set of shared resources. Each collaboration supervisor commands approximately 10 subsystem shot supervisors that perform automated control and status verification. Collaboration supervisors translate shot life cycle state commands from the shot director into sequences of 'macro steps' to be distributed to each of its shot supervisors. Each shot supervisor maintains the order of macro steps for each subsystem and supports collaboration between macro steps. They also manage failures, restarts and rejoining into the shot cycle (if necessary) and manage auto/manual macro step execution and collaborations with other collaboration supervisors. Shot supervisors execute macro step shot functions commanded by collaboration supervisors. Each macro step has database-driven verification phases and a scripted perform phase. This provides a highly flexible methodology for performing a variety of NIF shot types. Database tables define the order of work and dependencies (workflow) of macro steps to be performed for a
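
    The macro-step pattern, database-defined ordering with a verification phase followed by a scripted perform phase, can be caricatured in a few lines of Python (step names and state keys are invented):

      # Sketch: a shot supervisor executing database-ordered macro steps,
      # each with a verify phase and a scripted perform phase.
      MACRO_STEPS = [   # in the real system, the ordering comes from workflow tables
          {"name": "input_shot_goals", "verify": lambda s: "goals" in s,
           "perform": lambda s: s.update(laser_setpoints="from physics model")},
          {"name": "align_beams", "verify": lambda s: "laser_setpoints" in s,
           "perform": lambda s: s.update(aligned=True)},
      ]

      def run_shot(state):
          for step in MACRO_STEPS:
              if not step["verify"](state):
                  raise RuntimeError(f"verification failed before {step['name']}")
              step["perform"](state)
              print("completed:", step["name"])

      run_shot({"goals": "example shot goals"})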

  19. Friction ridge skin - Automated Fingerprint Identification System (AFIS)

    NARCIS (Netherlands)

    Meuwly, Didier

    2013-01-01

    This contribution describes the development and the forensic use of automated fingerprint identification systems (AFISs). AFISs were initially developed in order to overcome the limitations of the paper-based fingerprint collections, by digitizing the ten-print cards in computerized databases and to

  20. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

    Well-designed visual interfaces encourage non-expert users to use relational database systems. In such systems, for example office automation systems or engineering database systems, non-expert users interactively access the database from visual terminals. Some users may want exclusive use of the database, while other users may share it, depending on the situation. Because those jobs need a lot of time to complete, concurrency control must be well designed to enhance concurrency. The extended method of concurrency control in FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network, and the database machine FREND. This paper also stresses that the workstations and FREND must cooperate to accomplish concurrency control for interactive applications

  1. Automated [inservice testing] IST program

    International Nuclear Information System (INIS)

    Wright, W.M.

    1990-01-01

    There are two methods used to manage a Section XI program: manual and automated. The manual method usually consists of handwritten records of test results and scheduling requirements. This method, while initially lower in cost, results in problems later in the life of a plant as data continue to accumulate. Automation allows instant access to forty years of test results. Due to the lower cost and higher performance of today's personal computers, an automated method via a computer program provides an excellent way of managing the vast amount of data that accumulates over the forty-year life of a plant. Through the use of a computer, special functions involving these data are available which would not be practical with a manual method. This paper describes some of the advantages of using a computer program to manage the Section XI IST program. ISTBASE consists of program code and numerous databases. The source code is written and compiled in the CLIPPER (tm) language. Graphing routines are performed by the dGE (tm) graphics library, and graphs are displayed in EGA form. Since it was estimated that the total compiled code would exceed 640K of RAM, overlays through the use of modular programming were used to work within the DOS restriction of 640K of RAM. The use of overlays still requires the user to gain access to ISTBASE through the PASSWORD module. The database files are designed to be compatible with the dBASE III+ (tm) data structure, which allows transfer of data between ISTBASE and other database managers/applications. A math co-processor is utilized to speed up calculations for graphs and other mathematical computations. Program code and data files require a hard disk drive with at least 28 Meg capacity. While ISTBASE will execute on an 8088-based computer, an 80286 computer with a 12 MHz operating speed should be considered the minimum system configuration

  2. CCD characterization and measurements automation

    International Nuclear Information System (INIS)

    Kotov, I.V.; Frank, J.; Kotov, A.I.; Kubanek, P.; O'Connor, P.; Prouza, M.; Radeka, V.; Takacs, P.

    2012-01-01

    Modern mosaic cameras have grown both in size and in number of sensors. The required volume of sensor testing and characterization has grown accordingly. For camera projects as large as the LSST, test automation becomes a necessity. A CCD testing and characterization laboratory was built and is in operation for the LSST project. Characterization of LSST study contract sensors has been performed. The characterization process and its automation are discussed, and results are presented. Our system automatically acquires images, populates a database with metadata information, and runs express analysis. This approach is illustrated with 55Fe data analysis. 55Fe data are used to measure gain, charge transfer efficiency and charge diffusion. Examples of express analysis results are presented and discussed.

  3. ADASS Web Database XML Project

    Science.gov (United States)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
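
    The relational-to-XML mapping mentioned at the end can be sketched with the Python standard library (SQLite stands in for MySQL; table and tag names are invented):

      # Sketch: export conference-paper rows from a relational table as XML.
      import sqlite3
      import xml.etree.ElementTree as ET

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE papers (year INTEGER, author TEXT, title TEXT)")
      db.execute("INSERT INTO papers VALUES (2000, 'Barg, M. I.', 'ADASS Web Database XML Project')")

      root = ET.Element("adass")
      for year, author, title in db.execute("SELECT year, author, title FROM papers"):
          paper = ET.SubElement(root, "paper", year=str(year))
          ET.SubElement(paper, "author").text = author
          ET.SubElement(paper, "title").text = title

      print(ET.tostring(root, encoding="unicode"))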

  4. Automating Groundwater Sampling At Hanford, The Next Step

    International Nuclear Information System (INIS)

    Connell, C.W.; Conley, S.F.; Hildebrand, R.D.; Cunningham, D.E.

    2010-01-01

    Historically, the groundwater monitoring activities at the Department of Energy's Hanford Site in southeastern Washington State have been very 'people intensive.' Approximately 1500 wells are sampled each year by field personnel or 'samplers.' These individuals have been issued pre-printed forms showing information about the well(s) for a particular sampling evolution. This information is taken from 2 official electronic databases: the Hanford Well Information System (HWIS) and the Hanford Environmental Information System (HEIS). The samplers used these hardcopy forms to document the groundwater samples and well water-levels. After recording the entries in the field, the samplers turned the forms in at the end of the day and other personnel posted the collected information onto a spreadsheet that was then printed and included in a log book. The log book was then used to make manual entries of the new information into the software application(s) for the HEIS and HWIS databases. A pilot project for automating this extremely tedious process was launched in 2008. Initially, the automation was focused on water-level measurements. Now, the effort is being extended to automate the meta-data associated with collecting groundwater samples. The project allowed electronic forms produced in the field by samplers to be used in a workflow process where the data are transferred to the database and the electronic form is filed in managed records - thus eliminating manually completed forms. Eliminating the manual forms and streamlining the data entry not only improved the accuracy of the information recorded, but also enhanced the efficiency and sampling capacity of field office personnel.

  5. Detecting errors and anomalies in computerized materials control and accountability databases

    International Nuclear Information System (INIS)

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-01-01

    The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results
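
    The kind of rule such an expert system encodes can be sketched simply in Python (the rules and tolerances below are invented examples, not the Los Alamos rule base):

      # Sketch: rule-based screening of MC and A inventory records for anomalies.
      def screen(record):
          """Return reasons a record should be flagged for expert review."""
          flags = []
          if record["quantity_g"] < 0:
              flags.append("negative quantity")
          if record["measured_g"] and abs(record["quantity_g"] - record["measured_g"]) > 1.0:
              flags.append("book value disagrees with measurement beyond tolerance")
          if not record["location"]:
              flags.append("missing storage location")
          return flags

      rec = {"quantity_g": 120.0, "measured_g": 118.4, "location": "vault A (example)"}
      print(screen(rec))   # -> ['book value disagrees with measurement beyond tolerance']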

  6. Good control practices underlined by an on-line fuzzy control database

    Directory of Open Access Journals (Sweden)

    Alonso, M. V.

    1994-04-01

    Full Text Available In the olive oil trade, control systems that automate extraction processes, cutting production costs and increasing processing capacity without losing quality, are always desirable. The database structure of an on-line fuzzy control of centrifugation systems and the algorithms used to attain the best control conditions are analysed. Good control practices are suggested to obtain virgin olive oil of prime quality.

  7. Automated System Marketplace 1993. Part I: Focus on Minicomputers.

    Science.gov (United States)

    Bridge, Frank R.

    1993-01-01

    The first part of the annual automation marketplace survey examines minicomputer systems in libraries. Highlights include vendor consolidation and acquisitions; system interconnection; networked databases; products related to the Americans with Disabilities Act; multimedia; vendor installations worldwide; academic versus public library…

  8. Automated Energy Distribution and Reliability System Status Report

    Energy Technology Data Exchange (ETDEWEB)

    Buche, D. L.; Perry, S.

    2007-10-01

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.

  9. From document to database: modernizing requirements management

    International Nuclear Information System (INIS)

    Giajnorio, J.; Hamilton, S.

    2007-01-01

    The creation, communication, and management of design requirements are central to the successful completion of any large engineering project, both technically and commercially. Design requirements in the Canadian nuclear industry are typically numbered lists in multiple documents created using word processing software. As an alternative, GE Nuclear Products implemented a central requirements management database for a major project at Bruce Power. The database was implemented by configuring the off-the-shelf software product, Telelogic DOORS, to GE's requirements structure. This paper describes the advantages realized by this scheme. Examples include traceability from customer requirements through to test procedures, concurrent engineering, and automated change history. (author)

  10. Proceedings. Fourth international symposium on mine mechanisation and automation

    Energy Technology Data Exchange (ETDEWEB)

    Gurgenci, H.; Hood, M. [eds.]

    1997-12-31

    Papers in the first volume are presented under the following session headings: drilling; mining robotics; machine monitoring; mine automation systems; reliability and maintenance; mine automation - communications; mechanical excavation of medium-strength rock; and new mining equipment technologies. The second volume covers: mechanical excavation of hard rock; autonomous vehicles; mechanical excavation industry experience; machine guidance; applications of rock mechanics; mine planning, management and scheduling; orebody delineation; and safety. Selected papers have been abstracted separately for the IEA Coal Research databases available on CD-ROM and the worldwide web.

  11. The STRING database in 2017

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Morris, John H; Cook, Helen

    2017-01-01

    A system-wide understanding of cellular function requires knowledge of all functional interactions between the expressed proteins. The STRING database aims to collect and integrate this information, by consolidating known and predicted protein-protein association data for a large number of organisms. The associations in STRING include direct (physical) interactions, as well as indirect (functional) interactions, as long as both are specific and biologically meaningful. Apart from collecting and reassessing available experimental data on protein-protein interactions, and importing known pathways and protein complexes from curated databases, interaction predictions are derived from the following sources: (i) systematic co-expression analysis, (ii) detection of shared selective signals across genomes, (iii) automated text-mining of the scientific literature and (iv) computational transfer of interaction knowledge between organisms based on gene orthology.

  12. Automated spectral and timing analysis of AGNs

    Science.gov (United States)

    Munz, F.; Karas, V.; Guainazzi, M.

    2006-12-01

    We have developed an autonomous script that helps the user automate XMM-Newton data analysis for the purposes of extensive statistical investigations. We test this approach by examining X-ray spectra of bright AGNs pre-selected from the public database. The event lists extracted in this process were studied further by constructing their energy-resolved Fourier power-spectrum density. This analysis combines energy distributions, light curves, and their power spectra, and it proves useful for assessing the variability patterns present in the data. As another example, an automated search based on the XSPEC package was used to reveal emission features in the 2-8 keV range.
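
    For one energy band, the energy-resolved power-spectrum density reduces to a periodogram of that band's light curve; a minimal numpy sketch with a simulated light curve:

      # Sketch: periodogram (power-spectrum density) of one energy band's light curve.
      import numpy as np

      dt = 10.0                                  # bin width, s
      t = np.arange(0, 4096) * dt
      rate = (5 + 0.4 * np.sin(2 * np.pi * t / 1000.0)
              + np.random.default_rng(2).normal(0, 0.2, t.size))   # counts/s, simulated

      freqs = np.fft.rfftfreq(t.size, d=dt)
      power = np.abs(np.fft.rfft(rate - rate.mean()))**2 / t.size

      peak = freqs[1:][np.argmax(power[1:])]     # skip the zero-frequency bin
      print(f"strongest variability near {peak:.4e} Hz (~{1/peak:.0f} s)")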

  13. Cassini Tour Atlas Automated Generation

    Science.gov (United States)

    Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.

    2011-01-01

    During the Cassini spacecraft's cruise phase and nominal mission, the Cassini Science Planning Team developed and maintained an online database of geometric and timing information called the Cassini Tour Atlas. The Tour Atlas consisted of several hundred megabytes of EVENTS mission planning software outputs, tables, plots, and images used by mission scientists for observation planning. Each time the nominal mission trajectory was altered or tweaked, a new Tour Atlas had to be regenerated manually. In the early phases of Cassini's Equinox Mission planning, an a priori estimate suggested that mission tour designers would develop approximately 30 candidate tours within a short period of time. So that Cassini scientists could analyze the science opportunities in each candidate tour quickly and thoroughly, and select the optimal series of orbits for science return, a separate Tour Atlas was required for each trajectory. The task of manually generating that number of trajectory analyses in the allotted time would have been impossible, so the entire task was automated using code written in five different programming languages. This software automates the generation of the Cassini Tour Atlas database. It performs with one UNIX command what previously took a day or two of human labor.

  14. Supervised learning for the automated transcription of spacer classification from spoligotype films

    Directory of Open Access Journals (Sweden)

    Abernethy Neil

    2009-08-01

    Background: Molecular genotyping of bacteria has revolutionized the study of tuberculosis epidemiology, yet these established laboratory techniques typically require subjective and laborious interpretation by trained professionals. In the context of a Tuberculosis Case Contact study in The Gambia we used a reverse hybridization laboratory assay called spoligotype analysis. To facilitate processing of spoligotype images we have developed tools and algorithms to automate the classification and transcription of these data directly to a database while allowing for manual editing. Results: Features extracted from each of the 1849 spots on a spoligo film were classified using two supervised learning algorithms. A graphical user interface allows manual editing of the classification before export to a database. The application was tested on ten films of differing quality, and the results of the best classifier were compared to expert manual classification, giving a median correct classification rate of 98.1% (interquartile range: 97.1% to 99.2%), with an automated processing time of less than 1 minute per film. Conclusion: The software implementation offers considerable time savings over manual processing whilst allowing expert editing of the automated classification. The automatic upload of the classification to a database reduces the chances of transcription errors.
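
    As a minimal sketch of the classification step described above (the abstract does not name the two learners, so scikit-learn's SVM and synthetic per-spot intensity features stand in here), each of the 1849 spots becomes a feature vector that a trained classifier labels present or absent:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # 1849 spots x 3 synthetic features: mean intensity, variance, edge contrast.
        X = rng.normal(size=(1849, 3))
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # stand-in ground truth

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        print(f"correct classification rate: {clf.score(X_te, y_te):.3f}")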

  15. Automated biosurveillance data from England and Wales, 1991-2011.

    Science.gov (United States)

    Enki, Doyo G; Noufaily, Angela; Garthwaite, Paul H; Andrews, Nick J; Charlett, André; Lane, Chris; Farrington, C Paddy

    2013-01-01

    Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991-2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity.
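
    The extra-Poisson variability noted above is exactly what a naive threshold must allow for. A deliberately simple exceedance detector of the kind such systems build on (an illustrative stand-in, not the authors' algorithm) compares the current weekly count with baseline weeks, using the sample variance when it exceeds the Poisson assumption that variance equals the mean:

        import statistics

        def flag_exceedance(baseline_counts, current_count, z=2.58):
            mean = statistics.fmean(baseline_counts)
            # Under a Poisson model the variance equals the mean; take the larger
            # of the two so overdispersed organisms do not raise constant alarms.
            var = max(statistics.pvariance(baseline_counts), mean)
            return current_count > mean + z * var ** 0.5

        print(flag_exceedance([4, 7, 5, 6, 9, 5, 6, 8], 19))  # True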

  16. Ultrasound automated volume calculation in reproduction and in pregnancy.

    Science.gov (United States)

    Ata, Baris; Tulandi, Togas

    2011-06-01

    To review studies assessing the application of ultrasound automated volume calculation in reproductive medicine. We performed a literature search using the keywords "SonoAVC, sonography-based automated volume calculation, automated ultrasound, 3D ultrasound, antral follicle, follicle volume, follicle monitoring, follicle tracking, in vitro fertilization, controlled ovarian hyperstimulation, embryo volume, embryonic volume, gestational sac, and fetal volume" and conducted the search in PubMed, Medline, EMBASE, and the Cochrane Database of Systematic Reviews. Reference lists of identified reports were manually searched for other relevant publications. Automated volume measurements are in very good agreement with actual volumes of the assessed structures or with other validated measurement methods. The technique seems to provide reliable and highly reproducible results under a variety of conditions. Automated measurements take less time than manual measurements. Ultrasound automated volume calculation is a promising new technology which is already used in daily practice especially for assisted reproduction. Improvements to the technology will undoubtedly render it more effective and increase its use.

  17. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    Science.gov (United States)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  18. A database in ACCESS for assessing vaccine serious adverse events

    Directory of Open Access Journals (Sweden)

    Thomas RE

    2015-04-01

    Full Text Available Roger E Thomas,1 Dave Jackson2,3 1Department of Family Medicine, G012 Health Sciences Centre, University of Calgary Medical School, Calgary, AB, Canada; 2Independent Research Consultant, Calgary, AB, Canada; 3Database Consultant, University of Calgary, Calgary, AB, Canada Purpose: To provide a free flexible database for use by any researcher for assessing reports of adverse events after vaccination. Results: A database was developed in Microsoft ACCESS to assess reports of serious adverse events after yellow fever vaccination using Brighton Collaboration criteria. The database is partly automated (if data panels contain identical data fields the data are automatically also entered into those fields. The purpose is to provide the database free for developers to add additional panels to assess other vaccines. Keywords: serious adverse events after vaccination, database, process to assess vaccine-associated events 

  19. Effects of Automation Types on Air Traffic Controller Situation Awareness and Performance

    Science.gov (United States)

    Sethumadhavan, A.

    2009-01-01

    The Joint Planning and Development Office has proposed the introduction of automated systems to help air traffic controllers handle the increasing volume of air traffic in the next two decades (JPDO, 2007). Because fully automated systems leave operators out of the decision-making loop (e.g., Billings, 1991), it is important to determine the right level and type of automation that will keep air traffic controllers in the loop. This study examined the differences in the situation awareness (SA) and collision detection performance of individuals when they worked with information acquisition, information analysis, decision and action selection, and action implementation automation to control air traffic (Parasuraman, Sheridan, & Wickens, 2000). When the automation was unreliable, the time taken to detect an upcoming collision was significantly longer for all the automation types compared with the information acquisition automation. This poor performance following automation failure was mediated by SA, with lower SA yielding poorer performance. Thus, the costs associated with automation failure are greater when automation is applied to higher-order stages of information processing. The results have practical implications for automation design and the development of SA training programs.

  20. Automated extraction of knowledge for model-based diagnostics

    Science.gov (United States)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  1. Use of a translational database in a clinical department

    DEFF Research Database (Denmark)

    Højfeldt, Anne Dirks; Johnsen, Hans E; Bøgsted, Martin

    2010-01-01

    In haematology it is assumed that integrative analysis of global gene expression, protein and cell profiles as well as clinical data will lead to the development of new diagnostic, prognostic and predictive methods. A translational database system registering and combining all data and clinical observations about the patient is therefore needed. It is expected that, along with automated prediction and prognosis tools, such a database system may have the potential to assist the development of new machine-based diagnostic decision-making processes. Publication date: 12 July 2010.

  2. Automation of ORIGEN2 calculations for the transuranic waste baseline inventory database using a pre-processor and a post-processor

    International Nuclear Information System (INIS)

    Liscum-Powell, J.

    1997-06-01

    The purpose of the work described in this report was to automate ORIGEN2 calculations for the Waste Isolation Pilot Plant (WIPP) Transuranic Waste Baseline Inventory Database (WTWBID); this was done by developing a pre-processor to generate ORIGEN2 input files from WTWBID inventory files and a post-processor to remove excess information from the ORIGEN2 output files. The calculations performed with ORIGEN2 estimate the radioactive decay and buildup of various radionuclides in the waste streams identified in the WTWBID. The resulting radionuclide inventories are needed for performance assessment calculations for the WIPP site. The work resulted in the development of PreORG, which requires interaction with the user to generate ORIGEN2 input files on a site-by-site basis, and PostORG, which processes ORIGEN2 output into more manageable files. Both programs are written in the FORTRAN 77 computer language. After running PreORG, the user will run ORIGEN2 to generate the desired data; upon completion of ORIGEN2 calculations, the user can run PostORG to process the output to make it more manageable. All the programs run on a 386 PC or higher with a math co-processor, or on a computer platform running under the VMS operating system. The pre- and post-processors for ORIGEN2 were generated for use with Rev. 1 data of the WTWBID and can also be used with Rev. 2 and 3 data of the TWBID (Transuranic Waste Baseline Inventory Database).
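
    The pre-/post-processor pattern described here is easy to picture in outline. The sketch below restates it in Python rather than the original FORTRAN 77, with hypothetical record layouts: generate an input deck per waste stream, run the code, then keep only the output lines of interest.

        def write_origen_input(stream_id, nuclide_grams, path):
            """Emit a minimal, hypothetical ORIGEN2 deck for one waste stream."""
            with open(path, "w") as f:
                f.write(f"* deck for waste stream {stream_id}\n")
                for nuclide, grams in sorted(nuclide_grams.items()):
                    f.write(f"{nuclide:<8s} {grams:12.4e}\n")

        def trim_origen_output(in_path, out_path, keep_prefixes=("PU", "AM", "CM")):
            """Drop everything except the nuclide lines the assessment needs."""
            with open(in_path) as src, open(out_path, "w") as dst:
                for line in src:
                    if line.startswith(keep_prefixes):
                        dst.write(line)

        write_origen_input("WS-001", {"PU239": 1.2e2, "AM241": 3.4e0}, "ws001.inp")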

  3. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  4. How to automatically test and validate your database backup and recovery strategy

    International Nuclear Information System (INIS)

    Gaspar Aparicio, Ruben

    2011-01-01

    The major challenge we solve with this software project is the automated validation of backups sent to tape for Oracle databases. While Oracle Recovery Manager (RMAN) provides tools like 'restore validate', the real and only certain proof is a restore. This initial aim evolved into a recovery platform capable of covering more complex use cases, such as validation of the backup strategy of Very Large DataBases (VLDB), and schema recoveries to cure logical errors or to provide database snapshots by means of exports.
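
    The core of such a platform is a loop that actually performs a restore or validation run and inspects the result. A minimal sketch of that step, assuming an `rman` client on PATH and a configured target database; the command-file contents follow the "restore validate" idea named above, but the exact RMAN syntax should be checked against the Oracle release in use:

        import subprocess

        CMDFILE = "validate.rman"  # e.g. containing: RESTORE DATABASE VALIDATE;
        log = subprocess.run(
            ["rman", "target", "/", f"cmdfile={CMDFILE}"],
            capture_output=True, text=True,
        ).stdout

        errors = [l for l in log.splitlines() if l.startswith(("RMAN-", "ORA-"))]
        print("backup validated" if not errors else f"{len(errors)} errors found")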

  5. Automated Energy Distribution and Reliability System (AEDR): Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Buche, D. L.

    2008-07-01

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.

  6. Automated sampling and data processing derived from biomimetic membranes

    International Nuclear Information System (INIS)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H; Boesen, T P; Emneus, J

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)

  7. Automated sampling and data processing derived from biomimetic membranes

    Energy Technology Data Exchange (ETDEWEB)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H [Aquaporin A/S, Diplomvej 377, DK-2800 Kgs. Lyngby (Denmark); Boesen, T P [Xefion ApS, Kildegaardsvej 8C, DK-2900 Hellerup (Denmark); Emneus, J, E-mail: Claus.Nielsen@fysik.dtu.d [DTU Nanotech, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark)

    2009-12-15

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)

  8. Automated Transportation Management System (ATMS) user's manual. Revision 1

    International Nuclear Information System (INIS)

    Smith, P.D.

    1994-01-01

    The Automated Transportation Management System (ATMS) Software User Guide (SUG) constitutes the user procedures for the ATMS System. Information in this document will be used by the user to operate the automated system. It is intended to be used as a reference manual to guide and direct the user(s) through the ATMS software product and its environment. The objectives of ATMS are as follows: to better support the Procurement function with freight rate information; to free Transportation Logistics personnel from routine activities such as the auditing and input of freight billing information; to comply with Headquarters Department of Energy-Inspector General (DOE-IG) audit findings to automate transportation management functions; to reduce the keying of data into the Shipment Mobility Accountability Collection (SMAC) database; and to provide automation for preparing the Bill of Lading, Declaration of Dangerous Goods, Emergency Response Guide, and shipping labels using HM181 retrieval of hazardous material table text information.

  9. Automated dating of the world’s language families based on lexical similarity

    OpenAIRE

    Holman, E.; Brown, C.; Wichmann, S.; Müller, A.; Velupillai, V.; Hammarström, H.; Sauppe, S.; Jung, H.; Bakker, D.; Brown, P.; Belyaev, O.; Urban, M.; Mailhammer, R.; List, J.; Egorov, D.

    2011-01-01

    This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein distances...

  10. Improving treatment plan evaluation with automation

    Science.gov (United States)

    Covington, Elizabeth L.; Chen, Xiaoping; Younge, Kelly C.; Lee, Choonik; Matuszak, Martha M.; Kessler, Marc L.; Keranen, Wayne; Acosta, Eduardo; Dougherty, Ashley M.; Filpansick, Stephanie E.

    2016-01-01

    The goal of this work is to evaluate the effectiveness of the Plan-Checker Tool (PCT), which was created to improve first-time plan quality, reduce patient delays, increase the efficiency of our electronic workflow, and standardize and automate the physics plan review in the treatment planning system (TPS). PCT uses an application programming interface to check and compare data from the TPS and treatment management system (TMS). PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user as part of a plan readiness check for treatment. Prior to and during PCT development, errors identified during the physics review and causes of patient treatment start delays were tracked to prioritize which checks should be automated. Nineteen of 33 checklist items were automated, with data extracted with PCT. There was a 60% reduction in the number of patient delays in the six months after PCT release. PCT was successfully implemented for use on all external beam treatment plans in our clinic. While the number of errors found during the physics check did not decrease, automation of checks increased visibility of errors during the physics check, which led to decreased patient delays. The methods used here can be applied to any TMS and TPS that allows queries of the database.
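
    The automated portion of such a checklist reduces to comparing fields queried from the two systems. A generic sketch of that pattern follows; the field names and the dictionaries standing in for the real TPS/TMS API queries are hypothetical, not PCT's actual interface:

        tps = {"prescription_dose_cGy": 6000, "fractions": 30, "machine": "TB1"}
        tms = {"prescription_dose_cGy": 6000, "fractions": 30, "machine": "TB2"}

        def run_checks(tps, tms, checks):
            failures = [c for c in checks if tps.get(c) != tms.get(c)]
            for c in failures:
                print(f"MISMATCH {c}: TPS={tps.get(c)!r} TMS={tms.get(c)!r}")
            return not failures

        ok = run_checks(tps, tms, ["prescription_dose_cGy", "fractions", "machine"])
        print("plan ready" if ok else "hold for physics review")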

  11. The factographic database on atomic spectroscopy ''spectr-2'' for information service in the field of thermonuclear and quantum electronic investigations

    International Nuclear Information System (INIS)

    Bugaev, V.Yu.; Pal'chikov, V.G.; Skobelev, I.Yu.; Faenov, A.Ya.

    1990-01-01

    The ''Spectr-2'' automated database is an extension of the ''Spectr-1'' database developed at VNIIFTRI (USSR) for the storage and rapid retrieval of atomic characteristics of multicharged ions. The information structure, the interaction with the terminal user, and the accumulation of information in this database are described. 4 figs.

  12. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    Directory of Open Access Journals (Sweden)

    Jordi Vilaplana

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  13. Database constraints applied to metabolic pathway reconstruction tools.

    Science.gov (United States)

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
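
    Comparisons like the MySQL-versus-HBase runtimes above rest on timing a fixed query workload before and after a configuration change. A sketch of that measurement loop, using Python's sqlite3 module purely as a stand-in for any PEP 249 driver (e.g. a MySQL connector):

        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE genes (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO genes (name) VALUES (?)",
                         [(f"g{i}",) for i in range(100_000)])

        def time_workload(conn, repeats=50):
            start = time.perf_counter()
            for _ in range(repeats):
                conn.execute(
                    "SELECT COUNT(*) FROM genes WHERE name LIKE 'g1%'").fetchone()
            return time.perf_counter() - start

        print(f"workload time: {time_workload(conn):.3f}s")  # rerun per config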

  14. Ontology-Based Device Descriptions and Device Repository for Building Automation Devices

    Directory of Open Access Journals (Sweden)

    Dibowski Henrik

    2011-01-01

    Device descriptions play an important role in the design and commissioning of modern building automation systems and help reduce design time and costs. However, all established device descriptions are specialized for certain purposes and suffer from several weaknesses. This hinders further design automation, which is strongly needed for increasingly complex building automation systems. To overcome these problems, this paper presents novel Ontology-based Device Descriptions (ODDs) along with a layered ontology architecture, a specific ontology view approach with virtual properties, a generic access interface, a triple store-based database backend, and a generic search mask GUI with an underlying query generation algorithm. It enables a formal, unified, and extensible specification of building automation devices, ensures their comparability, and facilitates computer-enabled retrieval, selection, and interoperability evaluation, which is essential for an automated design. The scalability of the approach to several ten thousand devices is demonstrated.

  15. The AUDANA algorithm for automated protein 3D structure determination from NMR NOE data

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States); Petit, Chad M. [University of Alabama at Birmingham, Department of Biochemistry and Molecular Genetics (United States); Cornilescu, Gabriel; Stark, Jaime L.; Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States)

    2016-06-15

    We introduce AUDANA (Automated Database-Assisted NOE Assignment), an algorithm for determining three-dimensional structures of proteins from NMR data that automates the assignment of 3D-NOE spectra, generates distance constraints, and conducts iterative high temperature molecular dynamics and simulated annealing. The protein sequence, chemical shift assignments, and NOE spectra are the only required inputs. Distance constraints generated automatically from ambiguously assigned NOE peaks are validated during the structure calculation against information from an enlarged version of the freely available PACSY database that incorporates information on protein structures deposited in the Protein Data Bank (PDB). This approach yields robust sets of distance constraints and 3D structures. We evaluated the performance of AUDANA with input data for 14 proteins ranging in size from 6 to 25 kDa that had 27–98 % sequence identity to proteins in the database. In all cases, the automatically calculated 3D structures passed stringent validation tests. Structures were determined with and without database support. In 9/14 cases, database support improved the agreement with manually determined structures in the PDB and in 11/14 cases, database support lowered the r.m.s.d. of the family of 20 structural models.

  16. The AUDANA algorithm for automated protein 3D structure determination from NMR NOE data

    International Nuclear Information System (INIS)

    Lee, Woonghee; Petit, Chad M.; Cornilescu, Gabriel; Stark, Jaime L.; Markley, John L.

    2016-01-01

    We introduce AUDANA (Automated Database-Assisted NOE Assignment), an algorithm for determining three-dimensional structures of proteins from NMR data that automates the assignment of 3D-NOE spectra, generates distance constraints, and conducts iterative high temperature molecular dynamics and simulated annealing. The protein sequence, chemical shift assignments, and NOE spectra are the only required inputs. Distance constraints generated automatically from ambiguously assigned NOE peaks are validated during the structure calculation against information from an enlarged version of the freely available PACSY database that incorporates information on protein structures deposited in the Protein Data Bank (PDB). This approach yields robust sets of distance constraints and 3D structures. We evaluated the performance of AUDANA with input data for 14 proteins ranging in size from 6 to 25 kDa that had 27–98 % sequence identity to proteins in the database. In all cases, the automatically calculated 3D structures passed stringent validation tests. Structures were determined with and without database support. In 9/14 cases, database support improved the agreement with manually determined structures in the PDB and in 11/14 cases, database support lowered the r.m.s.d. of the family of 20 structural models.

  17. Will the future of knowledge work automation transform personalized medicine?

    Science.gov (United States)

    Naik, Gauri; Bhide, Sanika S

    2014-09-01

    Today, we live in a world of 'information overload' which demands a high level of knowledge-based work. However, advances in computer hardware and software have opened possibilities to automate 'routine cognitive tasks' for knowledge processing. Engineering intelligent software systems that can process large data sets using unstructured commands and subtle judgments, and that have the ability to learn 'on the fly', is a significant step towards automation of knowledge work. The applications of this technology for high throughput genomic analysis, database updating, reporting clinically significant variants, and diagnostic imaging purposes are explored using case studies.

  18. Validation and application of a physics database for fast reactor fuel cycle analysis

    International Nuclear Information System (INIS)

    McKnight, R.D.; Stillman, J.A.; Toppel, B.J.; Khalil, H.S.

    1994-01-01

    An effort has been made to automate the execution of fast reactor fuel cycle analysis, using EBR-II as a demonstration vehicle, and to validate the analysis results for application to the IFR closed fuel cycle demonstration at EBR-II and its fuel cycle facility. This effort has included: (1) the application of the standard ANL depletion codes to perform core-follow analyses for an extensive series of EBR-II runs, (2) incorporation of the EBR-II data into a physics database, (3) development and verification of software to update, maintain and verify the database files, (4) development and validation of fuel cycle models and methodology, (5) development and verification of software which utilizes this physics database to automate the application of the ANL depletion codes, methods and models to perform the core-follow analysis, and (6) validation studies of the ANL depletion codes and of their application in support of anticipated near-term operations in EBR-II and the Fuel Cycle Facility. Results of the validation tests indicate the physics database and associated analysis codes and procedures are adequate to predict required quantities in support of early phases of FCF operations

  19. Automated Stock Control System for Bookshops in Tertiary Institutions

    African Journals Online (AJOL)

    The recent need for an automated stock control system for bookshops in tertiary institutions was generated by the unequal availability of books and a stiff scarcity of books in some areas while in ... This approach provides faster response times for users because the database is local to each business unit within the organization.

  20. Interactive searching of facial image databases

    Science.gov (United States)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires a manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
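
    The genetic search idea is compact enough to sketch: candidate faces are feature vectors, the witness's similarity ratings act as the fitness function, and each generation breeds from the highest-rated candidates. In the toy version below a fixed target vector stands in for the witness; everything here is illustrative rather than the FACES implementation:

        import random

        random.seed(1)
        TARGET = [0.2, 0.8, 0.5, 0.1]  # hidden "target face" (witness stand-in)

        def fitness(face):
            return -sum((a - b) ** 2 for a, b in zip(face, TARGET))

        pop = [[random.random() for _ in TARGET] for _ in range(20)]
        for _ in range(50):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:6]  # the candidates the witness rates most similar
            children = []
            while len(parents) + len(children) < 20:
                a, b = random.sample(parents, 2)
                child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
                i = random.randrange(len(child))
                child[i] += random.gauss(0, 0.1)               # small mutation
                children.append(child)
            pop = parents + children  # elitism keeps the best-rated candidates

        print([round(x, 2) for x in max(pop, key=fitness)])  # approaches TARGET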

  1. A UML/OCL framework for design of mediated data federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    This paper describes a general semantic framework for precise specification of so-called mediating systems; such systems provide for tight coupling on a global level of a collection of heterogeneous component databases to a federated database. A mediating system maps in a uniform and systematic

  2. Data Preparation Process for the Buildings Performance Database

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Dunn, Laurel; Mercado, Andrea; Brown, Richard E.; Mathew, Paul

    2014-06-30

    The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. The data preparation process takes the most effort and time; therefore, most of the cleansing process has been automated. The process also needs to adapt as more data is contributed to the BPD and as building technologies change over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
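
    The three stages named above (mapping, cleansing, deriving) can be pictured with a small pandas sketch; the column names are BEDES-flavoured but hypothetical, and the range rules are invented examples of the "rules of thumb" the abstract mentions:

        import pandas as pd

        raw = pd.DataFrame({"sqft": [12000, 250, 54000],
                            "elec_kwh": [310000, 9000, -1],
                            "gas_therms": [4200, 120, 8800]})

        # 1) Map source columns onto a common schema.
        mapped = raw.rename(columns={"sqft": "gross_floor_area",
                                     "elec_kwh": "electricity_use",
                                     "gas_therms": "natural_gas_use"})

        # 2) Cleanse: plausible floor area, non-negative energy use.
        clean = mapped[mapped.gross_floor_area.between(500, 5_000_000)
                       & (mapped.electricity_use >= 0)].copy()

        # 3) Derive a site-energy total in kBtu (1 kWh = 3.412 kBtu; 1 therm = 100 kBtu).
        clean["site_energy_kbtu"] = (clean.electricity_use * 3.412
                                     + clean.natural_gas_use * 100)
        print(clean)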

  3. Reproducibility of Automated Voice Range Profiles, a Systematic Literature Review

    DEFF Research Database (Denmark)

    Printz, Trine; Rosenberg, Tine; Godballe, Christian

    2018-01-01

    Our objective was to identify and review the existing literature on test-retest accuracy of the automated voice range profile assessment. Study design: Systematic review. Data sources: PubMed, Scopus, Cochrane Library, ComDisDome, Embase, and CINAHL (EBSCO). Methods: We conducted a systematic literature search of six databases from 1983 to 2016. The following keywords were used: phonetogram, voice range profile, and acoustic voice analysis...

  4. High-throughput ab-initio dilute solute diffusion database.

    Science.gov (United States)

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-19

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.

  5. Optimizing the balance between task automation and human manual control in simulated submarine track management.

    Science.gov (United States)

    Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne

    2017-09-01

    Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged.

  6. Automation of Tabular Application Formation

    Directory of Open Access Journals (Sweden)

    S. V. Zykin

    2013-01-01

    The paper considers automation problems in forming the interface between a table and a relational database. The task description is formalized, and the existing approaches to forming data representations are described using widely used CASE tools as an example. The definition of an intermediate data representation, the "join table", is offered; it is used to maintain the correctness of data representation formation and is also necessary for direct and inverse data transformations. On the basis of the lossless join property and realized dependencies, the concept and method of forming the application's context and restrictions are introduced. This material is then used to construct an inverse data transformation from the tabular representation into a relational one. On the basis of the properties of relationships on a database scheme, a partial order on the relations is established, and the restriction to acyclic database schemes is introduced. These results are then used in analyzing the principles of inverse data transformation, and the basic details of such a transformation algorithm are considered.
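
    The lossless-join property the construction relies on can be checked directly for a concrete relation: project onto the component schemas, natural-join the projections, and compare with the original. A small pandas sketch with made-up data:

        import pandas as pd

        r = pd.DataFrame({"emp": ["a", "b"], "dept": ["x", "x"], "mgr": ["m", "m"]})

        r1 = r[["emp", "dept"]].drop_duplicates()
        r2 = r[["dept", "mgr"]].drop_duplicates()
        rejoined = r1.merge(r2, on="dept")  # natural join on the shared column

        canon = lambda df: df.sort_values(list(r.columns)).reset_index(drop=True)
        print("lossless" if canon(rejoined).equals(canon(r))
              else "spurious tuples introduced")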

  7. Reproducibility of Automated Voice Range Profiles, a Systematic Literature Review

    DEFF Research Database (Denmark)

    Printz, Trine; Rosenberg, Tine; Godballe, Christian

    2018-01-01

    Objective: Reliable voice range profiles are of great importance when measuring effects and side effects from surgery affecting voice capacity. Automated recording systems are increasingly used, but the reproducibility of results is uncertain. Our objective was to identify and review the existing literature on test-retest accuracy of the automated voice range profile assessment. Study design: Systematic review. Data sources: PubMed, Scopus, Cochrane Library, ComDisDome, Embase, and CINAHL (EBSCO). Methods: We conducted a systematic literature search of six databases from 1983 to 2016. The following keywords were used: phonetogram, voice range profile, and acoustic voice analysis. Inclusion criteria were automated recording procedure, healthy voices, and no intervention between test and retest. Test-retest values concerning fundamental frequency and voice intensity were reviewed. Results: Of 483...

  8. Integration of TGS and CTEN assays using the CTENFIT analysis and databasing program

    International Nuclear Information System (INIS)

    Estep, R.

    2000-01-01

    The CTENFIT program, written for Windows 9x/NT in C++, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates that with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is reflected in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplify record-keeping tasks.

  9. Database Security for an Integrated Solution to Automate Sales Processes in Banking

    OpenAIRE

    Alexandra Maria Ioana FLOREA

    2013-01-01

    In order to maintain a competitive edge in a very active banking market, a company requires the implementation of a web-based solution to standardize, optimize, and manage the flow of sales/pre-sales and to generate new leads. This article presents the realization of a development framework for software interoperability in banking financial institutions and an integrated solution for achieving sales process automation in banking. The paper focuses on presenting the requirements for ...

  10. Will the future of knowledge work automation transform personalized medicine?

    Directory of Open Access Journals (Sweden)

    Gauri Naik

    2014-09-01

    Today, we live in a world of 'information overload' which demands a high level of knowledge-based work. However, advances in computer hardware and software have opened possibilities to automate 'routine cognitive tasks' for knowledge processing. Engineering intelligent software systems that can process large data sets using unstructured commands and subtle judgments, and that have the ability to learn 'on the fly', is a significant step towards automation of knowledge work. The applications of this technology for high throughput genomic analysis, database updating, reporting clinically significant variants, and diagnostic imaging purposes are explored using case studies.

  11. Automated detection of microcalcification clusters in mammograms

    Science.gov (United States)

    Karale, Vikrant A.; Mukhopadhyay, Sudipta; Singh, Tulika; Khandelwal, Niranjan; Sadhu, Anup

    2017-03-01

    Mammography is the most efficient modality for detection of breast cancer at an early stage. Microcalcifications are tiny bright spots in mammograms and can often be missed by the radiologist during diagnosis. The presence of microcalcification clusters in mammograms can act as an early sign of breast cancer. This paper presents a completely automated computer-aided detection (CAD) system for detection of microcalcification clusters in mammograms. Unsharp masking is used as a preprocessing step, which enhances the contrast between microcalcifications and the background. The preprocessed image is thresholded, and various shape- and intensity-based features are extracted. A support vector machine (SVM) classifier is used to reduce the false positives while preserving the true microcalcification clusters. The proposed technique is applied on two different databases, i.e., the DDSM database and a private database. The proposed technique shows good sensitivity with moderate false positives (FPs) per image on both databases.
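
    The preprocessing chain named above (unsharp masking, then thresholding) is straightforward to sketch with NumPy/SciPy; the synthetic image, the sharpening weight, and the threshold are all illustrative stand-ins for a real mammogram pipeline:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        image = rng.normal(0.3, 0.05, size=(256, 256))
        image[100:103, 120:123] += 0.4  # a tiny bright "calcification"

        blurred = gaussian_filter(image, sigma=3)
        unsharp = image + 1.5 * (image - blurred)  # boost fine, bright detail

        candidates = unsharp > unsharp.mean() + 4 * unsharp.std()
        print("candidate pixels:", int(candidates.sum()))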

  12. Development of a GIS Snowstorm Database

    Science.gov (United States)

    Squires, M. F.

    2010-12-01

    This paper describes the development of a GIS Snowstorm Database (GSDB) at NOAA’s National Climatic Data Center. The snowstorm database is a collection of GIS layers and tabular information for 471 snowstorms between 1900 and 2010. Each snowstorm has undergone automated and manual quality control. The beginning and ending date of each snowstorm is specified. The original purpose of this data was to serve as input for NCDC’s new Regional Snowfall Impact Scale (ReSIS). However, this data is being preserved and used to investigate the impacts of snowstorms on society. GSDB is used to summarize the impact of snowstorms on transportation (interstates) and various classes of facilities (roads, schools, hospitals, etc.). GSDB can also be linked to other sources of impacts such as insurance loss information and Storm Data. Thus the snowstorm database is suited for many different types of users including the general public, decision makers, and researchers. This paper summarizes quality control issues associated with using snowfall data, methods used to identify the starting and ending dates of a storm, and examples of the tables that combine snowfall and societal data.

  13. Software for automated tracking of open items at NRC

    International Nuclear Information System (INIS)

    DeWispelare, A.R.; Mackin, P.C.; Johnson, R.L.

    1995-01-01

    The Open Item Tracking System (OITS) was developed in response to the Nuclear Regulatory Commission (NRC) need for a reliable, easy-to-use automated database system to track all open (awaiting resolution) items related to regulatory, institutional, and technical uncertainties for the Department of Energy's (DOE's) high-level waste (HLW) disposal program. OITS was integrated with the Regulatory Program Database (RPD) Version 1.1, resulting in the RPD/OITS Version 2.0 system. RPD/OITS is a network-based system with client-server architecture and a graphical user interface. This paper outlines the system and the results of its implementation.

  14. The immune epitope database: a historical retrospective of the first decade.

    Science.gov (United States)

    Salimi, Nima; Fleri, Ward; Peters, Bjoern; Sette, Alessandro

    2012-10-01

    As the amount of biomedical information available in the literature continues to increase, databases that aggregate this information continue to grow in importance and scope. The population of databases can occur either through fully automated text mining approaches or through manual curation by human subject experts. We here report our experiences in populating the National Institute of Allergy and Infectious Diseases sponsored Immune Epitope Database and Analysis Resource (IEDB, http://iedb.org), which was created in 2003, and as of 2012 captures the epitope information from approximately 99% of all papers published to date that describe immune epitopes (with the exception of cancer and HIV data). This was achieved using a hybrid model based on automated document categorization and extensive human expert involvement. This task required automated scanning of over 22 million PubMed abstracts followed by classification and curation of over 13 000 references, including over 7000 infectious disease-related manuscripts, over 1000 allergy-related manuscripts, roughly 4000 related to autoimmunity, and 1000 transplant/alloantigen-related manuscripts. The IEDB curation involves an unprecedented level of detail, capturing for each paper the actual experiments performed for each different epitope structure. Key to enabling this process was the extensive use of ontologies to ensure rigorous and consistent data representation as well as interoperability with other bioinformatics resources, including the Protein Data Bank, Chemical Entities of Biological Interest, and the NIAID Bioinformatics Resource Centers. A growing fraction of the IEDB data derives from direct submissions by research groups engaged in epitope discovery, and is being facilitated by the implementation of novel data submission tools. The present explosion of information contained in biological databases demands effective query and display capabilities to optimize the user experience. Accordingly, the

  15. Smartnotebook: A semi-automated approach to protein sequential NMR resonance assignments

    International Nuclear Information System (INIS)

    Slupsky, Carolyn M.; Boyko, Robert F.; Booth, Valerie K.; Sykes, Brian D.

    2003-01-01

    Complete and accurate NMR spectral assignment is a prerequisite for high-throughput automated structure determination of biological macromolecules. However, completely automated assignment procedures generally encounter difficulties for all but the most ideal data sets. Sources of these problems include difficulty in resolving correlations in crowded spectral regions, as well as complications arising from dynamics, such as weak or missing peaks, or atoms exhibiting more than one peak due to exchange phenomena. Smartnotebook is a semi-automated assignment software package designed to combine the best features of the automated and manual approaches. The software finds and displays potential connections between residues, while the spectroscopist makes decisions on which connection is correct, allowing rapid and robust assignment. In addition, smartnotebook helps the user fit chains of connected residues to the primary sequence of the protein by comparing the experimentally determined chemical shifts with expected shifts derived from a chemical shift database, while providing bookkeeping throughout the assignment procedure

  16. Automation of Personnel Certification of Roadbuilding Departments and Enterprises

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    The article suggests science-based solutions to improve the efficiency of personnel certification in roadbuilding departments and enterprises by developing an integrated learning environment based on an automated distance-learning system with open-source software and methodological support, tailored to the professional standards. The personnel certification procedure of roadbuilding departments and enterprises is analyzed for optimization and automation. Optimization solutions are proposed and implemented for all stages of preparing and conducting conformance testing: creating test tasks, batch import of test tasks into a database of test tasks, and an adaptive algorithm for presenting the test subjects. The developed technique of adaptive presentation may perform various functions, depending on the settings of the algorithm.

  17. Data-base tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    The authors use a commercial data-base software package to create several data-base products that enhance the ability of experimental physicists to analyze data from the TMX-U experiment. This software resides on a DEC-20 computer in M-Division's user service center (USC), where data can be analyzed separately from the main acquisition computers. When these data-base tools are combined with interactive data analysis programs, physicists can perform automated (batch-style) processing or interactive data analysis on the computers in the USC or on the supercomputers of the NMFECC, in addition to the normal processing done on the acquisition system. One data-base tool provides highly reduced data for searching and correlation analysis of several diagnostic signals for a single shot or many shots. A second data-base tool provides retrieval and storage of unreduced data for detailed analysis of one or more diagnostic signals. The authors report how these data-base tools form the core of an evolving off-line data-analysis environment on the USC computers.

  18. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal...... server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns...... communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers...
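
    The inference step can be pictured as learning a field-to-type mapping from sample responses, unioning the types seen across samples. The sketch below shows only that core idea; the real tool's interface-description language is richer than this:

        import json

        def infer_interface(samples):
            schema = {}
            for raw in samples:
                for key, value in json.loads(raw).items():
                    schema.setdefault(key, set()).add(type(value).__name__)
            return {k: sorted(v) for k, v in schema.items()}

        samples = ['{"id": 1, "name": "a"}', '{"id": 2, "name": null}']
        print(infer_interface(samples))  # {'id': ['int'], 'name': ['NoneType', 'str']}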

  19. The European Fusion Material properties database

    Energy Technology Data Exchange (ETDEWEB)

    Karditsas, P.J. [UKAEA Fusion, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)]. E-mail: panos.karditsas@ukaea.org.uk; Lloyd, G. [Tessella Support Services plc, 3 Vineyard Chambers, Abingdon OX14 3PX (United Kingdom); Walters, M. [Tessella Support Services plc, 3 Vineyard Chambers, Abingdon OX14 3PX (United Kingdom); Peacock, A. [EFDA Close Support Unit, Garching D-85748 (Germany)

    2006-02-15

    Materials research represents a significant part of the European and world effort on fusion research. A European Fusion Materials web-based relational database is being developed to collect, expand and preserve for the future the data produced in support of NET, DEMO and ITER. The database allows understanding of material properties and their critical parameters for fusion environments. The system uses J2EE technologies and the PostgreSQL relational database, and its flexibility ensures that new methods to automate material design for specific applications can be easily implemented. It runs on a web server and allows users access via the Internet using their preferred web browser. The database allows users to store, browse and search raw tests, material properties and qualified data, and electronic reports. For data security, users are issued with individual accounts, and the origin of all requests is checked against a list of trusted sites. Different user accounts have access to different datasets to ensure the data is not shared unintentionally. The system allows several levels of data checking/cleaning and validation. Data insertion is either online or through downloaded templates, and validation is through different expert groups, which can apply different criteria to the data.

  20. Mediation of information and educational mediation: conceptual discussions

    Directory of Open Access Journals (Sweden)

    Helena Célia de Souza Sacerdote

    2016-04-01

    Introduction: This is a systematization of theoretical and methodological contributions related to the concepts of mediation of information and pedagogical mediation in the literature. Objective: To understand the possible intersection of information science and online education with regard to these concepts, checking whether both can be considered analogous in essence and practice. Methodology: Literature review based on scientific productions selected by searching the SciELO.ORG and EBSCO Host databases, the CAPES/MEC portal, and Google Scholar. Results: The most cited concepts in information science and education were those of Almeida Junior (2009) and Masetto (2013), respectively. Conclusion: It is observed that the concept of mediation can move interchangeably between both areas, because the evidence found in the productions of the last five years indicates that the concept of mediation of information seems to have found its basis in education (educational psychology).

  1. Experience in the development of an automated data retrieval system in radiology

    International Nuclear Information System (INIS)

    Zhakov, I.G.; Kratenok, V.E.; Gorel'ko, K.P.; Leoshkevich, N.V.

    1988-01-01

    The first version of an automated data retrieval system in radiology, radiobiology and oncology has been developed at the Research Institute of Oncology and Medical Radiology of the Ministry of Health, Byelorussian Soviet Socialist Republic. The system is realized on the basis of a package of applied programs of an automated document processing system, computerized databases of the All-Union Scientific and Technical Information Institute, and the ES-1022 computer. The system functions in the following modes: 1 - selective propagation of information on 194 fixed requests of users; 2 - personal search in the dialogue mode; 3 - updating of data files. The use of the automated system made it possible to enhance the effectiveness and quality of document search as compared to conventional forms of operation.

  2. Time-Critical Database Conditions Data-Handling for the CMS Experiment

    CERN Document Server

    De Gruttola, M; Innocente, V; Pierro, A

    2011-01-01

    Automatic, synchronous and reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We describe here the system put in place in the CMS experiment to automate the processes that populate the central database and make condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users into a dedicated service, which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database, and are hence immediately accessible offline worldwide. This mechanism was used intensively during the 2008 and 2009 operation with cosmic ray challenges and first LHC collision data, and many improvements have been made since. The experience of these first years of operation is discussed in detail.

  3. Database Security for an Integrated Solution to Automate Sales Processes in Banking

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2013-05-01

    In order to maintain a competitive edge in a very active banking market, companies request the implementation of web-based solutions to standardize, optimize and manage the flow of sales and pre-sales and to generate new leads. This article presents the realization of a development framework for software interoperability in banking financial institutions and an integrated solution for achieving sales process automation in banking. The paper focuses on the requirements for security and confidentiality of stored data, and on the techniques and procedures identified to implement these requirements.

  4. Data management and database framework for the MICE experiment

    Science.gov (United States)

    Martyniak, J.; Nebrensky, J. J.; Rajaram, D.; MICE Collaboration

    2017-10-01

    The international Muon Ionization Cooling Experiment (MICE) currently operating at the Rutherford Appleton Laboratory in the UK, is designed to demonstrate the principle of muon ionization cooling for application to a future Neutrino Factory or Muon Collider. We present the status of the framework for the movement and curation of both raw and reconstructed data. A raw data-mover has been designed to safely upload data files onto permanent tape storage as soon as they have been written out. The process has been automated, and checks have been built in to ensure the integrity of data at every stage of the transfer. The data processing framework has been recently redesigned in order to provide fast turnaround of reconstructed data for analysis. The automated reconstruction is performed on a dedicated machine in the MICE control room and any reprocessing is done at Tier-2 Grid sites. In conjunction with this redesign, a new reconstructed-data-mover has been designed and implemented. We also review the implementation of a robust database system that has been designed for MICE. The processing of data, whether raw or Monte Carlo, requires accurate knowledge of the experimental conditions. MICE has several complex elements ranging from beamline magnets to particle identification detectors to superconducting magnets. A Configuration Database, which contains information about the experimental conditions (magnet currents, absorber material, detector calibrations, etc.) at any given time has been developed to ensure accurate and reproducible simulation and reconstruction. A fully replicated, hot-standby database system has been implemented with a firewall-protected read-write master running in the control room, and a read-only slave running at a different location. The actual database is hidden from end users by a Web Service layer, which provides platform and programming language-independent access to the data.
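
    As a sketch of the integrity-checked transfer described above, here is a minimal Python rendering; the upload_to_tape and remote_md5 helpers are hypothetical stand-ins, and the actual MICE mover is more elaborate.

        import hashlib
        import os
        import time

        def md5sum(path, chunk=1 << 20):
            """Compute an MD5 checksum incrementally so large raw files fit in memory."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        def move_raw_file(path, upload_to_tape, remote_md5, retries=3):
            """Upload a closed raw-data file and verify its integrity at every stage."""
            local = md5sum(path)
            for attempt in range(retries):
                upload_to_tape(path)                    # hypothetical transfer call
                if remote_md5(os.path.basename(path)) == local:
                    return True                         # checksums agree: safe to register
                time.sleep(60 * (attempt + 1))          # back off, then retransfer
            raise IOError("integrity check failed for %s" % path)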

  5. Fully automated data collection and processing system on macromolecular crystallography beamlines at the PF

    International Nuclear Information System (INIS)

    Yamada, Yusuke; Hiraki, Masahiko; Matsugaki, Naohiro; Chavas, Leonard M.G.; Igarashi, Noriyuki; Wakatsuki, Soichi

    2012-01-01

    A fully automated data collection and processing system has been developed on the macromolecular crystallography beamlines at the Photon Factory. In this system, sample exchange, centering and data collection are performed sequentially for all samples stored in the sample exchange system at a beamline, without any manual operations. Processing of the collected data sets is also performed automatically. The results are stored in the database system, and users can monitor the progress and results of automated experiments via a web browser. (author)

  6. A systematic literature review of automated clinical coding and classification systems.

    Science.gov (United States)

    Stanfill, Mary H; Williams, Margaret; Fenton, Susan H; Jenders, Robert A; Hersh, William R

    2010-01-01

    Clinical coding and classification processes transform natural language descriptions in clinical text into data that can subsequently be used for clinical care, research, and other purposes. This systematic literature review examined studies that evaluated all types of automated coding and classification systems to determine the performance of such systems. Studies indexed in Medline or other relevant databases prior to March 2009 were considered. The 113 studies included in this review show that automated tools exist for a variety of coding and classification purposes, focus on various healthcare specialties, and handle a wide variety of clinical document types. Automated coding and classification systems themselves are not generalizable, nor are the results of the studies evaluating them. Published research shows these systems hold promise, but these data must be considered in context, with performance relative to the complexity of the task and the desired outcome.

  7. Altering users' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that, after exposure to a task with automation assistance, user acceptance of high(er) levels of automation (the 'tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. the 'tipping point') is constant or can be manipulated. The results revealed that, after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  8. A database on electric vehicle use in Sweden. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Fridstrand, Niklas [Lund Univ. (Sweden). Dept. of Industrial Electrical Engineering and Automation

    2000-05-01

    The Department of Industrial Electrical Engineering and Automation (IEA) at the Lund Institute of Technology (LTH), has taken responsibility for developing and maintaining a database on electric and hybrid road vehicles in Sweden. The Swedish Transport and Communications Research Board, (KFB) initiated the development of this database. Information is collected from three major cities in Sweden: Malmoe, Gothenburg and Stockholm, as well as smaller cities such as Skellefteaa and Haernoesand in northern Sweden. This final report summarises the experience gained during the development and maintenance of the database from February 1996 to December 1999. Our aim was to construct a well-functioning database for the evaluation of electric and hybrid road vehicles in Sweden. The database contains detailed information on several years' use of electric vehicles (EVs) in Sweden (for example, 220 million driving records). Two data acquisition systems were used, one less and one more complex with respect to the number of quantities logged. Unfortunately, data collection was not complete, due to malfunctioning of the more complex system, and due to human factors for the less complex system.

  9. Translation from the collaborative OSM database to cartography

    Science.gov (United States)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is in development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and web maps can be displayed. Two prototypes are described in this article: a vector-tile web map and a mapping method to produce paper maps on a regional scale. The vector-tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn: drawing automation and data management are part of the map creation, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.

  10. ZeBase: an open-source relational database for zebrafish laboratories.

    Science.gov (United States)

    Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai

    2012-03-01

    ZeBase is an open-source relational database for zebrafish inventory. It is designed for recording the genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built in for easy access to the information related to any fish. Further information about the database and an example implementation can be found at http://zebase.bio.purdue.edu.

  11. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    Science.gov (United States)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality-related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at once an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of precipitation amount, temperature, wind, relative humidity, pressure, and global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and cross-validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) system, an open-source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts interacting directly with the MySQL database. Users and network managers can access the database through a set of web-based PHP applications.
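
    To illustrate how several independent checks can feed a single quality decision, here is a hedged Python sketch of an ADQC-style flagging routine; the test thresholds and the all-tests-must-pass rule are illustrative assumptions, not ARPA Lombardia's actual criteria.

        def range_test(value, lo, hi):
            """Plausible-value check against climatological limits."""
            return lo <= value <= hi

        def step_test(value, previous, max_step):
            """Temporal consistency: reject implausible jumps between hours."""
            return previous is None or abs(value - previous) <= max_step

        def spatial_test(value, analysis, tolerance):
            """Spatial consistency: compare with an interpolated analysis value."""
            return abs(value - analysis) <= tolerance

        def adqc_flag(value, previous, analysis):
            """Combine independent tests; an observation passes if all agree."""
            checks = [
                range_test(value, -30.0, 45.0),   # e.g. air temperature in deg C
                step_test(value, previous, 10.0),
                spatial_test(value, analysis, 5.0),
            ]
            return "good" if all(checks) else "suspect"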

  12. Automation and semantics: the CombeChem experience

    OpenAIRE

    Frey, Jeremy G.

    2004-01-01

    Some of the experiences of the CombeChem e-Science project in relation to both automation and the need for semantics in combining modern computer science techniques and chemistry are discussed. In particular the aspects of the smart laboratory, large scale data handling and the way this impacts on the necessary database technology are discussed. In addition some of the ways in which the grid can enable greater user interaction with services such as the National Crystallography Service and imp...

  13. Design automation of switching mode high voltage power supply for nuclear instruments

    International Nuclear Information System (INIS)

    El-araby, S.M.S.

    1999-01-01

    This paper presents an automation procedure for the design of switching-mode high-voltage power supplies, using PC programming facilities. The procedure permits the selection of a ready-made or custom-designed ferrite transformer; this selection can be made according to the designer's preference, as the program includes complete information on ready-made ferrite transformers in a comprehensive database. The procedure is based on a suggested template circuit. The Micro-Cap IV simulation package is used to verify the desired high-voltage power supply design, and simulation results agree quite well with the procedure's results. Design aspects and the development needed to increase automation capabilities are also discussed.

  14. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  15. DMPD: Complement-mediated phagocytosis--the role of Syk. [Dynamic Macrophage Pathway CSML Database]

    Lifescience Database Archive (English)

    Complement-mediated phagocytosis--the role of Syk. Tohyama Y, Yamamura H. IUBMB Life. 2006 May-Jun;58(5-6):304-8. PubMed ID: 16754322.

  16. Automated surveillance system for hospital-acquired urinary tract infections in Denmark

    DEFF Research Database (Denmark)

    Condell, Orla; Gubbels, Sophie; Nielsen, J

    2016-01-01

    BACKGROUND: The Danish Hospital-Acquired Infections Database (HAIBA) is an automated surveillance system using hospital administrative, microbiological, and antibiotic medication data. AIM: To define and evaluate the case definition for hospital-acquired urinary tract infection (HA-UTI) and to describe surveillance data from 2010 to 2014. METHODS: The HA-UTI algorithm defined a laboratory-diagnosed UTI as a urine culture positive for no more than two micro-organisms with at least one at ≥10^4 cfu/mL, and a probable UTI as a negative urine culture with a relevant diagnosis code or antibiotic treatment. A UTI was considered hospital-acquired if the urine sample was collected ≥48h after admission, and HA-UTI incidence was calculated per 10,000 risk-days. For validation, prevalence was calculated for each day and compared to point prevalence survey (PPS) data. FINDINGS: HAIBA...
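
    The case algorithm lends itself to a compact sketch; the record fields and helper names below are invented for illustration and mirror only the published thresholds (at most two organisms, ≥10^4 cfu/mL, sample collected ≥48 h after admission).

        from datetime import timedelta

        def classify_uti(culture, admission_time, sample_time,
                         has_uti_diagnosis_code, on_uti_antibiotics):
            """culture: list of (organism, cfu_per_ml) pairs; an empty list
            means a negative culture. Field names are illustrative only."""
            if sample_time - admission_time < timedelta(hours=48):
                return "not hospital-acquired"
            if culture and len(culture) <= 2 and any(
                    cfu >= 1e4 for _, cfu in culture):
                return "laboratory-diagnosed HA-UTI"
            if not culture and (has_uti_diagnosis_code or on_uti_antibiotics):
                return "probable HA-UTI"
            return "no HA-UTI"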

  17. Positive predictive value of a case definition for diabetes mellitus using automated administrative health data in children and youth exposed to antipsychotic drugs or control medications: a Tennessee Medicaid study.

    Science.gov (United States)

    Bobo, William V; Cooper, William O; Stein, C Michael; Olfson, Mark; Mounsey, Jackie; Daugherty, James; Ray, Wayne A

    2012-08-24

    We developed and validated an automated database case definition for diabetes in children and youth to facilitate pharmacoepidemiologic investigations of medications and the risk of diabetes. The present study was part of an in-progress retrospective cohort study of antipsychotics and diabetes in Tennessee Medicaid enrollees aged 6-24 years. Diabetes was identified from diabetes-related medical care encounters: hospitalizations, outpatient visits, and filled prescriptions. The definition required either a primary inpatient diagnosis or at least two other encounters of different types, most commonly an outpatient diagnosis with a prescription. Type 1 diabetes was defined by insulin prescriptions with at most one oral hypoglycemic prescription; other cases were considered type 2 diabetes. The definition was validated for cohort members in the 15 county region geographically proximate to the investigators. Medical records were reviewed and adjudicated for cases that met the automated database definition as well as for a sample of persons with other diabetes-related medical care encounters. The study included 64 cases that met the automated database definition. Records were adjudicated for 46 (71.9%), of which 41 (89.1%) met clinical criteria for newly diagnosed diabetes. The positive predictive value for type 1 diabetes was 80.0%. For type 2 and unspecified diabetes combined, the positive predictive value was 83.9%. The estimated sensitivity of the definition, based on adjudication for a sample of 30 cases not meeting the automated database definition, was 64.8%. These results suggest that the automated database case definition for diabetes may be useful for pharmacoepidemiologic studies of medications and diabetes.
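
    A schematic rendering of the two rules described above (encounter-based case detection and insulin-based type assignment); the encounter representation is hypothetical, not the study's actual data model.

        def meets_diabetes_definition(encounters):
            """encounters: list of (kind, role) pairs such as
            ('inpatient', 'primary'), ('outpatient', 'diagnosis') or
            ('pharmacy', 'prescription'); names are illustrative."""
            if any(kind == "inpatient" and role == "primary"
                   for kind, role in encounters):
                return True                     # primary inpatient diagnosis suffices
            other_kinds = {kind for kind, role in encounters
                           if not (kind == "inpatient" and role == "primary")}
            return len(other_kinds) >= 2        # two encounters of different types

        def diabetes_type(insulin_fills, oral_hypoglycemic_fills):
            """Type 1: insulin with at most one oral hypoglycemic fill."""
            if insulin_fills > 0 and oral_hypoglycemic_fills <= 1:
                return "type 1"
            return "type 2"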

  18. Specdata: Automated Analysis Software for Broadband Spectra

    Science.gov (United States)

    Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.

    2017-06-01

    With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying the multi-component mixtures that might result, for example, from the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open-source, interactive tool designed to simplify and greatly accelerate spectral analysis and discovery. Our software tool combines automated and manual components that free the user from computation, while giving them considerable flexibility to assign, manipulate, interpret and export their analysis. The automated, and key, component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features and assigns them to known molecules within an in-house database (Pickett .cat files, lists of frequencies...), or those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, control is then handed over to the user, who can choose to accept, decline or add additional species. Data visualization, statistical information and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience: exporting a full report of the analysis, or a peak file in which assigned lines are removed, are among several options, and a user may also save their progress to continue at another time. Additional features of SPECdata help the user to maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit or update catalog or experiment entries.
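
    The key automated step, matching observed spectral features to catalogued rest frequencies within a tolerance, can be sketched as follows, with the catalogue represented as a simple sorted list of (frequency, species) pairs rather than SPECdata's actual schema.

        import bisect

        def assign_lines(peaks_mhz, catalog, tol_mhz=0.1):
            """Match observed peaks against a catalogue of (frequency_mhz, species)
            entries, assumed sorted by frequency; returns tentative assignments."""
            freqs = [f for f, _ in catalog]
            assignments = []
            for peak in peaks_mhz:
                i = bisect.bisect_left(freqs, peak - tol_mhz)
                candidates = []
                while i < len(freqs) and freqs[i] <= peak + tol_mhz:
                    candidates.append(catalog[i])       # catalogue lines within tolerance
                    i += 1
                assignments.append((peak, candidates))  # empty list: possibly a new species
            return assignments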

  19. AUTOMATION OF CONTROL OF THE BUSINESS PROCESS OF PUBLISHING SCIENTIFIC JOURNALS

    Directory of Open Access Journals (Sweden)

    O. Yu. Sakaliuk

    2016-09-01

    We consider the automation of the business process of publishing scientific journals. The paper describes the Coordinating Centre of Scientific Journals' Publishing of the Odessa National Academy of Food Technologies (ONAFT) and the automation of its business processes. A complex of business process models for publishing scientific journals was created, the organizational structure of the Coordinating Centre was analyzed and modeled, and process simulation was conducted with business process models in the eEPC and BPMN notations. Database design, creation of the file structure and development of the AIS interface were also carried out, and interaction with a webcam was implemented. Justifying the feasibility of the software and assessing its performance on the basis of the resulting petal (radar) chart, it is safe to say that the automated mode is much more efficient than the manual mode. The developed software will accelerate the development of ONAFT's scientific periodicals, which in turn will improve the academy's ratings at the global level and enhance its image and credibility.

  20. Bayesian dynamic mediation analysis.

    Science.gov (United States)

    Huang, Jing; Yuan, Ying

    2017-12-01

    Most existing methods for mediation analysis assume that mediation is a stationary, time-invariant process, which overlooks the inherently dynamic nature of many human psychological processes and behavioral activities. In this article, we consider mediation as a dynamic process that continuously changes over time. We propose Bayesian multilevel time-varying coefficient models to describe and estimate such dynamic mediation effects. By taking the nonparametric penalized spline approach, the proposed method is flexible and able to accommodate any shape of the relationship between time and mediation effects. Simulation studies show that the proposed method works well and faithfully reflects the true nature of the mediation process. By modeling mediation effect nonparametrically as a continuous function of time, our method provides a valuable tool to help researchers obtain a more complete understanding of the dynamic nature of the mediation process underlying psychological and behavioral phenomena. We also briefly discuss an alternative approach of using dynamic autoregressive mediation model to estimate the dynamic mediation effect. The computer code is provided to implement the proposed Bayesian dynamic mediation analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
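
    In generic notation (a sketch only, not necessarily the authors' exact parameterization), a multilevel time-varying-coefficient mediation model for a treatment X, mediator M and outcome Y can be written as

        M_i(t) = \alpha(t) X_i + e_{Mi}(t),
        Y_i(t) = \beta(t) M_i(t) + \gamma(t) X_i + e_{Yi}(t),

    with the dynamic mediation effect given by the product \alpha(t)\beta(t), and the coefficient functions \alpha(t), \beta(t), \gamma(t) represented nonparametrically by penalized splines.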

  1. Role of Database Management Systems in Selected Engineering Institutions of Andhra Pradesh: An Analytical Survey

    Directory of Open Access Journals (Sweden)

    Kutty Kumar

    2016-06-01

    This paper analyzes the function of database management systems from the perspective of librarians working in engineering institutions in Andhra Pradesh. Ninety-eight librarians from one hundred thirty engineering institutions participated in the study. The paper reveals that training by computer suppliers and software packages is the most significant mode by which librarians acquire DBMS skills; three-fourths of the librarians are postgraduate degree holders. Most colleges use database applications for automation purposes and content value. Electrical problems and untrained staff appear to be the major constraints respondents face in managing library databases.

  2. Automated collection and dissemination of ionospheric data from the digisonde network

    Directory of Open Access Journals (Sweden)

    B.W. Reinisch

    2004-01-01

    The growing demand for fast access to accurate ionospheric electron density profiles and ionospheric characteristics calls for efficient dissemination of data from the many ionosondes operating around the globe. The global digisonde network with over 70 stations takes advantage of the Internet to make many of these sounders remotely accessible for data transfer and control. Key elements of the digisonde system data management are the visualization and editing tool SAO Explorer, the digital ionogram database DIDBase, holding raw and derived digisonde data under an industrial-strength database management system, and the automated data request execution system ADRES.

  3. Improving the driver-automation interaction: an approach using automation uncertainty.

    Science.gov (United States)

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  4. Improving medical stores management through automation and effective communication.

    Science.gov (United States)

    Kumar, Ashok; Cariappa, M P; Marwaha, Vishal; Sharma, Mukti; Arora, Manu

    2016-01-01

    Medical stores management in hospitals is a tedious and time-consuming chore, with limited resources tasked for the purpose and poor penetration of information technology. The process of automation is slow paced due to various inherent factors and is being challenged by increasing inventory loads and escalating budgets for the procurement of drugs. We carried out an in-depth case study at the medical stores of a tertiary care health care facility. An iterative six-step quality improvement (QI) process was implemented based on the Plan-Do-Study-Act (PDSA) cycle, modified as required to fit the medical stores management model. The results were evaluated after six months. After the implementation of the QI process, 55 drugs of the medical store inventory which had expired from 2009 onwards were replaced with fresh stock by the suppliers, as a result of effective communication through upgraded database management. Various pending audit objections were dropped due to the streamlined documentation and processes. Inventory management improved drastically due to automation, with disposal orders being initiated four months prior to the expiry of drugs and correct demands being generated two months prior to the depletion of stocks. The monthly expense summary of drugs was now completed within ten days of the closing month. Improving communication systems within the hospital, with vendor database management and outreach to clinicians, is important. Automation of inventory management needs to be simple and user-friendly, utilizing existing hardware. Physical monitoring of stores is indispensable, especially given their scattered nature. Staff training and standardized documentation protocols are the other keystones of optimal medical store management.

  5. Exploiting automatically generated databases of traffic signs and road markings for contextual co-occurrence analysis

    Science.gov (United States)

    Hazelhoff, Lykele; Creusen, Ivo M.; Woudsma, Thomas; de With, Peter H. N.

    2015-11-01

    Combined databases of road markings and traffic signs provide a complete and full description of the present traffic legislation and instructions. Such databases contribute to efficient signage maintenance, improve navigation, and benefit autonomous driving vehicles. A system is presented for the automated creation of such combined databases, which additionally investigates the benefit of this combination for automated contextual placement analysis. This analysis involves verification of the co-occurrence of traffic signs and road markings to retrieve a list of potentially incorrectly signaled (and thus potentially unsafe) road situations. This co-occurrence verification is specifically explored for both pedestrian crossings and yield situations. Evaluations on 420 km of road have shown that individual detection of traffic signs and road markings denoting these road situations can be performed with accuracies of 98% and 85%, respectively. Combining both approaches shows that over 95% of the pedestrian crossings and give-way situations can be identified. An exploration toward additional co-occurrence analysis of signs and markings shows that inconsistently signaled situations can successfully be extracted, such that specific safety actions can be directed toward cases lacking signs or markings, while most consistently signaled situations can be omitted from this analysis.
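
    One hedged way to realize the co-occurrence verification is a nearest-neighbour match between the two detection sets; the 30 m pairing radius and the coordinate format are illustrative assumptions, not the paper's parameters.

        import math

        def near(a, b, radius_m=30.0):
            """Flat-earth distance check; adequate at the scale of a road site."""
            dx, dy = a[0] - b[0], a[1] - b[1]
            return math.hypot(dx, dy) <= radius_m

        def inconsistent_sites(sign_positions, marking_positions):
            """Return detections lacking their counterpart, i.e. candidate
            incorrectly signaled situations for manual review."""
            lonely_signs = [s for s in sign_positions
                            if not any(near(s, m) for m in marking_positions)]
            lonely_marks = [m for m in marking_positions
                            if not any(near(m, s) for s in sign_positions)]
            return lonely_signs, lonely_marks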

  6. Analysis and databasing software for integrated tomographic gamma scanner (TGS) and passive-active neutron (PAN) assay systems

    International Nuclear Information System (INIS)

    Estep, R.J.; Melton, S.G.; Buenafe, C.

    2000-01-01

    The CTEN-FIT program, written in C++ for Windows 9x/NT, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates it with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is mirrored in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator, and various record browsers and information displays simplify record-keeping tasks.

  7. The reference ballistic imaging database revisited.

    Science.gov (United States)

    De Ceuster, Jan; Dujardin, Sylvain

    2015-03-01

    A reference ballistic image database (RBID) contains images of cartridge cases fired in firearms that are in circulation: a ballistic fingerprint database. The performance of an RBID was investigated a decade ago by De Kinder et al. using IBIS® Heritage™ technology, and the results of that study were published in this journal, issue 214. Since then, technologies have evolved significantly and novel apparatus have become available on the market. The current research article investigates the efficiency of another automated ballistic imaging system, Evofinder®, using the same database as De Kinder et al. The results demonstrate a significant increase in correlation efficiency: 38% of all matches were in first position on the Evofinder correlation list, compared with IBIS® Heritage™, where only 19% were in first position. Average correlation times are comparable to those of the IBIS® Heritage™ system. While Evofinder® demonstrates a specific improvement in mutually correlating different ammunition brands, the ammunition dependence of the markings still strongly influences the correlation result, because the markings may vary considerably. As a consequence, a considerable share of potential hits (36%) was still far down the correlation lists (positions 31 and lower). The large database was used to examine the probability of finding a match as a function of correlation list verification. As an example, the RBID study on Evofinder® demonstrates that to find at least 90% of all potential matches, at least 43% of the items in the database need to be compared on screen, and this for breech face markings and firing pin impressions separately. These results, although a clear improvement on the original RBID study, indicate that the implementation of such a database should still not be considered today. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. DMPD: Negative regulation of cytoplasmic RNA-mediated antiviral signaling. [Dynamic Macrophage Pathway CSML Database]

    Lifescience Database Archive (English)

    Negative regulation of cytoplasmic RNA-mediated antiviral signaling. PubMed ID: 18703349. Authors: Komuro A, Bamm...

  11. DMPD: Fragments of extracellular matrix as mediators of inflammation. [Dynamic Macrophage Pathway CSML Database]

    Lifescience Database Archive (English)

    Fragments of extracellular matrix as mediators of inflammation. PubMed ID: 18243041. Authors: Adair-Kirk TL, Senior...

  12. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    Science.gov (United States)

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

    BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from the literature. An essential feature of these databases is continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify the import of user-provided annotation data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain-specific databases for metabolic engineering.
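
    The synonym-resolution step can be pictured as a lookup built from every known name or alternate identifier to the database's internal frame identifier; the frame IDs and synonyms below are purely hypothetical, and CycTools' actual mechanism works against the live database.

        def build_resolver(frames):
            """frames: iterable of (internal_id, list_of_synonyms) pairs.
            Returns a case-insensitive lookup from any known name to the
            database's internal identifier."""
            index = {}
            for internal_id, synonyms in frames:
                index[internal_id.lower()] = internal_id
                for name in synonyms:
                    index.setdefault(name.lower(), internal_id)
            return index

        # Hypothetical frame and synonyms, for illustration only.
        resolver = build_resolver([("FRAME-0001", ["lonD", "ALT-42"])])
        print(resolver["lond"])   # -> FRAME-0001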

  13. Full-text automated detection of surgical site infections secondary to neurosurgery in Rennes, France.

    Science.gov (United States)

    Campillo-Gimenez, Boris; Garcelon, Nicolas; Jarno, Pascal; Chapplain, Jean Marc; Cuggia, Marc

    2013-01-01

    The surveillance of surgical site infections (SSI) contributes to the management of risk in French hospitals. Manual identification of infections is costly and time-consuming and limits the promotion of preventive procedures by the dedicated teams. The introduction of alternative methods using automated detection strategies is promising for improving this surveillance. The present study describes an automated detection strategy for SSI in neurosurgery, based on textual analysis of medical reports stored in a clinical data warehouse. The method consists, firstly, of enrichment and concept extraction from full-text reports using NOMINDEX and, secondly, of text similarity measurement using a vector space model. The text-based detection was compared to the conventional strategy based on self-declaration and to automated detection using the diagnosis-related group database. The text-mining approach showed the best detection accuracy, with recall and precision of 92% and 40% respectively, and confirmed the value of reusing full-text medical reports to perform automated detection of SSI.
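
    The second stage, vector-space similarity between a new report and reports of confirmed infections, reduces to document vectors and cosine similarity. The sketch below uses TF-IDF over raw words with scikit-learn as a stand-in; the study itself worked on NOMINDEX concept extractions, and the 0.3 threshold is an arbitrary assumption.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        def rank_reports(confirmed_ssi_reports, new_reports, threshold=0.3):
            """Score each new report against known SSI cases; flag the closest."""
            vectorizer = TfidfVectorizer()
            reference = vectorizer.fit_transform(confirmed_ssi_reports)
            queries = vectorizer.transform(new_reports)
            scores = cosine_similarity(queries, reference).max(axis=1)
            return [(report, score) for report, score in zip(new_reports, scores)
                    if score >= threshold]      # candidates for manual review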

  14. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Complex problems in life science research give rise to multidisciplinary collaboration, and hence to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable at http://www.igb.uci.edu/research/research.html.)

  15. An Automated System for Garment Texture Design Class Identification

    Directory of Open Access Journals (Sweden)

    Emon Kumar Dey

    2015-09-01

    Automatic identification of garment design class might play an important role in the garments and fashion industry. Essential initial work toward this goal is found in the literature: construction of a garment database, automatic segmentation of garments from real-life images, and categorization into garment types such as shirts, jackets, tops and skirts. It is now essential to build a system that can identify the particular design (printed, striped or single colour) of a garment product, so that an automated system can recommend garment trends. In this paper we focus on this specific issue and propose two new descriptors, namely Completed CENTRIST (cCENTRIST) and Ternary CENTRIST (tCENTRIST). To test these descriptors, we used two publicly available databases. The experimental results on these databases demonstrate that both cCENTRIST and tCENTRIST achieve about 3% higher accuracy than existing state-of-the-art methods.

  16. MetReS, an Efficient Database for Genomic Applications.

    Science.gov (United States)

    Vilaplana, Jordi; Alves, Rui; Solsona, Francesc; Mateo, Jordi; Teixidó, Ivan; Pifarré, Marc

    2018-02-01

    MetReS (Metabolic Reconstruction Server) is a genomic database that is shared between two software applications that address important biological problems. Biblio-MetReS is a data-mining tool that enables the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the processes of interest and their function. The main goal of this work was to identify the areas where the performance of the MetReS database could be improved and to test whether this improvement would scale to larger datasets and more complex types of analysis. The study started with the relational database MySQL, the current database server used by the applications. We also tested the performance of an alternative data-handling framework, Apache Hadoop, which is currently used for large-scale data processing. We found that this framework is likely to greatly improve the efficiency of the MetReS applications as the dataset and processing needs increase by several orders of magnitude, as is expected to happen in the near future.

  17. Feasibility evaluation of 3 automated cellular drug screening assays on a robotic workstation.

    Science.gov (United States)

    Soikkeli, Anne; Sempio, Cristina; Kaukonen, Ann Marie; Urtti, Arto; Hirvonen, Jouni; Yliperttula, Marjo

    2010-01-01

    This study presents the implementation and optimization of three cell-based assays on a TECAN Genesis workstation, the Caspase-Glo 3/7 and sulforhodamine B (SRB) screening assays and the mechanistic Caco-2 permeability protocol, and evaluates their feasibility for automation. During implementation, the dispensing speed used to add drug solutions and the fixative trichloroacetic acid, and the aspiration speed used to remove the supernatant immediately after fixation, were optimized. Decontamination steps for cleaning the tips and pipetting tubing were also added. The automated Caspase-Glo 3/7 screen was successfully optimized with Caco-2 cells (Z' 0.7, signal-to-base ratio [S/B] 1.7) but not with DU-145 cells. In contrast, the automated SRB screen was successfully optimized with DU-145 cells (Z' 0.8, S/B 2.4) but not with Caco-2 cells (Z' -0.8, S/B 1.4). The automated bidirectional Caco-2 permeability experiments successfully separated low- and high-permeability compounds (Z' 0.8, S/B 84.2) and passive drug permeation from efflux-mediated transport (Z' 0.5, S/B 8.6). Of the assays, the homogeneous Caspase-Glo 3/7 assay benefits the most from automation, but the heterogeneous SRB assay and the Caco-2 permeability experiments also gain advantages from automation.

  18. Understanding Is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System.

    Science.gov (United States)

    Balfe, Nora; Sharples, Sarah; Wilson, John R

    2018-06-01

    This paper aims to explore the role of factors pertaining to trust in real-world automation systems through the application of observational methods in a case study from the railway sector. Trust in automation is widely acknowledged as an important mediator of automation use, but the majority of the research on automation trust is based on laboratory work. In contrast, this work explored trust in a real-world setting. Experienced rail operators in four signaling centers were observed for 90 min, and their activities were coded into five mutually exclusive categories. Their observed activities were analyzed in relation to their reported trust levels, collected via a questionnaire. The results showed clear differences in activity, even when circumstances on the workstations were very similar, and significant differences in some trust dimensions were found between groups exhibiting different levels of intervention and time not involved with signaling. Although the empirical, lab-based studies in the literature have consistently found that reliability and competence of the automation are the most important aspects of trust development, understanding of the automation emerged as the strongest dimension in this study. The implications are that development and maintenance of trust in real-world, safety-critical automation systems may be distinct from artificial laboratory automation. The findings have important implications for emerging automation concepts in diverse industries including highly automated vehicles and Internet of things.

  19. Proceedings of the distribution automation seminar. CD-ROM ed.

    International Nuclear Information System (INIS)

    2003-01-01

    Electric utilities are being driven to improve the utilization of their distribution system assets while reducing life-cycle costs. This seminar provided an opportunity for electric utilities to share their experience and knowledge of the constantly evolving technologies that apply to distribution automation. Customers and their representatives are placing increased pressure on regulatory commissions to achieve reliability, pushing distribution automation beyond its conventional use and into rural areas. Various options are under consideration by managers to incorporate a variety of distributed generation resources. Several papers highlighted technical aspects as they relate to applications meeting the changing needs of utilities, and the latest products and technologies in the field were on display. The seminar sessions included: business cases; utility experience and applications; utility experience and projects; and technology and equipment. Eight presentations were indexed separately for inclusion in this database.

  20. GETPrime: a gene- or transcript-specific primer database for quantitative real-time PCR.

    Science.gov (United States)

    Gubelmann, Carine; Gattiker, Alexandre; Massouras, Andreas; Hens, Korneel; David, Fabrice; Decouttere, Frederik; Rougemont, Jacques; Deplancke, Bart

    2011-01-01

    The vast majority of genes in humans and other organisms undergo alternative splicing, yet the biological function of splice variants is still very poorly understood in large part because of the lack of simple tools that can map the expression profiles and patterns of these variants with high sensitivity. High-throughput quantitative real-time polymerase chain reaction (qPCR) is an ideal technique to accurately quantify nucleic acid sequences including splice variants. However, currently available primer design programs do not distinguish between splice variants and also differ substantially in overall quality, functionality or throughput mode. Here, we present GETPrime, a primer database supported by a novel platform that uniquely combines and automates several features critical for optimal qPCR primer design. These include the consideration of all gene splice variants to enable either gene-specific (covering the majority of splice variants) or transcript-specific (covering one splice variant) expression profiling, primer specificity validation, automated best primer pair selection according to strict criteria and graphical visualization of the latter primer pairs within their genomic context. GETPrime primers have been extensively validated experimentally, demonstrating high transcript specificity in complex samples. Thus, the free-access, user-friendly GETPrime database allows fast primer retrieval and visualization for genes or groups of genes of most common model organisms, and is available at http://updepla1srv1.epfl.ch/getprime/. Database URL: http://deplanckelab.epfl.ch.

  1. Automating Ontological Annotation with WordNet

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob L.; Hohimer, Ryan E.; White, Amanda M.

    2006-01-22

    Semantic Web applications require robust and accurate annotation tools capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with the word sense disambiguation algorithms needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem, as it offers a highly structured lexical conceptual representation that has been used extensively to develop word sense disambiguation algorithms. However, WordNet was not designed as an ontology, and while it can easily be turned into one, the result would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of each concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.
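
    With NLTK's WordNet interface, the class-recognition idea, mapping a word to a small set of coarse concept classes by walking up the hypernym hierarchy, can be sketched as follows; the choice of target classes is an illustrative assumption, not the paper's ontology.

        from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

        # A small, manageable set of coarse concept classes (illustrative choice).
        TARGETS = {"person.n.01", "artifact.n.01", "location.n.01", "act.n.02"}

        def concept_class(word):
            """Return the most specific target class found on any hypernym path."""
            for synset in wn.synsets(word, pos=wn.NOUN):
                for path in synset.hypernym_paths():    # each path runs root -> synset
                    for ancestor in reversed(path):     # most specific ancestor first
                        if ancestor.name() in TARGETS:
                            return ancestor.name()
            return None

        print(concept_class("violin"))   # -> artifact.n.01 (with the corpus installed)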

  2. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book covers methods for building an automation plan and designing automation facilities. It treats the automation of chip-producing machining processes, including the basics of cutting, NC processing machines and chip handling; automation units such as drilling, tapping, boring, milling and slide units; hydraulics, covering its characteristics and basic hydraulic circuits; pneumatics; and the kinds of automation and their application to processing, assembly, transportation, automatic machines and factory automation.

  3. Australia's continental-scale acoustic tracking database and its automated quality control process

    Science.gov (United States)

    Hoenner, Xavier; Huveneers, Charlie; Steckenreuter, Andre; Simpfendorfer, Colin; Tattersall, Katherine; Jaine, Fabrice; Atkins, Natalia; Babcock, Russ; Brodie, Stephanie; Burgess, Jonathan; Campbell, Hamish; Heupel, Michelle; Pasquer, Benedicte; Proctor, Roger; Taylor, Matthew D.; Udyawer, Vinay; Harcourt, Robert

    2018-01-01

    Our ability to predict species responses to environmental changes relies on accurate records of animal movement patterns. Continental-scale acoustic telemetry networks are increasingly being established worldwide, producing large volumes of information-rich geospatial data. During the last decade, the Integrated Marine Observing System's Animal Tracking Facility (IMOS ATF) established a permanent array of acoustic receivers around Australia. Simultaneously, IMOS developed a centralised national database to foster collaborative research across the user community and quantify individual behaviour across a broad range of taxa. Here we present the database and quality control procedures developed to collate 49.6 million valid detections from 1891 receiving stations. This dataset consists of detections for 3,777 tags deployed on 117 marine species, with distances travelled ranging from a few to thousands of kilometres. Connectivity between regions was only made possible by the joint contribution of IMOS infrastructure and researcher-funded receivers. This dataset constitutes a valuable resource facilitating meta-analysis of animal movement, distributions, and habitat use, and is important for relating species distribution shifts with environmental covariates.
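
    A typical quality-control test in acoustic telemetry is a swim-speed filter that flags consecutive detections of a tag implying an implausible travel speed. The sketch below illustrates that idea only and is not the IMOS procedure itself; the column names, file name and the 10 m/s threshold are assumptions.

        import numpy as np
        import pandas as pd

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in kilometres."""
            lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
            a = (np.sin((lat2 - lat1) / 2) ** 2
                 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371.0 * np.arcsin(np.sqrt(a))

        det = pd.read_csv("detections.csv", parse_dates=["datetime"])  # hypothetical file
        det = det.sort_values(["tag_id", "datetime"])
        g = det.groupby("tag_id")

        # Speed between consecutive detections of the same tag
        dist_km = haversine_km(det["lat"], det["lon"], g["lat"].shift(), g["lon"].shift())
        hours = (det["datetime"] - g["datetime"].shift()).dt.total_seconds() / 3600
        det["speed_kmh"] = dist_km / hours
        det["suspect"] = det["speed_kmh"] > 10 * 3.6   # flag > 10 m/s; threshold is an assumption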

  4. Automated voxel-based analysis of brain perfusion SPECT for vasospasm after subarachnoid haemorrhage

    International Nuclear Information System (INIS)

    Iwabuchi, S.; Yokouchi, T.; Hayashi, M.; Kimura, H.; Tomiyama, A.; Hirata, Y.; Saito, N.; Harashina, J.; Nakayama, H.; Sato, K.; Aoki, K.; Samejima, H.; Ueda, M.; Terada, H.; Hamazaki, K.

    2008-01-01

    We evaluated regional cerebral blood flow (rCBF) during vasospasm after subarachnoid haemorrhage (SAH) using automated voxel-based analysis of brain perfusion single-photon emission computed tomography (SPECT). Brain perfusion SPECT was performed 7 to 10 days after onset of SAH. Automated voxel-based analysis of SPECT used a Z-score map that was calculated by comparing the patient's data with a control database. In cases where computed tomography (CT) scans detected an ischemic region due to vasospasm, automated voxel-based analysis of brain perfusion SPECT revealed dramatically reduced rCBF (Z-score ≤ -4). No patients with mildly or moderately diminished rCBF (Z-score > -3) progressed to cerebral infarction. Some patients with a Z-score < -4 did not progress to cerebral infarction after active treatment with angioplasty. Three-dimensional images provided detailed anatomical information and helped us to distinguish surgical sequelae from vasospasm. In conclusion, automated voxel-based analysis of brain perfusion SPECT using a Z-score map is helpful in evaluating decreased rCBF due to vasospasm. (author)
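
    The Z-score map amounts to a voxelwise standardization of the patient scan against the control database, Z = (patient - control mean) / control SD. A minimal numpy sketch on synthetic volumes (real volumes would come from co-registered SPECT data):

        import numpy as np

        rng = np.random.default_rng(0)
        shape = (64, 64, 64)                        # toy volume dimensions
        ctrl_mean = rng.normal(50.0, 5.0, shape)    # voxelwise mean of the normal database
        ctrl_sd = np.full(shape, 5.0)               # voxelwise SD of the normal database
        patient = ctrl_mean + rng.normal(0.0, 5.0, shape)

        z = (patient - ctrl_mean) / np.maximum(ctrl_sd, 1e-6)   # Z-score map
        print("voxels with Z <= -4:", int((z <= -4).sum()))     # 'dramatically reduced' rCBF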

  5. The Molecular Signatures Database (MSigDB) hallmark gene set collection.

    Science.gov (United States)

    Liberzon, Arthur; Birger, Chet; Thorvaldsdóttir, Helga; Ghandi, Mahmoud; Mesirov, Jill P; Tamayo, Pablo

    2015-12-23

    The Molecular Signatures Database (MSigDB) is one of the most widely used and comprehensive databases of gene sets for performing gene set enrichment analysis. Since its creation, MSigDB has grown beyond its roots in metabolic disease and cancer to include >10,000 gene sets. These better represent a wider range of biological processes and diseases, but the utility of the database is reduced by increased redundancy across, and heterogeneity within, gene sets. To address this challenge, here we use a combination of automated approaches and expert curation to develop a collection of "hallmark" gene sets as part of MSigDB. Each hallmark in this collection consists of a "refined" gene set, derived from multiple "founder" sets, that conveys a specific biological state or process and displays coherent expression. The hallmarks effectively summarize most of the relevant information of the original founder sets and, by reducing both variation and redundancy, provide more refined and concise inputs for gene set enrichment analysis.

  6. An automated procedure for identification of a person using gait analysis

    Directory of Open Access Journals (Sweden)

    Alena Galajdová

    2016-10-01

    Full Text Available Different biometric methods are available for identifying a person. The most commonly used are fingerprints, but there are also other biometric methods such as voice, the morphology of the ears, the structure of the iris and so on. In some cases, it is required to identify a person according to his/her biomechanical parameters or even his/her gait pattern. Gait is an outstanding behavioural biometric characteristic that is not yet widely used for identification purposes because efficient and proven automated processes are not yet available. Several systems and gait pattern databases have been developed for rapid evaluation and processing of gait. This article describes an original automated procedure for evaluating gait patterns and identifying unique gait parameters for automatic identification purposes.

  7. [Automated anesthesia record systems].

    Science.gov (United States)

    Heinrichs, W; Mönk, S; Eberle, B

    1997-07-01

    The introduction of electronic anaesthesia documentation systems was attempted as early as 1979, although their efficient application has become reality only in the past few years. The advantages of the electronic protocol are apparent: continuous high-quality documentation, comparability of data due to the availability of a data bank, reduction in the workload of the anaesthetist and availability of additional data. Disadvantages of the electronic protocol have also been discussed in the literature. By going through the process of entering data on the course of the anaesthetic procedure on the protocol sheet, the anaesthetist mentally absorbs and evaluates the information. This information may, however, be lost when the data are recorded fully automatically, without active involvement on the part of the anaesthetist. Recent publications state that by using intelligent alarms and/or integrated displays manual record keeping is no longer necessary for anaesthesia vigilance. The technical design of automated anaesthesia records depends on an integration of network technology into the hospital. It will be appropriate to connect the systems to the internet, but safety requirements have to be followed strictly. Concerning the database, client-server architecture as well as language standards like SQL should be used. Object-oriented databases will be available in the near future. Another future goal of automated anaesthesia record systems will be using knowledge-based technologies within these systems. Drug interactions, disease-related anaesthetic techniques and other information sources can be integrated. At this time, almost none of the commercially available systems has matured to a point where their purchase can be recommended without reservation. There is still a lack of standards for the subsequent exchange of data and a solution to a number of ergonomic problems still remains to be found. Nevertheless, electronic anaesthesia protocols will be required in

  8. Web-mediated database for internet-based dental radiology teaching files constructed by 5th-year undergraduate students

    International Nuclear Information System (INIS)

    Kito, Shinji; Wakasugi-Sato, Nao; Matsumoto-Takeda, Shinobu; Oda, Masafumi; Tanaka, Tatsurou; Fukai, Yasuhiro; Tokitsu, Takatoshi; Morimoto, Yasuhiro

    2009-01-01

    To provide oral healthcare for patients of all ages, dental welfare environments and technical aspects of dentistry have evolved and developed, and dental education must also diversify. Student-centered voluntary education and establishment of a life-long self-learning environment are becoming increasingly important in the changing world of dental education. In this article, we introduce a new process for the construction of a web-mediated database containing internet-based teaching files on the normal radiological anatomy of panoramic radiographs and CT images of the oral and maxillofacial regions, as well as a system for the delivery of visual learning materials through an intra-faculty local network. This process was developed by our 5th-year undergraduate students. Animated CT scan images were produced using Macintosh iPhoto and iMovie software. Normal anatomical images of panoramic radiographs and CT scans were produced using Adobe Illustrator CS and Adobe Photoshop CS. The web database was constructed using Macromedia Dreamweaver MX and Microsoft Internet Explorer. This project was the basis of our participation in the Student Clinician Research Program (SCRP). At Kyushu Dental College, we developed a new series of teaching files on the web. Uploading these teaching files to the internet allowed many individuals to access the information. Viewers can easily select the area of study that they wish to examine. These processes suggest that our laboratory practice is a useful tool for promoting students' motivation and improving life-long self-learning in dental radiology. We expect that many medical and dental students, practitioners and patients will be able to use our teaching files to learn about the normal radiological anatomy of the oral and maxillofacial regions. (author)

  9. PSSRdb: a relational database of polymorphic simple sequence repeats extracted from prokaryotic genomes.

    Science.gov (United States)

    Kumar, Pankaj; Chaitanya, Pasumarthy S; Nagarajaram, Hampapathalu A

    2011-01-01

    PSSRdb (Polymorphic Simple Sequence Repeats database) (http://www.cdfd.org.in/PSSRdb/) is a relational database of polymorphic simple sequence repeats (PSSRs) extracted from 85 different species of prokaryotes. Simple sequence repeats (SSRs) are tandem repeats of nucleotide motifs of sizes 1-6 bp and are highly polymorphic. SSR mutations in and around coding regions affect transcription and translation of genes. Such changes underpin phase variations and antigenic variations seen in some bacteria. Although SSR-mediated phase variation and antigenic variations have been well studied in some bacteria, many other species of prokaryotes are yet to be investigated for SSR-mediated adaptive and other evolutionary advantages. As a part of our ongoing studies on SSR polymorphism in prokaryotes, we compared the genome sequences of various strains and isolates available for 85 different species of prokaryotes, extracted a number of SSRs showing length variations and created a relational database called PSSRdb. This database gives useful information such as the location of PSSRs in genomes, length variation across genomes, the regions harboring PSSRs, etc. The information provided in this database is very useful for further research and analysis of SSRs in prokaryotes.
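
    As a toy illustration of in-silico SSR mining (the database itself was built by comparing strains genome-wide), the following sketch finds tandem repeats of 1-6 bp motifs in a sequence string; the minimum copy number is an arbitrary assumption, not PSSRdb's criterion.

        import re

        def find_ssrs(seq, min_copies=4):
            """Yield (start, unit, copies) for tandem repeats of 1-6 bp motifs."""
            seq = seq.upper()
            for unit_len in range(1, 7):
                pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_copies - 1))
                for m in pattern.finditer(seq):
                    unit = m.group(1)
                    if unit_len > 1 and len(set(unit)) == 1:
                        continue  # skip mononucleotide runs re-reported as longer units
                    yield m.start(), unit, len(m.group(0)) // unit_len

        # e.g. a (CA)5 repeat and a (T)7 run
        print(list(find_ssrs("GGATCACACACACATTTTTTTGC")))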

  10. Contextualization of drug-mediator relations using evidence networks.

    Science.gov (United States)

    Tran, Hai Joey; Speyer, Gil; Kiefer, Jeff; Kim, Seungchan

    2017-05-31

    Genomic analysis of drug response can provide unique insights into therapies that can be used to match the "right drug to the right patient." However, the process of discovering such therapeutic insights using genomic data is not straightforward and represents an area of active investigation. EDDY (Evaluation of Differential DependencY), a statistical test to detect differential statistical dependencies, is one method that leverages genomic data to identify differential genetic dependencies. EDDY has been used in conjunction with the Cancer Therapeutics Response Portal (CTRP), a dataset with drug-response measurements for more than 400 small molecules, and RNAseq data of cell lines in the Cancer Cell Line Encyclopedia (CCLE) to find potential drug-mediator pairs. Mediators were identified as genes that showed significant change in genetic statistical dependencies within annotated pathways between drug-sensitive and drug-non-sensitive cell lines, and the results are presented as a public web portal (EDDY-CTRP). However, the interpretability of drug-mediator pairs currently hinders further exploration of these potentially valuable results. In this study, we address this challenge by constructing evidence networks built with protein and drug interactions from the STITCH and STRING interaction databases. STITCH and STRING are sister databases that catalog known and predicted drug-protein interactions and protein-protein interactions, respectively. Using these two databases, we have developed a method to construct evidence networks to "explain" the relation between a drug and a mediator. Results: We applied this approach to drug-mediator relations discovered in EDDY-CTRP analysis and identified evidence networks for ~70% of drug-mediator pairs where most mediators were not known direct targets for the drug. Constructed evidence networks enable researchers to contextualize the drug-mediator pair with current research and knowledge. Using evidence networks, we were
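
    Conceptually, an evidence network connects a drug node to a mediator gene through interaction edges pooled from STITCH and STRING. A minimal networkx sketch with invented edges (not the paper's construction algorithm):

        import networkx as nx

        # Toy stand-in for the merged STITCH (drug-protein) + STRING (protein-protein) graph
        G = nx.Graph()
        G.add_edges_from([
            ("drugX", "TARGET1"),          # STITCH-style drug-protein edge
            ("TARGET1", "KINASE2"),        # STRING-style protein-protein edges
            ("KINASE2", "MEDIATOR_GENE"),
        ])

        # An "evidence network" for the pair: all nodes on short connecting paths
        paths = nx.all_shortest_paths(G, "drugX", "MEDIATOR_GENE")
        evidence_nodes = {n for p in paths for n in p}
        print(evidence_nodes)  # {'drugX', 'TARGET1', 'KINASE2', 'MEDIATOR_GENE'}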

  11. Automated Single Cell Data Decontamination Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Tennessen, Kristin [Lawrence Berkeley National Lab. (LBNL), Walnut Creek, CA (United States). Dept. of Energy Joint Genome Inst.; Pati, Amrita [Lawrence Berkeley National Lab. (LBNL), Walnut Creek, CA (United States). Dept. of Energy Joint Genome Inst.

    2014-03-21

    Recent technological advancements in single-cell genomics have encouraged the classification and functional assessment of microorganisms from a wide span of the biosphere's phylogeny [1,2]. Environmental processes of interest to the DOE, such as bioremediation and carbon cycling, can be elucidated through the genomic lens of these unculturable microbes. However, contamination can occur at various stages of the single-cell sequencing process. Contaminated data can lead to wasted time and effort on meaningless analyses, inaccurate or erroneous conclusions, and pollution of public databases. A fully automated decontamination tool is necessary to prevent these instances and increase the throughput of the single-cell sequencing process.

  12. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that, in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoid the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  13. Automated Energy Distribution and Reliability System: Validation Integration - Results of Future Architecture Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Buche, D. L.

    2008-06-01

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is the second in a series of reports detailing this effort.

  14. Semi-Automated Annotation of Biobank Data Using Standard Medical Terminologies in a Graph Database.

    Science.gov (United States)

    Hofer, Philipp; Neururer, Sabrina; Goebel, Georg

    2016-01-01

    Data describing biobank resources frequently contains unstructured free-text information or insufficient coding standards. (Bio-) medical ontologies like Orphanet Rare Diseases Ontology (ORDO) or the Human Disease Ontology (DOID) provide a high number of concepts, synonyms and entity relationship properties. Such standard terminologies increase quality and granularity of input data by adding comprehensive semantic background knowledge from validated entity relationships. Moreover, cross-references between terminology concepts facilitate data integration across databases using different coding standards. In order to encourage the use of standard terminologies, our aim is to identify and link relevant concepts with free-text diagnosis inputs within a biobank registry. Relevant concepts are selected automatically by lexical matching and SPARQL queries against an RDF triplestore. To ensure correctness of annotations, proposed concepts have to be confirmed by medical data administration experts before they are entered into the registry database. Relevant (bio-) medical terminologies describing diseases and phenotypes were identified and stored in a graph database which was tied to a local biobank registry. Concept recommendations during data input trigger a structured description of medical data and facilitate data linkage between heterogeneous systems.
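
    A sketch of the lexical-matching step against an RDF triplestore, using rdflib; the ontology file, query shape and diagnosis string are assumptions for illustration, not the paper's actual implementation.

        import rdflib

        g = rdflib.Graph()
        g.parse("doid.owl")   # hypothetical local copy of the Human Disease Ontology

        QUERY = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?concept ?label WHERE {
            ?concept rdfs:label ?label .
            FILTER(CONTAINS(LCASE(STR(?label)), LCASE(?needle)))
        }
        """

        def suggest_concepts(free_text_diagnosis):
            """Propose ontology concepts for a free-text diagnosis; an expert confirms them."""
            needle = rdflib.Literal(free_text_diagnosis)
            return [(str(row.concept), str(row.label))
                    for row in g.query(QUERY, initBindings={"needle": needle})]

        print(suggest_concepts("diabetes"))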

  15. WEB-BASED DATABASE ON RENEWAL TECHNOLOGIES ...

    Science.gov (United States)

    As U.S. utilities continue to shore up their aging infrastructure, renewal needs now represent over 43% of annual expenditures compared to new construction for drinking water distribution and wastewater collection systems (Underground Construction [UC], 2016). An increased understanding of renewal options will ultimately assist drinking water utilities in reducing water loss and help wastewater utilities to address infiltration and inflow issues in a cost-effective manner. It will also help to extend the service lives of both drinking water and wastewater mains. This research effort involved collecting case studies on the use of various trenchless pipeline renewal methods and providing the information in an online searchable database. The overall objective was to further support technology transfer and information sharing regarding emerging and innovative renewal technologies for water and wastewater mains. The result of this research is a Web-based, searchable database that utility personnel can use to obtain technology performance and cost data, as well as case study references. The renewal case studies include: technologies used; the conditions under which the technology was implemented; costs; lessons learned; and utility contact information. The online database also features a data mining tool for automated review of the technologies selected and cost data. Based on a review of the case study results and industry data, several findings are presented on tren

  16. Understanding Is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System

    Science.gov (United States)

    Balfe, Nora; Sharples, Sarah; Wilson, John R.

    2018-01-01

    Objective: This paper aims to explore the role of factors pertaining to trust in real-world automation systems through the application of observational methods in a case study from the railway sector. Background: Trust in automation is widely acknowledged as an important mediator of automation use, but the majority of the research on automation trust is based on laboratory work. In contrast, this work explored trust in a real-world setting. Method: Experienced rail operators in four signaling centers were observed for 90 min, and their activities were coded into five mutually exclusive categories. Their observed activities were analyzed in relation to their reported trust levels, collected via a questionnaire. Results: The results showed clear differences in activity, even when circumstances on the workstations were very similar, and significant differences in some trust dimensions were found between groups exhibiting different levels of intervention and time not involved with signaling. Conclusion: Although the empirical, lab-based studies in the literature have consistently found that reliability and competence of the automation are the most important aspects of trust development, understanding of the automation emerged as the strongest dimension in this study. The implications are that development and maintenance of trust in real-world, safety-critical automation systems may be distinct from artificial laboratory automation. Application: The findings have important implications for emerging automation concepts in diverse industries including highly automated vehicles and Internet of things. PMID:29613815

  17. DMPD: IRAK1: a critical signaling mediator of innate immunity. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available IRAK1: a critical signaling mediator of innate immunity. Gottipati S, Rao ... PubmedID 17890055

  18. Automated detection of macular drusen using geometric background leveling and threshold selection.

    Science.gov (United States)

    Smith, R Theodore; Chan, Jackie K; Nagasaki, Takayuki; Ahmad, Umer F; Barbazetto, Irene; Sparrow, Janet; Figueroa, Marta; Merriam, Joanna

    2005-02-01

    Age-related macular degeneration (ARMD) is the most prevalent cause of visual loss in patients older than 60 years in the United States. Observation of drusen is the hallmark finding in the clinical evaluation of ARMD. Objective: To segment and quantify drusen found in patients with ARMD using image analysis, and to compare the efficacy of image-analysis segmentation with that of stereoscopic manual grading of drusen. Design: Retrospective study. Setting: University referral center. Patients: Photographs were randomly selected from an available database of patients with known ARMD in the ongoing Columbia University Macular Genetics Study. All patients were white and older than 60 years. Twenty images from 17 patients were selected as representative of common manifestations of drusen. Image preprocessing included automated color balancing and, where necessary, manual segmentation of confounding lesions such as geographic atrophy (3 images). The operator then chose among 3 automated processing options suggested by the predominant drusen type. Automated processing consisted of elimination of background variability by a mathematical model and subsequent histogram-based threshold selection. A retinal specialist using a graphic tablet while viewing stereo pairs constructed digital drusen drawings for each image. The sensitivity and specificity of drusen segmentation using the automated method with respect to the manual stereoscopic drusen drawings were calculated on a rigorous pixel-by-pixel basis. The median sensitivity and specificity of automated segmentation were 70% and 81%, respectively. After preprocessing and option choice, reproducibility of automated drusen segmentation was necessarily 100%. Automated drusen segmentation can be reliably performed on digital fundus photographs and result in successful quantification of drusen in a more precise manner than is traditionally possible with manual stereoscopic grading of drusen. With only minor preprocessing requirements, this automated detection
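
    One plausible reading of "geometric background leveling and threshold selection" (the paper's exact mathematical model is not reproduced here) is to fit a smooth quadratic surface to the fundus background, subtract it, and threshold the bright residual:

        import numpy as np
        from skimage.filters import threshold_otsu

        def segment_drusen(img):
            """img: 2-D float array (e.g. green channel of a fundus photograph)."""
            yy, xx = np.mgrid[: img.shape[0], : img.shape[1]]
            x, y = xx.ravel(), yy.ravel()
            # Quadratic background model: a smooth stand-in for the paper's leveling step
            A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
            coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
            leveled = img - (A @ coef).reshape(img.shape)
            return leveled > threshold_otsu(leveled)   # bright residuals = candidate drusen

        mask = segment_drusen(np.random.default_rng(0).random((64, 64)))  # toy input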

  19. Automation of the software production process for multiple cryogenic control applications

    OpenAIRE

    Fluder, Czeslaw; Lefebvre, Victor; Pezzetti, Marco; Plutecki, Przemyslaw; Tovar-González, Antonio; Wolak, Tomasz

    2018-01-01

    The development of process control systems for the cryogenic infrastructure at CERN is based on an automatic software generation approach. The overall complexity of the systems, their frequent evolution as well as the extensive use of databases, repositories, commercial engineering software and CERN frameworks have led to further efforts towards improving the existing automation based software production methodology. A large number of control system upgrades were successfully performed for th...

  20. Crew aiding and automation: A system concept for terminal area operations, and guidelines for automation design

    Science.gov (United States)

    Dwyer, John P.

    1994-01-01

    This research and development program comprised two efforts: the development of guidelines for the design of automated systems, with particular emphasis on automation design that takes advantage of contextual information, and the concept-level design of a crew aiding system, the Terminal Area Navigation Decision Aiding Mediator (TANDAM). This concept outlines a system capable of organizing navigation and communication information and assisting the crew in executing the operations required in descent and approach. In service of this endeavor, problem definition activities were conducted that addressed the terminal area navigation problem, including operational familiarization exercises. Both airborne and ground-based (ATC) elements of aircraft control were extensively researched. The TANDAM system concept was then specified, and the crew interface and associated systems described. Additionally, three descent and approach scenarios were devised in order to illustrate the principal functions of the TANDAM system concept in relation to the crew, the aircraft, and ATC. A plan for the evaluation of the TANDAM system was established. The guidelines were developed based on reviews of relevant literature, and on experience gained in the design effort.

  1. Automated ocean color product validation for the Southern California Bight

    Science.gov (United States)

    Davis, Curtiss O.; Tufillaro, Nicholas; Jones, Burt; Arnone, Robert

    2012-06-01

    Automated match-ups allow us to maintain and improve the products of current satellite ocean color sensors (MODIS, MERIS) and new sensors (VIIRS). As part of the VIIRS mission preparation, we have created a web-based automated match-up tool that provides access to searchable fields for date, site, and products, and creates match-ups between satellite (MODIS, MERIS, VIIRS) and in-situ measurements (HyperPRO and SeaPRISM). The back end of the system is a MySQL database, and the front end is a PHP web portal with pull-down menus for the searchable fields. Based on selections, graphics are generated showing match-ups and statistics, and ASCII files are created for download of the match-up data. Examples are shown for matching the satellite data with data from the Platform Eureka SeaPRISM off L.A. Harbor in the Southern California Bight.
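
    The core match-up operation, pairing each satellite observation with the nearest-in-time in-situ sample at the same site within a tolerance window, can be sketched as a pandas time-tolerance join; column names, file names and the 3-hour window are assumptions.

        import pandas as pd

        sat = pd.read_csv("satellite_chl.csv", parse_dates=["time"]).sort_values("time")
        situ = pd.read_csv("insitu_chl.csv", parse_dates=["time"]).sort_values("time")

        # Nearest in-situ sample per satellite overpass, same site, within +/- 3 h
        matchups = pd.merge_asof(
            sat, situ, on="time", by="site",
            tolerance=pd.Timedelta("3h"), direction="nearest",
            suffixes=("_sat", "_insitu"),
        )
        print(matchups[["site", "time", "chl_sat", "chl_insitu"]].dropna())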

  2. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time, software- and hardware-oriented house automation research project that was designed and implemented, capable of automating a house's electricity and providing a security system to detect the presence of unexpected behavior.

  3. TaxMan: a taxonomic database manager

    Directory of Open Access Journals (Sweden)

    Blaxter Mark

    2006-12-01

    Full Text Available Abstract Background Phylogenetic analysis of large, multiple-gene datasets, assembled from public sequence databases, is rapidly becoming a popular way to approach difficult phylogenetic problems. Supermatrices (concatenated multiple sequence alignments of multiple genes) can yield more phylogenetic signal than individual genes. However, manually assembling such datasets for a large taxonomic group is time-consuming and error-prone. Additionally, sequence curation, alignment and assessment of the results of phylogenetic analysis are made particularly difficult by the potential for a given gene in a given species to be unrepresented, or to be represented by multiple or partial sequences. We have developed a software package, TaxMan, that largely automates the processes of sequence acquisition, consensus building, alignment and taxon selection to facilitate this type of phylogenetic study. Results TaxMan uses freely available tools to allow rapid assembly, storage and analysis of large, aligned DNA and protein sequence datasets for user-defined sets of species and genes. The user provides GenBank format files and a list of gene names and synonyms for the loci to analyse. Sequences are extracted from the GenBank files on the basis of annotation and sequence similarity. Consensus sequences are built automatically. Alignment is carried out (where possible, at the protein level) and aligned sequences are stored in a database. TaxMan can automatically determine the best subset of taxa to examine phylogeny at a given taxonomic level. By using the stored aligned sequences, large concatenated multiple sequence alignments can be generated rapidly for a subset and output in analysis-ready file formats. Trees resulting from phylogenetic analysis can be stored and compared with a reference taxonomy. Conclusion TaxMan allows rapid automated assembly of multigene datasets of aligned sequences for large taxonomic groups. By extracting sequences on the basis of
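
    A Biopython sketch of the extraction-by-annotation step; the locus synonyms and file name are placeholders, and TaxMan's similarity-based matching and consensus building are omitted.

        from Bio import SeqIO

        SYNONYMS = {"coi", "cox1", "coxi"}   # hypothetical user-supplied names for one locus

        def extract_locus(genbank_path):
            """Collect (organism, sequence) pairs whose CDS matches the synonym list."""
            hits = []
            for record in SeqIO.parse(genbank_path, "genbank"):
                for feature in record.features:
                    if feature.type != "CDS":
                        continue
                    names = {v.lower() for q in ("gene", "product")
                             for v in feature.qualifiers.get(q, [])}
                    if names & SYNONYMS:
                        hits.append((record.annotations.get("organism"),
                                     feature.extract(record.seq)))
            return hits

        hits = extract_locus("sequences.gb")   # hypothetical GenBank-format file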

  4. National Automated Surveillance of Hospital-Acquired Bacteremia in Denmark Using a Computer Algorithm

    DEFF Research Database (Denmark)

    Gubbels, Sophie; Nielsen, Jens; Voldstedlund, Marianne

    2017-01-01

    BACKGROUND In 2015, Denmark launched an automated surveillance system for hospital-acquired infections, the Hospital-Acquired Infections Database (HAIBA). OBJECTIVE To describe the algorithm used in HAIBA, to determine its concordance with point prevalence surveys (PPSs), and to present trends for hospital-acquired bacteremia. SETTING Private and public hospitals in Denmark. METHODS A hospital-acquired bacteremia case was defined as at least 1 positive blood culture with at least 1 pathogen (bacterium or fungus) taken between 48 hours after admission and 48 hours after discharge, using the Danish ... advantages of automated surveillance, HAIBA allows monitoring of HA bacteremia across the healthcare system, supports prioritizing preventive measures, and holds promise for evaluating interventions. Infect Control Hosp Epidemiol 2017;1-8.
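
    The case definition is mechanical enough to state directly in code; a sketch of the 48-hour window rule (function and argument names are mine, not HAIBA's):

        from datetime import datetime, timedelta

        WINDOW = timedelta(hours=48)

        def is_hospital_acquired(culture_taken, admitted, discharged, has_pathogen):
            """HAIBA-style rule: a positive culture with at least one pathogen,
            taken between 48 h after admission and 48 h after discharge."""
            return (has_pathogen
                    and admitted + WINDOW <= culture_taken <= discharged + WINDOW)

        print(is_hospital_acquired(
            culture_taken=datetime(2017, 3, 5, 9, 0),
            admitted=datetime(2017, 3, 1, 12, 0),
            discharged=datetime(2017, 3, 6, 10, 0),
            has_pathogen=True,
        ))   # True: taken > 48 h after admission and before discharge + 48 h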

  5. Towards automated visual flexible endoscope navigation.

    Science.gov (United States)

    van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J

    2013-10-01

    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomical steering mechanism now forms a barrier in the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.

  6. Literature classification for semi-automated updating of biological knowledgebases

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Winther, Ole

    2013-01-01

    Background: While types of biological data, such as sequence data, are extensively stored in biological databases, functional annotations, such as immunological epitopes, are found primarily in semi-structured formats or free text embedded in primary scientific literature. Results: We defined and applied a machine ... abstracts yielded classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature. Conclusion: We here propose a conceptual framework for semi-automated extraction of epitope data embedded in scientific literature using principles from text mining ...
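
    As a generic stand-in for such an abstract classifier (the record does not specify the features or model used), here is a TF-IDF n-gram pipeline with a linear model, evaluated by cross-validation on a toy corpus:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Toy corpus; real training data would be curated abstracts with 0/1 relevance labels
        abstracts = ["T-cell epitope mapping of influenza hemagglutinin",
                     "A survey of database indexing techniques",
                     "Linear B-cell epitopes in dengue virus proteins",
                     "Compiler optimizations for embedded systems"] * 10
        labels = [1, 0, 1, 0] * 10

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                            LogisticRegression(max_iter=1000))
        print(cross_val_score(clf, abstracts, labels, cv=5).mean())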

  7. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures. • We conduct the experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting

  8. Human grasping database for activities of daily living with depth, color and kinematic data streams.

    Science.gov (United States)

    Saudabayev, Artur; Rysbek, Zhanibek; Khassenova, Raykhan; Varol, Huseyin Atakan

    2018-05-29

    This paper presents a grasping database collected from multiple human subjects for activities of daily living in unstructured environments. The main strength of this database is the use of three different sensing modalities: color images from a head-mounted action camera, distance data from a depth sensor on the dominant arm and upper body kinematic data acquired from an inertial motion capture suit. A total of 3826 grasps were identified in the data collected during 9 hours of experiments. The grasps were grouped according to a hierarchical taxonomy into 35 different grasp types. The database contains information related to each grasp and associated sensor data acquired from the three sensor modalities. We also provide our data annotation software written in Matlab as an open-source tool. The size of the database is 172 GB. We believe this database can be used as a stepping stone to develop big data and machine learning techniques for grasping and manipulation with potential applications in rehabilitation robotics and intelligent automation.

  9. On the combination of molecular replacement and single-wavelength anomalous diffraction phasing for automated structure determination

    International Nuclear Information System (INIS)

    Panjikar, Santosh; Parthasarathy, Venkataraman; Lamzin, Victor S.; Weiss, Manfred S.; Tucker, Paul A.

    2009-01-01

    The combination of molecular replacement and single-wavelength anomalous diffraction improves the performance of automated structure determination with Auto-Rickshaw. A combination of molecular replacement and single-wavelength anomalous diffraction phasing has been incorporated into the automated structure-determination platform Auto-Rickshaw. The complete MRSAD procedure includes molecular replacement, model refinement, experimental phasing, phase improvement and automated model building. The improvement over the standard SAD or MR approaches is illustrated by ten test cases taken from the JCSG diffraction data-set database. Poor MR or SAD phases with phase errors larger than 70° can be improved using the described procedure and a large fraction of the model can be determined in a purely automatic manner from X-ray data extending to better than 2.6 Å resolution

  10. An automated system designed for large scale NMR data deposition and annotation: application to over 600 assigned chemical shift data entries to the BioMagResBank from the Riken Structural Genomics/Proteomics Initiative internal database

    International Nuclear Information System (INIS)

    Kobayashi, Naohiro; Harano, Yoko; Tochio, Naoya; Nakatani, Eiichi; Kigawa, Takanori; Yokoyama, Shigeyuki; Mading, Steve; Ulrich, Eldon L.; Markley, John L.; Akutsu, Hideo; Fujiwara, Toshimichi

    2012-01-01

    Biomolecular NMR chemical shift data are key information for the functional analysis of biomolecules and the development of new techniques for NMR studies utilizing chemical shift statistical information. Structural genomics projects are major contributors to the accumulation of protein chemical shift information. The management of the large quantities of NMR data generated by each project in a local database and the transfer of the data to the public databases are still formidable tasks because of the complicated nature of NMR data. Here we report an automated and efficient system developed for the deposition and annotation of a large number of data sets including 1H, 13C and 15N resonance assignments used for the structure determination of proteins. We have demonstrated the feasibility of our system by applying it to over 600 entries from the internal database generated by the RIKEN Structural Genomics/Proteomics Initiative (RSGI) to the public database, BioMagResBank (BMRB). We have assessed the quality of the deposited chemical shifts by comparing them with those predicted from the PDB coordinate entry for the corresponding protein. The same comparison for other matched BMRB/PDB entries deposited from 2001 to 2011 has been carried out and the results suggest that the RSGI entries greatly improved the quality of the BMRB database. Since the entries include chemical shifts acquired under strikingly similar experimental conditions, these NMR data can be expected to be a promising resource to improve current technologies as well as to develop new NMR methods for protein studies.

  11. CyMSatDB: The Globe Artichoke (Cynara cardunculus var. scolymus) Microsatellite Database

    DEFF Research Database (Denmark)

    Portis, Ezio; Portis, Flavio; Valente, Luisa

    2015-01-01

    ... and for the construction of the first microsatellite marker database, CyMSatDB (Cynara cardunculus MicroSatellite DataBase). Both perfect and compound SSRs were mined in silico using the SciRoKo SSR-search module (http://kofler.or.at/bioinformatics/SciRoKo). In all, about 295,000 SSR motifs were identified, which also ... Kbp), which represent 725 Mb of genomic sequence. Scaffolds were genetically anchored using low-coverage genotyping by sequencing (GBS) of a mapping population, and 17 pseudomolecules were reconstructed. Pseudomolecules as well as unmapped scaffolds were used for the bulk mining of SSR markers ... in a MySQL database, and an effective and responsive interface developed in PHP is provided. To cater to the customized needs of the wet lab, an automated primer design tool has been added. The feature of user-defined primer design has a great advantage in terms of precise selection from...

  12. Effect modification, interaction and mediation: an overview of theoretical insights for clinical investigators

    Directory of Open Access Journals (Sweden)

    Corraini P

    2017-06-01

    Full Text Available Priscila Corraini (1), Morten Olsen (1), Lars Pedersen (1), Olaf M Dekkers (1,2), Jan P Vandenbroucke (1–3). (1) Department of Clinical Epidemiology, Institute of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark; (2) Leiden University Medical Center, Leiden, the Netherlands; (3) Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK. Abstract: We revisited the three interrelated epidemiological concepts of effect modification, interaction and mediation for clinical investigators and examined their applicability when using research databases. The standard methods that are available to assess interaction, effect modification and mediation are explained and exemplified. For each concept, we first give a simple “best-case” example from a randomized controlled trial, followed by a structurally similar example from an observational study using research databases. Our explanation of the examples is based on recent theoretical developments and insights in the context of large health care databases. Terminology is sometimes ambiguous for what constitutes effect modification and interaction. The strong assumptions underlying the assessment of interaction, and particularly mediation, require clinicians and epidemiologists to take extra care when conducting observational studies in the context of health care databases. These strong assumptions may limit the applicability of interaction and mediation assessments, at least until the biases and limitations of these assessments when using large research databases are clarified. Keywords: methods, epidemiology, effect modifiers, stratified analyses, health care administrative claims

  13. Transfusion management using a remote-controlled, automated blood storage.

    Science.gov (United States)

    Pagliaro, Pasqualepaolo; Turdo, Rosalia

    2008-04-01

    Generally, the safety of transfusion therapies for patients depends in part on the distribution of the blood products. The prevention of adverse events can be aided by technological means, which, besides improving the traceability of the process, make errors less likely. In this context, the latest frontier in automation and computerisation is the remote-controlled, automated refrigerator for blood storage. Computer cross-matching is an efficient and safe method for assigning blood components, based on information technology applied to typing and screening. This method can be extended to the management of an automated blood refrigerator, the programme of which is interfaced with the Transfusion Service's information system. The connection we made in our Service between EmoNet and Hemosafe enables real-time, remote-controlled management of the following aspects of blood component distribution: a) release of autologous and allogeneic units already allocated to a patient, b) release of available units, which can be allocated by remote control to known patients, in the presence of a valid computer cross-match, c) release of O-negative units of blood for emergencies. Our system combines an information database, which enables computer cross-matching, with an automated refrigerator for blood storage with controlled access managed remotely by the Transfusion Service. The effectiveness and safety of the system were validated during the 4 months of its routine use in the Transfusion Service's outpatient department. The safety and efficiency of the distribution of blood products can and must be increased by the use of technological innovations. With the EmoNet/Hemosafe system, the responsibility for the remote-controlled distribution of red blood cell concentrates remains with the chief of the Transfusion Service, through the use of automated computer procedures and supported by continuous training of technicians and nursing staff.
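
    Electronic (computer) cross-matching reduces to deterministic rules applied to validated records. The sketch below uses the standard ABO/RhD red-cell compatibility table and a negative-antibody-screen precondition as an illustration; it is not the actual EmoNet/Hemosafe logic.

        # Acceptable donor ABO groups per recipient group (standard red-cell rules)
        ABO_OK = {"O": {"O"}, "A": {"A", "O"}, "B": {"B", "O"},
                  "AB": {"AB", "A", "B", "O"}}

        def computer_crossmatch(recipient_abo, recipient_rhd, unit_abo, unit_rhd,
                                antibody_screen_negative):
            """Release a unit only with a negative antibody screen and a compatible group."""
            if not antibody_screen_negative:
                return False            # fall back to serologic cross-match
            if unit_abo not in ABO_OK[recipient_abo]:
                return False
            if recipient_rhd == "neg" and unit_rhd == "pos":
                return False            # avoid D alloimmunization
            return True

        print(computer_crossmatch("A", "neg", "O", "neg", True))   # True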

  14. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  15. A users guide to the radioactive waste inventory program 'DATABASE'

    International Nuclear Information System (INIS)

    Prince, G.A.; Tymons, B.J.

    1984-09-01

    The program computes an inventory of radioactive waste accumulation in a form such that a variety of interrogation, collation and summary functions can be used to access the stored information. DATABASE is designed to be used online; that is, the user will communicate with it at a computer terminal, and the program will display the result of each enquiry. It has the potential to automate much of the searching and adding up which was involved in compiling and using waste inventories in the past. (author)

  16. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Trypanosomes Database. Maintainer: ...stitute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Taxonomy: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). External links: ...DB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: available. Query search: available. Web servic

  17. An integrative clinical database and diagnostics platform for biomarker identification and analysis in ion mobility spectra of human exhaled air

    DEFF Research Database (Denmark)

    Schneider, Till; Hauschild, Anne-Christin; Baumbach, Jörg Ingo

    2013-01-01

    ... to have a clear understanding of the detailed composition of human breath. Therefore, in addition to the clinical studies, there is a need for a flexible and comprehensive centralized data repository, which is capable of gathering all kinds of related information. Moreover, there is a demand for automated data integration and semi-automated data analysis, in particular with regard to the rapid data accumulation emerging from the high-throughput nature of the MCC/IMS technology. Here, we present a comprehensive database application and analysis platform, which combines metabolic maps with heterogeneous biomedical data in a well-structured manner. The design of the database is based on a hybrid of the entity-attribute-value (EAV) model and the EAV-CR, which incorporates the concepts of classes and relationships. Additionally, it offers an intuitive user interface that provides easy and quick access ...
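
    In an EAV layout, each fact is stored as one (entity, attribute, value) row rather than one column per attribute, which is what makes the schema flexible for heterogeneous clinical data. A minimal sqlite3 sketch with invented table and attribute names:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE eav_entity    (id INTEGER PRIMARY KEY, kind TEXT);
            CREATE TABLE eav_attribute (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE eav_value     (entity_id INTEGER, attribute_id INTEGER, val TEXT);
        """)
        con.execute("INSERT INTO eav_entity VALUES (1, 'breath_sample')")
        con.executemany("INSERT INTO eav_attribute VALUES (?, ?)",
                        [(1, "diagnosis"), (2, "peak_1_intensity")])
        con.executemany("INSERT INTO eav_value VALUES (1, ?, ?)",
                        [(1, "COPD"), (2, "0.73")])

        rows = con.execute("""
            SELECT a.name, v.val FROM eav_value v
            JOIN eav_attribute a ON a.id = v.attribute_id
            WHERE v.entity_id = 1
        """).fetchall()
        print(rows)   # [('diagnosis', 'COPD'), ('peak_1_intensity', '0.73')]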

  18. Knowledge discovery in databases of biomechanical variables: application to the sit to stand motor task

    Directory of Open Access Journals (Sweden)

    Benvenuti Francesco

    2004-10-01

    Full Text Available Abstract Background The interpretation of data obtained in a movement analysis laboratory is a crucial issue in clinical contexts. Collection of such data in large databases might encourage the use of modern techniques of data mining to discover additional knowledge with automated methods. In order to maximise the size of the database, simple and low-cost experimental set-ups are preferable. The aim of this study was to extract knowledge inherent in the sit-to-stand task as performed by healthy adults, by searching relationships among measured and estimated biomechanical quantities. An automated method was applied to a large amount of data stored in a database. The sit-to-stand motor task was already shown to be adequate for determining the level of individual motor ability. Methods The technique of search for association rules was chosen to discover patterns as part of a Knowledge Discovery in Databases (KDD) process applied to a sit-to-stand motor task observed with a simple experimental set-up and analysed by means of a minimum measured input model. Selected parameters and variables of a database containing data from 110 healthy adults, of both genders and a large range of ages, performing the task were considered in the analysis. Results A set of rules and definitions was found characterising the patterns shared by the investigated subjects. Time events of the task turned out to be highly interdependent, at least in their average values, showing a high level of repeatability of the timing of the performance of the task. Conclusions The distinctive patterns of the sit-to-stand task found in this study, together with those that could be found in similar studies focusing on subjects with pathologies, could be used as a reference for the functional evaluation of specific subjects performing the sit-to-stand motor task.
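
    Association-rule search reduces to computing support and confidence over itemsets of discretized variables. A pure-Python toy with invented biomechanical feature names (not the study's actual variables or thresholds):

        from itertools import combinations

        # Each "transaction" = one subject's discretized sit-to-stand features (invented)
        transactions = [
            {"fast_rise", "high_peak_velocity", "young"},
            {"fast_rise", "high_peak_velocity"},
            {"slow_rise", "low_peak_velocity", "older"},
            {"fast_rise", "high_peak_velocity", "young"},
        ]

        def support(itemset):
            return sum(itemset <= t for t in transactions) / len(transactions)

        # Report one-antecedent rules above arbitrary support/confidence thresholds
        items = sorted(set().union(*transactions))
        for a, b in combinations(items, 2):
            for x, y in ((a, b), (b, a)):
                s_xy, s_x = support({x, y}), support({x})
                if s_xy >= 0.5 and s_x and s_xy / s_x >= 0.8:
                    print(f"{x} -> {y}  support={s_xy:.2f}  confidence={s_xy / s_x:.2f}")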

  19. Toward the automated generation of genome-scale metabolic networks in the SEED.

    Science.gov (United States)

    DeJongh, Matthew; Formsma, Kevin; Boillot, Paul; Gould, John; Rycenga, Matthew; Best, Aaron

    2007-04-26

    Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis). We have implemented our tools and database within the SEED, an open-source software environment for comparative genome annotation and analysis. Our method sets the
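
    Gap identification can be pictured as a reachability computation over the reaction network: starting from seed metabolites, fire any reaction whose substrates are all producible, and whatever never becomes producible marks a candidate gap. A toy sketch of that idea (not the SEED implementation):

        # Toy reaction list: (substrates, products); metabolite names are illustrative only
        reactions = [({"glc"}, {"g6p"}),
                     ({"g6p"}, {"f6p"}),
                     ({"fbp"}, {"dhap", "g3p"})]   # the f6p -> fbp step is "missing"
        seeds = {"glc"}

        reachable, changed = set(seeds), True
        while changed:
            changed = False
            for substrates, products in reactions:
                if substrates <= reachable and not products <= reachable:
                    reachable |= products
                    changed = True

        metabolites = set().union(*(s | p for s, p in reactions))
        print("unreachable (candidate gaps):", metabolites - reachable)
        # fbp, dhap and g3p are unreachable; gap-filling would add the f6p -> fbp reaction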

  20. Toward the automated generation of genome-scale metabolic networks in the SEED

    Directory of Open Access Journals (Sweden)

    Gould John

    2007-04-01

    Full Text Available Abstract Background Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. Results We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis). We have implemented our tools and database within the SEED, an open-source software environment for comparative

  1. Neuroscience data integration through mediation: An (F)BIRN case study

    Directory of Open Access Journals (Sweden)

    Naveen Ashish

    2010-12-01

    Full Text Available We describe an application of the BIRN mediator to the integration of neuroscience experimental data sources. The BIRN mediator is a general purpose solution to the problem of providing integrated, semantically-consistent access to biomedical data from multiple, distributed, heterogeneous data sources. The system follows the mediation approach, where the data remains at the sources, providers maintain control of the data, and the integration system retrieves data from the sources in real-time in response to client queries. Our aim with this paper is to illustrate how domain-specific data integration applications can be developed quickly and in a principled way by using our general mediation technology. We describe in detail the integration of two leading, but radically different, experimental neuroscience sources, namely, the Human Imaging Database (HID), a relational database, and the eXtensible Neuroimaging Archive Toolkit (XNAT), an XML web services system. We discuss the steps, sources of complexity, effort and time required to build such applications, as well as outline directions of ongoing and future research on biomedical data integration.

  2. The use of database management systems and artificial intelligence in automating the planning of optical navigation pictures

    Science.gov (United States)

    Davis, Robert P.; Underwood, Ian M.

    1987-01-01

    The use of database management systems (DBMS) and AI to minimize human involvement in the planning of optical navigation pictures for interplanetary space probes is discussed, with application to the Galileo mission. Parameters characterizing the desirability of candidate pictures, and the program generating them, are described. How these parameters automatically build picture records in a database, and the definition of the database structure, are then discussed. The various rules, priorities, and constraints used in selecting pictures are also described. An example is provided of an expert system, written in Prolog, for automatically performing the selection process.

  3. Toward automated interpretation of integrated information: Managing "big data" for NDE

    Science.gov (United States)

    Gregory, Elizabeth; Lesthaeghe, Tyler; Holland, Stephen

    2015-03-01

    Large scale automation of NDE processes is rapidly maturing, thanks to recent improvements in robotics and the rapid growth of computer power over the last twenty years. It is fairly straightforward to automate NDE data collection itself, but the process of NDE remains largely manual. We will discuss three threads of technological needs that must be addressed before we are able to perform automated NDE. Spatial context, the first thread, means that each NDE measurement taken is accompanied by metadata that locates the measurement with respect to the 3D physical geometry of the specimen. In this way, the geometry of the specimen acts as a database key. Data context, the second thread, means that we record why the data was taken and how it was measured in addition to the NDE data itself. We will present our software tool that helps users interact with data in context, Databrowse. Condition estimation, the third thread, is maintaining the best possible knowledge of the condition (serviceability, degradation, etc.) of an object or part. In the NDE context, we can prospectively use Bayes' Theorem to integrate the data from each new NDE measurement with prior knowledge. These tools, combined with robotic measurements and automated defect analysis, will provide the information needed to make high-level life predictions and focus NDE measurements where they are needed most.
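
    The condition-estimation thread above amounts to a Bayes update over a belief in the part's condition each time a new measurement arrives. A minimal illustration follows; the condition states and likelihood table are made up for the example and do not come from the paper.

```python
# Toy Bayesian condition update: each new NDE measurement refines a
# discrete belief over part condition via Bayes' theorem. States and
# likelihoods are illustrative placeholders.
conditions = ["serviceable", "degraded", "cracked"]
prior = {"serviceable": 0.90, "degraded": 0.08, "cracked": 0.02}

# P(measurement outcome | condition); the outcome here is a binary
# "indication present" flag from an inspection.
likelihood = {
    "serviceable": {"indication": 0.05, "no_indication": 0.95},
    "degraded":    {"indication": 0.60, "no_indication": 0.40},
    "cracked":     {"indication": 0.95, "no_indication": 0.05},
}

def update(prior, outcome):
    unnormalized = {c: prior[c] * likelihood[c][outcome] for c in prior}
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

posterior = update(prior, "indication")
print(posterior)  # belief mass shifts toward "degraded"/"cracked"
```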

  4. LHCb Conditions database operation assistance systems

    International Nuclear Information System (INIS)

    Clemencic, M; Shapoval, I; Cattaneo, M; Degaudenzi, H; Santinelli, R

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. The second is an automated distribution system for the SQLite-based CondDB, which also provides smart backup and checkout mechanisms for the CondDB managers and LHCb users, respectively. The third is a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The third has been fully designed and is currently moving into the implementation stage.

  5. AIRSAR Automated Web-based Data Processing and Distribution System

    Science.gov (United States)

    Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen

    2005-01-01

    In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.

  6. Mediation analysis allowing for exposure-mediator interactions and causal interpretation: theoretical assumptions and implementation with SAS and SPSS macros.

    Science.gov (United States)

    Valeri, Linda; Vanderweele, Tyler J

    2013-06-01

    Mediation analysis is a useful and widely employed approach to studies in the field of psychology and in the social and biomedical sciences. The contributions of this article are several-fold. First, we seek to bring the developments in mediation analysis for nonlinear models within the counterfactual framework to the psychology audience in an accessible format and compare the sorts of inferences about mediation that are possible in the presence of exposure-mediator interaction when using a counterfactual versus the standard statistical approach. Second, the work by VanderWeele and Vansteelandt (2009, 2010) is extended here to allow for dichotomous mediators and count outcomes. Third, we provide SAS and SPSS macros to implement all of these mediation analysis techniques automatically, and we compare the types of inferences about mediation that are allowed by a variety of software macros. (PsycINFO Database Record © 2013 APA, all rights reserved.)
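
    For readers who want the counterfactual quantities the abstract alludes to, the standard regression formulation with an exposure-mediator interaction yields closed-form natural direct and indirect effects. The following is a sketch of that textbook decomposition for a continuous outcome and mediator; the symbols are generic regression coefficients, not notation taken from the article itself.

```latex
% Mediator and outcome regressions, with interaction term \theta_3:
\begin{align*}
E[M \mid a, c]    &= \beta_0 + \beta_1 a + \beta_2' c \\
E[Y \mid a, m, c] &= \theta_0 + \theta_1 a + \theta_2 m + \theta_3 a m + \theta_4' c
\end{align*}
% Natural direct and indirect effects for a change in exposure a^* \to a:
\begin{align*}
\mathrm{NDE} &= \bigl(\theta_1 + \theta_3(\beta_0 + \beta_1 a^* + \beta_2' c)\bigr)(a - a^*) \\
\mathrm{NIE} &= (\theta_2 \beta_1 + \theta_3 \beta_1 a)(a - a^*)
\end{align*}
% With \theta_3 = 0 these collapse to the familiar product-of-coefficients
% estimate, which is why ignoring exposure-mediator interaction can bias
% mediation inferences.
```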

  7. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database. General information: database name, SKIP Stemcell Database; database classification, Human Genes and Diseases / Stemcell; organism, Homo sapiens (Taxonomy ID: 9606); database maintenance site, Center for Medical Genetics, School of Medicine, Keio University; contact address, http://www.skip.med.keio.ac.jp/en/contact/; Web services, not available.

  8. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database. General information: database name, Arabidopsis Phenome Database; database maintenance site, RIKEN BioResource Center (Hiroshi Masuya); database classification, plant databases - Arabidopsis thaliana; organism, Arabidopsis thaliana (Taxonomy ID: 3702). The Arabidopsis Phenome Database integrates two novel databases covering the Arabidopsis thaliana phenome and its effective application: one offering materials useful for experimental research, and the other the "Database of Curated Plant Phenome".

  9. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    Science.gov (United States)

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed, and at a substantially reduced cost per nucleotide, hence per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges in verifying the specific supporting data contained within the download and in identifying errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of the supporting data that accompany genomes downloaded from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curating local genomic databases by flagging inconsistencies or errors, comparing the downloaded supporting data to the genome reports to verify genome names, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genome reports or if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation of large-scale genome data prior to downstream analyses.
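
    The flag-by-comparison idea generalizes well beyond Excel. A minimal sketch under assumed field names (these are illustrative, not the tool's actual nine metadata fields):

```python
# Sketch of metadata cross-checking: downloaded supporting data are
# compared field-by-field against the genome report; mismatches and
# missing values are flagged for manual curation.
FIELDS = ["genome_name", "refseq_accession", "bioproject_uid"]

def flag_inconsistencies(downloaded: dict, report: dict) -> list[str]:
    flags = []
    for field in FIELDS:
        dl, rep = downloaded.get(field), report.get(field)
        if dl is None or rep is None:
            flags.append(f"{field}: missing value")
        elif dl != rep:
            flags.append(f"{field}: '{dl}' != report '{rep}'")
    return flags

downloaded = {"genome_name": "Bacillus subtilis 168",
              "refseq_accession": "NC_000964.3",
              "bioproject_uid": None}
report = {"genome_name": "Bacillus subtilis subsp. subtilis str. 168",
          "refseq_accession": "NC_000964.3",
          "bioproject_uid": "PRJNA128"}
for flag in flag_inconsistencies(downloaded, report):
    print(flag)
```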

  10. Early detection of pharmacovigilance signals with automated methods based on false discovery rates: a comparative study.

    Science.gov (United States)

    Ahmed, Ismaïl; Thiessard, Frantz; Miremont-Salamé, Ghada; Haramburu, Françoise; Kreft-Jais, Carmen; Bégaud, Bernard; Tubert-Bitter, Pascale

    2012-06-01

    Improving the detection of drug safety signals has led several pharmacovigilance regulatory agencies to incorporate automated quantitative methods into their spontaneous reporting management systems. The three largest worldwide pharmacovigilance databases are routinely screened by the lower bound of the 95% confidence interval of the proportional reporting ratio (PRR₀₂.₅), the 2.5% quantile of the Information Component (IC₀₂.₅) or the 5% quantile of the Gamma Poisson Shrinker (GPS₀₅). More recently, Bayesian and non-Bayesian False Discovery Rate (FDR)-based methods were proposed that address the arbitrariness of thresholds and allow for a built-in estimate of the FDR. These methods were also shown through simulation studies to be interesting alternatives to the currently used methods. The objective of this work was twofold. First, based on an extensive retrospective study, we compared PRR₀₂.₅, GPS₀₅ and IC₀₂.₅ with two FDR-based methods derived from Fisher's exact test and the GPS model (GPS(pH0) [posterior probability of the null hypothesis H₀ calculated from the Gamma Poisson Shrinker model]). Second, restricting the analysis to GPS(pH0), we aimed to evaluate the added value of using automated signal detection tools compared with 'traditional' methods, i.e. non-automated surveillance operated by pharmacovigilance experts. The analysis was performed sequentially, i.e. every month, and retrospectively on the whole French pharmacovigilance database over the period 1 January 1996-1 July 2002. Evaluation was based on a list of 243 reference signals (RSs) corresponding to investigations launched by the French Pharmacovigilance Technical Committee (PhVTC) during the same period. The comparison of detection methods was made on the basis of the number of RSs detected as well as the time to detection. Results comparing the five automated quantitative methods were in favour of GPS(pH0) in terms of both number of detections of true signals and
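
    A non-Bayesian FDR-based screen of the kind compared above can be sketched in a few lines: each drug-event pair gets a Fisher exact test on its 2x2 disproportionality table, and Benjamini-Hochberg keeps the false discovery rate below a target level. The counts below are invented for illustration and the scoring is far simpler than the GPS(pH0) method of the paper.

```python
# Fisher exact test per drug-event pair + Benjamini-Hochberg FDR control.
from scipy.stats import fisher_exact

# (drug, event): [a, b, c, d] where a = reports with drug and event,
# b = drug without event, c = event without drug, d = neither.
tables = {
    ("drugA", "rash"):   [40, 960, 200, 98800],
    ("drugB", "nausea"): [5, 995, 600, 98400],
}

pvals = {}
for pair, (a, b, c, d) in tables.items():
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    pvals[pair] = p

def benjamini_hochberg(pvals: dict, q: float = 0.05) -> set:
    """Return the drug-event pairs kept at FDR level q."""
    ranked = sorted(pvals.items(), key=lambda kv: kv[1])
    m = len(ranked)
    cutoff = 0
    for i, (_, p) in enumerate(ranked, start=1):
        if p <= q * i / m:
            cutoff = i
    return {pair for pair, _ in ranked[:cutoff]}

print(benjamini_hochberg(pvals))  # flags ("drugA", "rash") only
```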

  11. Knowledge base technology for CT-DIMS: Report 1. [CT-DIMS (Cutting Tool - Database and Information Management System)

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, E.E.

    1993-05-01

    This report discusses progress on the Cutting Tool-Database and Information Management System (CT-DIMS) project being conducted by the University of Illinois Urbana-Champaign (UIUC) under contract to the Department of Energy. This project was initiated in October 1991 by UIUC. The Knowledge-Based Engineering Systems Research Laboratory (KBESRL) at UIUC is developing knowledge base technology and prototype software for the presentation and manipulation of the cutting tool databases at Allied-Signal Inc., Kansas City Division (KCD). The graphical tool selection capability being developed for CT-DIMS in the Intelligent Design Environment for Engineering Automation (IDEEA) will provide a concurrent environment for simultaneous access to tool databases, tool standard libraries, and cutting tool knowledge.

  12. ANALYTICAL ISSUES OF RISK COMMUNICATION. RATIONALE FOR APPROACHES TO DEVELOPING RESEARCH DATABASES ON RADIATION SAFETY AND SOCIAL RISKS

    Directory of Open Access Journals (Sweden)

    L. S. Rekhtina

    2017-01-01

    Full Text Available One of the important stages of risk communication is the analysis of publications in traditional media and on the Internet, which largely shape people's attitudes to various issues. At the same time, the sheer volume of information relating to any subject area makes manual analysis and an adequate description of all of the information impractical. On the other hand, this availability of information makes it urgent to develop methods that improve the effectiveness of its analysis. One way to automate the analysis of large amounts of information is to develop databases or automated information systems that contain information materials on the subject matter under study and allow automated processing. The objective of this work is to analyze the experience of developing such systems and databases by research teams of the St. Petersburg Institute of Radiation Hygiene and St. Petersburg State University, and to identify the key features of using databases for social research. The analysis, performed using the method of autoethnographic research, showed that the methodological approaches used were very close. Applying a comparative-analysis strategy makes it possible to identify common features of the development and implementation of databases in the practice of risk communication studies. The article discusses the associated features and limitations of the primary data, such as the textual, discursive nature of most of the materials, information noise, high dependence on context, variability, and the differing structure, format and appearance of materials. Important parameters for solving problems of qualitative and quantitative analysis are given in the article. An important condition for creating an information system that is effective from the point of view of socio-communication studies is to implement the processing

  13. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Full Text Available Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed. But their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews for these systems and also report our recent development of ChemReader – a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy on extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  14. Analysis of large brain MRI databases for investigating the relationships between brain, cognitive, and genetic polymorphisms

    International Nuclear Information System (INIS)

    Mazoyer, B.

    2006-01-01

    A major challenge for the years to come is the understanding of brain-behaviour relationships, and in particular the investigation and quantification of the impact of genetic polymorphism on these relationships. In this framework, a promising experimental approach, which we will refer to as neuro-epidemiologic imaging, consists in acquiring multimodal data (brain images, psychometric and sociological data, genotypes) in large (several hundreds or thousands) cohorts of subjects. Processing of such large databases requires, in the first place, the conception and implementation of automated 'pipelines', including image registration, spatial normalisation, tissue segmentation, and multivariate statistical analysis. Given the number of images and data to be processed, such pipelines must be both fully automated and robust enough to handle multi-center MRI data, e.g. data having inhomogeneous characteristics in terms of resolution and contrast. This approach will be illustrated using two databases collected in aged healthy subjects, searching for the impact of genetic and environmental factors on two markers of brain aging, namely white matter hyper-signals and grey matter atrophy. (author)

  15. Smart Health Surveillance with Automated Database Using Android Mobile Device

    Directory of Open Access Journals (Sweden)

    Anitha Karthi

    2017-08-01

    Full Text Available ABSTRACT The Smart Health Surveillance system measures and displays the electrocardiogram (ECG) and body temperature of a patient continuously and also communicates them to the doctor. The system measures the ECG using an infrared sensor and the temperatures at the oral cavity and wrist of the patient using temperature sensors. A microcontroller receives the data from the sensors, displays them, and communicates them to the web server automatically. In the existing system, the patient's vital parameters are obtained, entered into a database, and then uploaded to a web-based server manually. The existing system also has no alert signal to the surroundings or to the doctor during an abnormal condition. The proposed system includes a visualization module in the server program, which graphically displays the recorded biomedical signals on Android mobile devices used by doctors at the receiving end. It also sounds a buzzer or alarm in case of an abnormal patient condition.

  16. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    Purpose: To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools, and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB®. Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB® code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB® code to interface with PCXMC and replicate the ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk
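
    The batch-automation pattern described here, reading exam parameters from a database, driving an external dose calculator per exam, and writing results back, can be sketched generically. The executable name, its arguments, and the table layout below are hypothetical stand-ins; PCXMC's real input files and the NASA database schema are not reproduced here.

```python
# Sketch of database-driven batch dose calculation via an external tool.
import sqlite3
import subprocess

conn = sqlite3.connect("astronaut_exams.db")
exams = conn.execute(
    "SELECT exam_id, modality, kvp, mas FROM exams WHERE dose IS NULL")

for exam_id, modality, kvp, mas in exams.fetchall():
    # Drive the external calculator non-interactively, one exam at a time.
    # "dose_calc.exe" and its flags are placeholders, not PCXMC's real CLI.
    result = subprocess.run(
        ["dose_calc.exe", "--modality", modality,
         "--kvp", str(kvp), "--mas", str(mas)],
        capture_output=True, text=True, check=True)
    effective_dose_msv = float(result.stdout.strip())
    conn.execute("UPDATE exams SET dose = ? WHERE exam_id = ?",
                 (effective_dose_msv, exam_id))

conn.commit()
```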

  17. An Automated Ab Initio Framework for Identifying New Ferroelectrics

    Science.gov (United States)

    Smidt, Tess; Reyes-Lillo, Sebastian E.; Jain, Anubhav; Neaton, Jeffrey B.

    Ferroelectric materials have a wide range of technological applications, including non-volatile RAM and optoelectronics. In this work, we present an automated first-principles search for ferroelectrics. We integrate density functional theory, crystal structure databases, symmetry tools, workflow software, and a custom analysis toolkit to build a library of known and proposed ferroelectrics. We screen thousands of candidates using symmetry relations between nonpolar and polar structure pairs. We use two search strategies: 1) polar-nonpolar pairs with the same composition, and 2) polar-nonpolar structure type pairs. Results are automatically parsed, stored in a database, and accessible via a web interface showing distortion animations and plots of polarization and total energy as a function of distortion. We benchmark our results against experimental data, present new ferroelectric candidates found through our search, and discuss future work on expanding this search methodology to other material classes such as anti-ferroelectrics and multiferroics.
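
    Search strategy 1) can be illustrated with a toy pairing routine: group candidate structures by composition and pair polar with nonpolar entries using their point groups (the ten polar crystal classes admit a spontaneous polarization). The records below are a small invented subset, not the workflow's actual database.

```python
# Toy polar/nonpolar pairing by composition. The ten polar point groups
# are standard crystallography; the structure records are illustrative.
POLAR_POINT_GROUPS = {"1", "2", "m", "mm2", "4", "4mm",
                      "3", "3m", "6", "6mm"}

structures = [
    {"id": "s1", "composition": "BaTiO3", "point_group": "4mm"},
    {"id": "s2", "composition": "BaTiO3", "point_group": "m-3m"},
    {"id": "s3", "composition": "PbZrO3", "point_group": "mmm"},
]

def polar_nonpolar_pairs(entries):
    """Yield (polar, nonpolar) structure pairs of equal composition."""
    by_comp = {}
    for e in entries:
        by_comp.setdefault(e["composition"], []).append(e)
    for group in by_comp.values():
        polar = [e for e in group if e["point_group"] in POLAR_POINT_GROUPS]
        nonpolar = [e for e in group
                    if e["point_group"] not in POLAR_POINT_GROUPS]
        for p in polar:
            for n in nonpolar:
                yield p, n

for p, n in polar_nonpolar_pairs(structures):
    print(p["id"], "(polar) <->", n["id"], "(nonpolar)")
```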

  18. DMPD: Signalling pathways mediating type I interferon gene expression. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available Signalling pathways mediating type I interferon gene expression. Edwards MR, Slater L, Johnston SL. Microbes Infect. 2007 Sep;9(11):1245-51. Epub 2007 Jul 1. PubmedID: 17904888. (.png) (.svg) (.html) (.csml)

  19. DMPD: Modulation of Toll-interleukin 1 receptor mediated signaling. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available Modulation of Toll-interleukin 1 receptor mediated signaling. Li X, Qin J. J Mol Med. 2005 Apr;83(4):258-66. Epub 2005 Jan 21. PubmedID: 15662540. (.png) (.svg) (.html) (.csml)

  20. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  1. Lean automation development : applying lean principles to the automation development process

    OpenAIRE

    Granlund, Anna; Wiktorsson, Magnus; Grahn, Sten; Friedler, Niklas

    2014-01-01

    A broad empirical study indicates that automation development shows potential for improvement. In the paper, 13 lean product development principles are contrasted with the automation development process, and it is suggested why and how these principles can facilitate, support and improve the automation development process. The paper summarises what characterises a lean automation development process and what consequences it entails. Main differences compared to current pr...

  2. Automated system for individual control for admission of staff in the controlled zones of the Kozloduy NPP

    International Nuclear Information System (INIS)

    Avramski, D.; Jordanova, V.

    2008-01-01

    The development of the system provides the following: automated registration of the staff in the personnel database; real-time reading of the personal dosimeters; real-time reading of the collective dose; and real-time control of working personnel (especially external workers) with respect to dose limits or restrictions

  3. Automated mapping of persistent ice and snow cover across the western U.S. with Landsat

    Science.gov (United States)

    Selkowitz, David J.; Forster, Richard R.

    2016-01-01

    We implemented an automated approach for mapping persistent ice and snow cover (PISC) across the conterminous western U.S. using all available Landsat TM and ETM+ scenes acquired during the late summer/early fall period between 2010 and 2014. Two separate validation approaches indicate this dataset provides a more accurate representation of glacial ice and perennial snow cover for the region than either the U.S. glacier database derived from US Geological Survey (USGS) Digital Raster Graphics (DRG) maps (based on aerial photography primarily from the 1960s–1980s) or the National Land Cover Database 2011 perennial ice and snow cover class. Our 2010–2014 Landsat-derived dataset indicates 28% less glacier and perennial snow cover than the USGS DRG dataset. There are larger differences between the datasets in some regions, such as the Rocky Mountains of Northwest Wyoming and Southwest Montana, where the Landsat dataset indicates 54% less PISC area. Analysis of Landsat scenes from 1987–1988 and 2008–2010 for three regions using a more conventional, semi-automated approach indicates substantial decreases in glaciers and perennial snow cover that correlate with differences between PISC mapped by the USGS DRG dataset and the automated Landsat-derived dataset. This suggests that most of the differences in PISC between the USGS DRG and the Landsat-derived dataset can be attributed to decreases in PISC, as opposed to differences between mapping techniques. While the dataset produced by the automated Landsat mapping approach is not designed to serve as a conventional glacier inventory that provides glacier outlines and attribute information, it allows for an updated estimate of PISC for the conterminous U.S. as well as for smaller regions. Additionally, the new dataset highlights areas where decreases in PISC have been most significant over the past 25–50 years.

  4. The Orion GN and C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery

    Science.gov (United States)

    King, Ellis; Hart, Jeremy; Odegard, Ryan

    2010-01-01

    The Orion Crew Exploration Vehicle (CEV) is being designed to include significantly more automation capability than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. A data-driven flight software architecture will provide an evolvable automation capability to sequence through Guidance, Navigation & Control (GN&C) flight software modes and configurations while maintaining the required flexibility and human control over the automation. This flexibility is a key aspect needed to address the maturation of operational concepts, to permit ground and crew operators to gain trust in the system, and to mitigate unpredictability in human spaceflight. To allow for mission flexibility and reconfigurability, a data-driven approach is being taken to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. A database of GN&C-level sequencing data is presented which manages and tracks the mission-specific and algorithm parameters to provide a capability to schedule GN&C events within mission segments. The flight software data schema for performing automated mission sequencing is presented with a concept of operations for interactions with ground and onboard crew members. A prototype architecture for fault identification, isolation and recovery interactions with the automation software is presented and discussed as a forward work item.

  5. Automated delineation of stroke lesions using brain CT images

    Directory of Open Access Journals (Sweden)

    Céline R. Gillebert

    2014-01-01

    Full Text Available Computed tomographic (CT) images are widely used for the identification of abnormal brain tissue following infarct and hemorrhage in stroke. Manual lesion delineation is currently the standard approach, but is both time-consuming and operator-dependent. To address these issues, we present a method that can automatically delineate infarct and hemorrhage in stroke CT images. The key elements of this method are the accurate normalization of CT images from stroke patients into template space and the subsequent voxelwise comparison with a group of control CT images for defining areas with hypo- or hyper-intense signals. Our validation, using simulated and actual lesions, shows that our approach is effective in reconstructing lesions resulting from both infarct and hemorrhage and yields lesion maps spatially consistent with those produced manually by expert operators. A limitation is that, relative to manual delineation, there is reduced sensitivity of the automated method in regions close to the ventricles and the brain contours. However, the automated method presents a number of benefits in terms of offering significant time savings and the elimination of the inter-operator differences inherent to manual tracing approaches. These factors are relevant for the creation of large-scale lesion databases for neuropsychological research. The automated delineation of stroke lesions from CT scans may also enable longitudinal studies to quantify changes in damaged tissue in an objective and reproducible manner.
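
    The voxelwise comparison step lends itself to a compact sketch: after normalizing patient and control volumes to a common template space, voxels whose intensity deviates strongly from the control distribution are marked as candidate lesion tissue. The arrays and the z-threshold below are synthetic and illustrative, not the paper's parameters.

```python
# Toy voxelwise lesion candidate map: z-score each patient voxel
# against a control distribution in template space, then threshold.
import numpy as np

rng = np.random.default_rng(0)
controls = rng.normal(35.0, 3.0, size=(20, 64, 64, 32))  # 20 control CTs
patient = rng.normal(35.0, 3.0, size=(64, 64, 32))
patient[20:30, 20:30, 10:15] = 20.0  # simulated hypodense infarct

mu = controls.mean(axis=0)
sigma = controls.std(axis=0) + 1e-6  # avoid division by zero

z = (patient - mu) / sigma
lesion_mask = np.abs(z) > 3.0  # |z| threshold; tune on validation data

print("candidate lesion voxels:", int(lesion_mask.sum()))
```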

  6. Thoughts toward a clinical database of architecture: evidence, complexity, and impact

    Directory of Open Access Journals (Sweden)

    Leonard R. Bachman

    2012-10-01

    Full Text Available This paper examines how architecture is building a clinical database similar to that of law and medicine and is developing this database for the purposes of acquiring complex design insight. This emerging clinical branch of architectural knowledge exceeds the scope of everyday experience of physical form and can thus be shown to enable a more satisfying scale of design thinking. It is argued that significant transformational kinds of professional transparency and accountability are thus intensifying. The tactics and methods of this paper are to connect previously disparate historical and contemporary events that mark the evolution of this database and then to fold those events into an explanatory narrative concerning clinical design practice. Beginning with architecture's use of precedent (Collins 1971), the formulation of design as complex problems (Rittel and Webber 1973), high performance buildings to meet the crisis of climate change, social mandates of postindustrial society (Bell 1973), and other roots of evidence, the paper then elaborates the themes in which this database is evolving. Such themes include post-occupancy evaluation (Bordass and Leaman 2005), continuous commissioning, performance simulation, digital instrumentation, automation, and other modes of data collection in buildings. Finally, the paper concludes with some anticipated impacts that such a clinical database might have on design practice and how their benefits can be achieved through new interdisciplinary relations between academia and practice.

  7. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  8. Automated Identification of Diabetic Retinopathy Using Deep Learning.

    Science.gov (United States)

    Gargeya, Rishab; Leng, Theodore

    2017-07-01

    Diabetic retinopathy (DR) is one of the leading causes of preventable blindness globally. Performing retinal screening examinations on all diabetic patients is an unmet need, and there are many undiagnosed and untreated cases of DR. The objective of this study was to develop robust diagnostic technology to automate DR screening. Referral of eyes with DR to an ophthalmologist for further evaluation and treatment would aid in reducing the rate of vision loss, enabling timely and accurate diagnoses. We developed and evaluated a data-driven deep learning algorithm as a novel diagnostic tool for automated DR detection. The algorithm processed color fundus images and classified them as healthy (no retinopathy) or having DR, identifying relevant cases for medical referral. A total of 75,137 publicly available fundus images from diabetic patients were used to train and test an artificial intelligence model to differentiate healthy fundi from those with DR. A panel of retinal specialists determined the ground truth for our data set before experimentation. We also tested our model using the public MESSIDOR 2 and E-Ophtha databases for external validation. Information learned in our automated method was visualized readily through an automatically generated abnormality heatmap, highlighting subregions within each input fundus image for further clinical review. We used area under the receiver operating characteristic curve (AUC) as a metric to measure the precision-recall trade-off of our algorithm, reporting associated sensitivity and specificity metrics on the receiver operating characteristic curve. Our model achieved a 0.97 AUC with a 94% and 98% sensitivity and specificity, respectively, on 5-fold cross-validation using our local data set. Testing against the independent MESSIDOR 2 and E-Ophtha databases achieved a 0.94 and 0.95 AUC score, respectively. A fully data-driven artificial intelligence-based grading algorithm can be used to screen fundus photographs obtained
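
    The evaluation metric reported here, AUC with sensitivity and specificity at an operating point, is straightforward to reproduce on any classifier's outputs. A sketch with synthetic scores and labels (not the study's data or model):

```python
# AUC plus sensitivity/specificity at a chosen ROC operating point.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)              # 1 = DR present
scores = np.clip(labels * 0.6 + rng.normal(0.2, 0.25, 1000), 0, 1)

print("AUC:", roc_auc_score(labels, scores))

fpr, tpr, thresholds = roc_curve(labels, scores)
i = np.argmax(tpr - fpr)  # Youden's J as one way to pick a threshold
print(f"threshold={thresholds[i]:.2f} "
      f"sensitivity={tpr[i]:.2f} specificity={1 - fpr[i]:.2f}")
```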

  9. An automated procedure for the assessment of white matter hyperintensities by multispectral (T1, T2, PD) MRI and an evaluation of its between-centre reproducibility based on two large community databases

    International Nuclear Information System (INIS)

    Maillard, Pauline; Delcroix, Nicolas; Crivello, Fabrice; Gicquel, Sebastien; Joliot, Marc; Tzourio-Mazoyer, Nathalie; Dufouil, Carole; Alperovitch, Annick; Tzourio, Christophe; Mazoyer, Bernard

    2008-01-01

    An automated procedure for the detection, quantification, localization and statistical mapping of white matter hyperintensities (WMH) on T2-weighted magnetic resonance (MR) images is presented and validated based on the results of a between-centre reproducibility study. The first step is the identification of white matter (WM) tissue using a multispectral (T1, T2, PD) segmentation. In a second step, WMH are identified within the WM tissue by segmenting T2 images, isolating two different classes of WMH voxels - low- and high-contrast WMH voxels, respectively. The reliability of the whole procedure was assessed by applying it to the analysis of two large MR imaging databases (n = 650 and n = 710, respectively) of healthy elderly subjects matched for demographic characteristics. Average overall WMH load and spatial distribution were found to be similar in the two samples (1.81 and 1.79% of the WM volume, respectively). White matter hyperintensity load was found to be significantly associated with both age and high blood pressure, with similar effects in both samples. With specific reference to the 650 subject cohort, we also found that WMH load provided by this automated procedure was significantly associated with visual grading of the severity of WMH, as assessed by a trained neurologist. The results show that this method is sensitive, well correlated with semi-quantitative visual rating and highly reproducible. (orig.)

  10. An automated procedure for the assessment of white matter hyperintensities by multispectral (T1, T2, PD) MRI and an evaluation of its between-centre reproducibility based on two large community databases

    Energy Technology Data Exchange (ETDEWEB)

    Maillard, Pauline; Delcroix, Nicolas; Crivello, Fabrice; Gicquel, Sebastien; Joliot, Marc; Tzourio-Mazoyer, Nathalie [GIP Cyceron, Centre d' Imagerie-Neurosciences et Applications aux Pathologies, CI-NAPS, CNRS, CEA, Universite de Caen/Universite Paris Descartes, Boulevard Becquerel, BP 5229, Caen (France); Dufouil, Carole; Alperovitch, Annick; Tzourio, Christophe [Universite Pierre et Marie Curie, INSERM U708, Neuroepidemiologie, Paris (France); Mazoyer, Bernard [GIP Cyceron, Centre d' Imagerie-Neurosciences et Applications aux Pathologies, CI-NAPS, CNRS, CEA, Universite de Caen/Universite Paris Descartes, Boulevard Becquerel, BP 5229, Caen (France); Institut Universitaire de France, Paris (France); CHU du Caen, Unite IRM, Caen (France)

    2008-01-15

    An automated procedure for the detection, quantification, localization and statistical mapping of white matter hyperintensities (WMH) on T2-weighted magnetic resonance (MR) images is presented and validated based on the results of a between-centre reproducibility study. The first step is the identification of white matter (WM) tissue using a multispectral (T1, T2, PD) segmentation. In a second step, WMH are identified within the WM tissue by segmenting T2 images, isolating two different classes of WMH voxels - low- and high-contrast WMH voxels, respectively. The reliability of the whole procedure was assessed by applying it to the analysis of two large MR imaging databases (n = 650 and n = 710, respectively) of healthy elderly subjects matched for demographic characteristics. Average overall WMH load and spatial distribution were found to be similar in the two samples (1.81 and 1.79% of the WM volume, respectively). White matter hyperintensity load was found to be significantly associated with both age and high blood pressure, with similar effects in both samples. With specific reference to the 650 subject cohort, we also found that WMH load provided by this automated procedure was significantly associated with visual grading of the severity of WMH, as assessed by a trained neurologist. The results show that this method is sensitive, well correlated with semi-quantitative visual rating and highly reproducible. (orig.)

  11. Automation of registration of sample weights for high-volume neutron activation analysis at the IBR-2 reactor of FLNP, JINR

    International Nuclear Information System (INIS)

    Dmitriev, A.Yu.; Dmitriev, F.A.

    2015-01-01

    The 'Weight' software tool was created at FLNP JINR to automate the reading of analytical balance readouts and the saving of these values in the NAA database. An analytical balance connected to a personal computer is used to measure weight values. The 'Weight' software tool controls the reading of weight values and the exchange of information with the NAA database. The weighing process for a large number of samples is reliably supported during high-volume neutron activation analysis.
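
    An acquisition loop of this kind, reading weights from a balance over a serial line and storing them in a database, can be sketched briefly. The port name, balance command, response format and table layout below are assumptions for illustration; the actual NAA database schema and balance protocol are not reproduced here.

```python
# Sketch of a balance-to-database acquisition loop (pyserial + sqlite3).
import sqlite3
import serial  # pyserial

conn = sqlite3.connect("naa.db")
conn.execute("""CREATE TABLE IF NOT EXISTS sample_weights (
                    sample_id TEXT, weight_g REAL,
                    recorded_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

balance = serial.Serial("COM1", baudrate=9600, timeout=2)

def record_weight(sample_id: str) -> float:
    balance.write(b"P\r\n")          # hypothetical 'print weight' command
    line = balance.readline().decode().strip()
    weight = float(line.split()[0])  # e.g. response "0.10234 g"
    conn.execute(
        "INSERT INTO sample_weights (sample_id, weight_g) VALUES (?, ?)",
        (sample_id, weight))
    conn.commit()
    return weight

print(record_weight("IBR2-2015-0042"))
```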

  12. Automated analysis for large amount gaseous fission product gamma-scanning spectra from nuclear power plant and its data mining

    International Nuclear Information System (INIS)

    Weihua Zhang; Kurt Ungar; Ian Hoffman; Ryan Lawrie; Jarmo Ala-Heikkila

    2010-01-01

    Based on the Linssi database and UniSampo/Shaman software, an automated analysis platform has been set up for the analysis of large amounts of gamma-spectra from the primary coolant monitoring systems of a CANDU reactor. Thus, a database inventory of gaseous and volatile fission products in the primary coolant of a CANDU reactor has been established. This database comprises 15,000 spectra of radioisotope analysis records. Records from the database inventory were retrieved by a specifically designed data-mining module and subjected to further analysis. Results from the analysis were subsequently used to identify the reactor coolant half-lives of 135Xe and 133Xe, as well as the correlations of 135Xe and 88Kr activities. (author)
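
    One analysis that such mined records enable is an effective half-life fit to a time series of activities retrieved from the spectrum database. A sketch with synthetic data points (the true half-life of 135Xe, 9.14 h, is used to generate them):

```python
# Exponential-decay fit of an effective coolant half-life.
import numpy as np
from scipy.optimize import curve_fit

t_hours = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
activity = 1000.0 * np.exp(-np.log(2) * t_hours / 9.14)  # true T1/2 = 9.14 h
activity *= 1 + np.random.default_rng(2).normal(0, 0.02, t_hours.size)

def decay(t, a0, half_life):
    return a0 * np.exp(-np.log(2) * t / half_life)

popt, _ = curve_fit(decay, t_hours, activity, p0=(900.0, 8.0))
print(f"fitted A0 = {popt[0]:.0f}, T1/2 = {popt[1]:.2f} h")
```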

  13. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals

    Science.gov (United States)

    Uy, Raymonde Charles Y.; Kury, Fabricio P.; Fontelo, Paul A.

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometric and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating an optimistic growth in the adoption of these patient safety solutions. PMID:26958264

  14. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals.

    Science.gov (United States)

    Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometric and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating an optimistic growth in the adoption of these patient safety solutions.

  15. An automated dose tracking system for adaptive radiation therapy.

    Science.gov (United States)

    Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J

    2018-02-01

    The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Data from patient images were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was considered an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, the daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient

  16. Image-based query-by-example for big databases of galaxy images

    Science.gov (United States)

    Shamir, Lior; Kuminski, Evan

    2017-01-01

    Very large astronomical databases containing millions or even billions of galaxy images have been becoming increasingly important tools in astronomy research. However, in many cases the very large size makes it more difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to manually search these databases to identify such galaxies. Here we describe computer vision and pattern recognition methodology that receives a galaxy image as an input, and searches automatically a large dataset of galaxies to return a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset, in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
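
    The query-by-example idea reduces to a ranking problem once each image is represented by a numeric feature vector. A toy sketch follows; the feature extraction is stubbed out with random vectors, whereas in practice these would be morphology descriptors computed per galaxy image.

```python
# Toy query-by-example: rank a feature database by distance to a query.
import numpy as np

rng = np.random.default_rng(3)
database_features = rng.normal(size=(10000, 64))   # 10k galaxies, 64-dim
query_features = rng.normal(size=64)

def rank_by_similarity(query, database, top_k=100):
    """Return indices of the top_k most similar items (Euclidean)."""
    distances = np.linalg.norm(database - query, axis=1)
    return np.argsort(distances)[:top_k]

candidates = rank_by_similarity(query_features, database_features)
print("inspect these", len(candidates), "images first:", candidates[:10])
```

    As the abstract notes, the returned list is a candidate set rather than a clean answer; its value is the reduction of the original database to a small subset enriched in visually similar objects.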

  17. Automated microaneurysm detection in diabetic retinopathy using curvelet transform

    Science.gov (United States)

    Ali Shah, Syed Ayaz; Laude, Augustinus; Faye, Ibrahima; Tang, Tong Boon

    2016-10-01

    Microaneurysms (MAs) are known to be the early signs of diabetic retinopathy (DR). An automated MA detection system based on the curvelet transform is proposed for color fundus image analysis. Candidates of MA were extracted in two parallel steps. In step one, blood vessels were removed from the preprocessed green band image and preliminary MA candidates were selected by a local thresholding technique. In step two, based on statistical features, the image background was estimated. The results from the two steps allowed us to identify preliminary MA candidates which were also present in the image foreground. A collection of features was fed to a rule-based classifier to divide the candidates into MAs and non-MAs. The proposed system was tested with the Retinopathy Online Challenge database. The automated system detected 162 MAs out of 336, thus achieving a sensitivity of 48.21% with 65 false positives per image. Counting MAs is a means of measuring the progression of DR. Hence, the proposed system may be deployed to monitor the progression of DR at an early stage in population studies.

  18. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  19. MIDAS: a database-searching algorithm for metabolite identification in metabolomics.

    Science.gov (United States)

    Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle

    2014-10-07

    A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
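
    The core matching step of such database searching, comparing a measured MS/MS peak list to a metabolite's predicted fragment masses within a mass tolerance, can be illustrated compactly. The peaks, fragment masses, and the intensity-fraction scoring rule below are invented for the example; MIDAS's actual fragmentation-pathway scoring is more elaborate.

```python
# Toy metabolite-spectrum match: score the fraction of observed
# intensity explained by predicted fragments within a tolerance.
TOLERANCE_DA = 0.01

observed_peaks = [(85.028, 1200.0), (103.039, 800.0), (129.055, 300.0)]
predicted_fragments = [85.0284, 103.0390, 147.0532]  # from bond dissociation

def match_score(peaks, fragments, tol=TOLERANCE_DA):
    explained = 0.0
    total = sum(intensity for _, intensity in peaks)
    for mz, intensity in peaks:
        if any(abs(mz - f) <= tol for f in fragments):
            explained += intensity
    return explained / total

score = match_score(observed_peaks, predicted_fragments)
print(f"fraction of intensity explained: {score:.2f}")  # 0.87 here
```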

  20. A new version of the RDP (Ribosomal Database Project)

    Science.gov (United States)

    Maidak, B. L.; Cole, J. R.; Parker, C. T. Jr; Garrity, G. M.; Larsen, N.; Li, B.; Lilburn, T. G.; McCaughey, M. J.; Olsen, G. J.; Overbeek, R.; hide

    1999-01-01

    The Ribosomal Database Project (RDP-II), previously described by Maidak et al. [Nucleic Acids Res. (1997), 25, 109-111], is now hosted by the Center for Microbial Ecology at Michigan State University. RDP-II is a curated database that offers ribosomal RNA (rRNA) nucleotide sequence data in aligned and unaligned forms, analysis services, and associated computer programs. During the past two years, data alignments have been updated and now include >9700 small subunit rRNA sequences. The recent development of an ObjectStore database will provide more rapid updating of data, better data accuracy and increased user access. RDP-II includes phylogenetically ordered alignments of rRNA sequences, derived phylogenetic trees, rRNA secondary structure diagrams, and various software programs for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (ftp.cme.msu.edu) and WWW (http://www.cme.msu.edu/RDP). The WWW server provides ribosomal probe checking, approximate phylogenetic placement of user-submitted sequences, screening for possible chimeric rRNA sequences, automated alignment, and a suggested placement of an unknown sequence on an existing phylogenetic tree. Additional utilities also exist at RDP-II, including distance matrix, T-RFLP, and a Java-based viewer of the phylogenetic trees that can be used to create subtrees.

  1. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  2. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the sampling period or sampling frequency. Our work describes in detail a methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization makes it possible to perform operations, such as queries and visualization, across many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible over the Internet (web pages, XML). In particular, the Loaders layer checks the working status of each running piece of acquisition software through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions; a minimal sketch of this idea follows. Although the system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the ability to query different time series over a specified time range, or to follow signal acquisition in real time, subject to a per-user data access policy.
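    A rough illustration of the heartbeat idea described above; the names and timeout are assumptions, not TSDSystem's actual API.

      # Each loader calls beat() on every acquisition cycle; a watchdog flags
      # loaders whose last beat is older than a threshold.
      import time

      HEARTBEAT_TIMEOUT = 120   # seconds without a beat before a loader is flagged
      last_beat = {}            # loader name -> epoch seconds of last heartbeat

      def beat(loader):
          """Called by each running loader on every acquisition cycle."""
          last_beat[loader] = time.time()

      def stale_loaders(now=None):
          """Return loaders that have missed their heartbeat window."""
          now = time.time() if now is None else now
          return [name for name, t in last_beat.items()
                  if now - t > HEARTBEAT_TIMEOUT]

      beat("ascii_loader")
      beat("odbc_loader")
      print(stale_loaders(now=time.time() + 300))   # both flagged after 5 minutes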

  3. Automated Data Aggregation for Time-Series Analysis: Study Case on Anaesthesia Data Warehouse.

    Science.gov (United States)

    Lamer, Antoine; Jeanne, Mathieu; Ficheur, Grégoire; Marcilly, Romaric

    2016-01-01

    Data stored in operational databases are not directly reusable. Aggregation modules are necessary to facilitate secondary use: they decrease the volume of data while increasing the amount of usable information. In this paper, we present four automated aggregation engines integrated into an anaesthesia data warehouse. Four instances of clinical questions illustrate the use of these engines for various improvements in quality of care: duration of procedure, drug administration, and assessment of hypotension and its related treatment.
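    The paper's aggregation engines are not reproduced in this record; the pandas sketch below shows the general idea for one of the four cited questions, deriving procedure duration from raw start/end event rows. Column names and data are invented.

      import pandas as pd

      # Invented example rows: raw start/end events for two procedures.
      events = pd.DataFrame({
          "procedure_id": [1, 1, 2, 2],
          "event": ["start", "end", "start", "end"],
          "timestamp": pd.to_datetime(["2016-01-01 08:00", "2016-01-01 09:30",
                                       "2016-01-01 10:00", "2016-01-01 10:45"]),
      })

      # Reduce many low-level rows to one reusable fact per procedure.
      durations = (events.pivot(index="procedure_id", columns="event",
                                values="timestamp")
                         .assign(duration=lambda d: d["end"] - d["start"]))
      print(durations["duration"])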

  4. Automated Quality Control of in Situ Soil Moisture from the North American Soil Moisture Database Using NLDAS-2 Products

    Science.gov (United States)

    Ek, M. B.; Xia, Y.; Ford, T.; Wu, Y.; Quiring, S. M.

    2015-12-01

    The North American Soil Moisture Database (NASMD) was initiated in 2011 to provide support for developing climate forecasting tools, calibrating land surface models and validating satellite-derived soil moisture algorithms. The NASMD has collected data from over 30 soil moisture observation networks providing millions of in situ soil moisture observations in all 50 states as well as Canada and Mexico. It is recognized that the quality of measured soil moisture in NASMD is highly variable due to the diversity of climatological conditions, land cover, soil texture, and topographies of the stations and differences in measurement devices (e.g., sensors) and installation. It is also recognized that error, inaccuracy and imprecision in the data set can have significant impacts on practical operations and scientific studies. Therefore, developing an appropriate quality control procedure is essential to ensure the data is of the best quality. In this study, an automated quality control approach is developed using the North American Land Data Assimilation System phase 2 (NLDAS-2) Noah soil porosity, soil temperature, and fraction of liquid and total soil moisture to flag erroneous and/or spurious measurements. Overall results show that this approach is able to flag unreasonable values when the soil is partially frozen. A validation example using NLDAS-2 multiple model soil moisture products at the 20 cm soil layer showed that the quality control procedure had a significant positive impact in Alabama, North Carolina, and West Texas. It had a greater impact in colder regions, particularly during spring and autumn. Over 433 NASMD stations have been quality controlled using the methodology proposed in this study, and the algorithm will be implemented to control data quality from the other ~1,200 NASMD stations in the near future.
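    The published algorithm is not reproduced in this record; the sketch below illustrates the kind of rule described, flagging an in situ observation against matched NLDAS-2 porosity, soil temperature and liquid-fraction values. Thresholds and field names are illustrative assumptions, not the published ones.

      def flag_observation(theta, porosity, soil_temp_k, liquid_fraction):
          """Return True if an in situ soil moisture value should be flagged.

          theta is the measured volumetric soil moisture; the NLDAS-2 fields
          are assumed to be already matched to the observation.
          """
          if not 0.0 <= theta <= porosity:   # physically implausible reading
              return True
          if soil_temp_k < 273.15:           # soil at least partially frozen
              return True
          if liquid_fraction < 0.99:         # sensors read liquid water only
              return True
          return False

      print(flag_observation(theta=0.35, porosity=0.45,
                             soil_temp_k=272.0, liquid_fraction=0.6))   # True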

  5. Conceptual Model of an Application for Automated Generation of Webpage Mobile Versions

    Directory of Open Access Journals (Sweden)

    Todor Rachovski

    2017-11-01

    Full Text Available Accessing webpages through various types of mobile devices with different screen sizes and using different browsers has put new demands on web developers. The main challenge is the development of websites with responsive design that adapts to the mobile device used. The article presents a conceptual model of an application for automated generation of mobile pages. It has a five-layer architecture: a database, a database management layer, a business logic layer, a web services layer and a presentation layer. The database stores all the data needed to run the application. The database management layer uses an ORM model to convert relational data into an object-oriented format and to control access to them. The business logic layer contains components that perform the actual work of building a mobile version of the page, including parsing, building a hierarchical model of the page and a number of transformations. The web services layer provides external applications with access to lower-level functionalities, and the presentation layer is responsible for choosing and using the appropriate CSS. A web application that uses the proposed model was developed and experiments were conducted.

  6. 78 FR 53466 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-08-29

    ... Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image... National Customs Automation Program (NCAP) tests concerning document imaging, known as the Document Image... the National Customs Automation Program (NCAP) tests concerning document imaging, known as the...

  7. Quebec Trophoblastic Disease Registry: how to make an easy-to-use dynamic database.

    Science.gov (United States)

    Sauthier, Philippe; Breguet, Magali; Rozenholc, Alexandre; Sauthier, Michaël

    2015-05-01

    To create an easy-to-use dynamic database designed specifically for the Quebec Trophoblastic Disease Registry (RMTQ). It is now well established that much of the success in managing trophoblastic diseases comes from the development of national and regional reference centers. Computerized databases allow the optimal use of data stored in these centers. We have created an electronic data registration system by producing a database using FileMaker Pro 12. It uses 11 external tables associated with a unique identification number for each patient. Each table allows specific data to be recorded, incorporating demographics, diagnosis, automated staging, laboratory values, pathological diagnosis, and imaging parameters. From January 1, 2009, to December 31, 2013, we used our database to register 311 patients with 380 diseases and have seen a 39.2% increase in registrations each year between 2009 and 2012. This database allows the automatic generation of semilogarithmic curves of β-hCG values as a function of time, complete with graphic markers for applied treatments (chemotherapy, radiotherapy, or surgery). It also generates a summary sheet that provides a synoptic view in real time. We have created, at a low cost, an easy-to-use database specific to trophoblastic diseases that dynamically integrates staging and monitoring. We propose a 10-step procedure for a successful trophoblastic database. It improves patient care, research, and education on trophoblastic diseases in Quebec and creates an opportunity for collaboration on a national Canadian registry.
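    As an illustration of the automatically generated follow-up curves described above, a minimal matplotlib sketch with invented data: β-hCG on a logarithmic axis versus time, with a marker for an applied treatment.

      import matplotlib.pyplot as plt

      days = [0, 7, 14, 21, 28, 35]
      bhcg = [120000, 30000, 9000, 2500, 600, 90]   # IU/L, invented values
      treatments = {14: "chemotherapy"}             # day -> treatment marker

      fig, ax = plt.subplots()
      ax.semilogy(days, bhcg, marker="o")           # semilogarithmic curve
      for day, label in treatments.items():
          ax.axvline(day, linestyle="--")
          ax.annotate(label, (day, max(bhcg)))
      ax.set_xlabel("days since registration")
      ax.set_ylabel("beta-hCG (IU/L, log scale)")
      plt.show()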

  8. Time-critical database condition data handling in the CMS experiment during the first data taking period

    CERN Document Server

    Di Guida, Salvatore

    2011-01-01

    Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task; recovering the system and putting it in a safe state requires spotting a faulty situation within strict time constraints. We describe here the system put in place in the CMS experiment to automate the processes that centrally populate the Condition Databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users into dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. They are then automatically streamed to the offline database, and thus are immediately acce...

  9. Current concepts in clinical research: web-based, automated, arthroscopic surgery prospective database registry.

    Science.gov (United States)

    Lubowitz, James H; Smith, Patrick A

    2012-03-01

    In 2011, postsurgical patient outcome data may be compiled in a research registry, allowing comparative-effectiveness research and cost-effectiveness analysis by use of Health Insurance Portability and Accountability Act-compliant, institutional review board-approved, Food and Drug Administration-approved, remote, Web-based data collection systems. Computerized automation minimizes cost and minimizes surgeon time demand. A research registry can be a powerful tool to observe and understand variations in treatment and outcomes, to examine factors that influence prognosis and quality of life, to describe care patterns, to assess effectiveness, to monitor safety, and to change provider practice through feedback of data. Registry of validated, prospective outcome data is required for arthroscopic and related researchers and the public to advocate with governments and health payers. The goal is to develop evidence-based data to determine the best methods for treating patients.

  10. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed to promise greater efficiency, lower workloads, and fewer operator errors through enhanced operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to its side effects, referred to as Out-of-the-Loop (OOTL) problems, and this is a critical issue that must be resolved. Thus, in order to determine the level of automation that assures the best human operator performance, a quantitative method of optimizing automation is proposed in this paper. To derive appropriate automation levels, the automation rate and ostracism rate, estimation methods that quantitatively analyze the positive and negative effects of automation respectively, are integrated. The integration derives the shortest working time by considering the concept of Situation Awareness Recovery (SAR): the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study, in which four types of procedures are assumed by redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  11. PRODORIC2: the bacterial gene regulation database in 2018

    Science.gov (United States)

    Dudek, Christian-Alexander; Hartlich, Juliane; Brötje, David; Jahn, Dieter

    2018-01-01

    Abstract Bacteria adapt to changes in their environment via differential gene expression mediated by DNA binding transcriptional regulators. The PRODORIC2 database hosts one of the largest collections of DNA binding sites for prokaryotic transcription factors. It is the result of the thoroughly redesigned PRODORIC database. PRODORIC2 is more intuitive and user-friendly. Besides significant technical improvements, the new update offers more than 1000 new transcription factor binding sites and 110 new position weight matrices for genome-wide pattern searches with the Virtual Footprint tool. Moreover, binding sites deduced from high-throughput experiments were included. Data for 6 new bacterial species including bacteria of the Rhodobacteraceae family were added. Finally, a comprehensive collection of sigma- and transcription factor data for the nosocomial pathogen Clostridium difficile is now part of the database. PRODORIC2 is publicly available at http://www.prodoric2.de. PMID:29136200
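    Genome-wide pattern searching with a position weight matrix, of the kind Virtual Footprint performs with PRODORIC2 matrices, can be illustrated with the toy scan below; the matrix values, motif length and threshold are invented, not PRODORIC2 data.

      # Toy log-odds position weight matrix for a 4-bp motif (invented values).
      pwm = [
          {"A": 1.2, "C": -1.0, "G": -1.0, "T": 0.5},
          {"A": -1.0, "C": 1.5, "G": -0.5, "T": -1.0},
          {"A": 0.8, "C": -1.0, "G": 1.0, "T": -1.0},
          {"A": -1.0, "C": -1.0, "G": -1.0, "T": 1.4},
      ]

      def scan(sequence, threshold=2.0):
          """Yield (position, score) for every window scoring above threshold."""
          w = len(pwm)
          for i in range(len(sequence) - w + 1):
              score = sum(pwm[j][sequence[i + j]] for j in range(w))
              if score >= threshold:
                  yield i, score

      for pos, score in scan("TTACGTACATGACGT"):
          print(pos, round(score, 2))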

  12. Adaptive and Adaptable Automation Design: A Critical Review of the Literature and Recommendations for Future Research

    Science.gov (United States)

    Prinzel, Lawrence J., III; Kaber, David B.

    2006-01-01

    This report presents a review of literature on approaches to adaptive and adaptable task/function allocation and adaptive interface technologies for effective human management of complex systems that are likely to be issues for the Next Generation Air Transportation System, and a focus of research under the Aviation Safety Program, Integrated Intelligent Flight Deck Project. Contemporary literature retrieved from an online database search is summarized and integrated. The major topics include the effects of delegation-type adaptable automation on human performance, workload and situation awareness; the effectiveness of various automation invocation philosophies and strategies for function allocation in adaptive systems; the role of user modeling in adaptive interface design; and the performance implications of adaptive interface technology.

  13. Automated Aerial Refueling Hitches a Ride on AFF

    Science.gov (United States)

    Hansen, Jennifer L.; Murray, James E.; Bever, Glenn; Campos, Norma V.; Schkolnik, Gerard

    2007-01-01

    The recent introduction of uninhabited aerial vehicles [UAVs (basically, remotely piloted or autonomous aircraft)] has spawned new developments in autonomous operation and posed new challenges. Automated aerial refueling (AAR) is a capability that will enable UAVs to travel greater distances and loiter longer over targets. NASA Dryden Flight Research Center, in cooperation with the Defense Advanced Research Projects Agency (DARPA), the Naval Air Systems Command (NAVAIR), the Naval Air Force Pacific Fleet, and the Air Force Research Laboratory, rapidly conceived and accomplished an AAR flight research project focused on collecting a unique, high-quality database on the dynamics of the hose and drogue of an aerial refueling system. This flight-derived database would be used to validate mathematical models of the dynamics in support of design and analysis of AAR systems for future UAVs. The project involved the use of two Dryden F/A-18 airplanes and an S-3 hose-drogue refueling store on loan from the Navy. In this year-long project, which was started on October 1, 2002, 583 research maneuvers were completed during 23 flights.

  14. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: database name: Yeast Interacting Proteins Database (alternative name: none). DOI: 10.18908/lsdba.nbdc00742-000. Creator contact: ...-ken 277-8561, Tel: +81-4-7136-3989, FAX: +81-4-7136-3979. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information obtained ... Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13.

  15. THE AUTOMATION SYSTEM OF ACCOUNTING SPORTING ACTIVITIES

    Directory of Open Access Journals (Sweden)

    A. A. Shpinkovski

    2016-12-01

    Full Text Available In recent years, a healthy lifestyle has become popular. Sport is an activity that serves the public interest by fulfilling educational, preparatory and communicative functions without being a person's permanent specialty (profession). The development of modern sports requires tools that not only automate the competition organizers' work with information but also improve its effectiveness. One progressive sport at the moment is powerlifting (power triathlon). The proposed information system is an automated workstation for the competition organizer. During implementation of the information system supporting sports competitions, system design was performed using UML diagrams, which makes it possible to understand the tasks that must be completed in implementing the program. The program is organized so that one can quickly and simply add all data about upcoming competitions: the name, location, date, and the composition of the refereeing panel. After information about the athletes, coaches and sports clubs has been received, an application for participation in the competition can be made. The organizer also has the ability to view statistics about the coaches, referees and participants, based on the information entered into the database earlier. A detailed user guide accompanies the program. Functional and usability testing were performed, and recommendations for further development of the software product are given.

  16. Automated search method for AFM and profilers

    Science.gov (United States)

    Ray, Michael; Martin, Yves C.

    2001-08-01

    New automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip obtained from a chip-design database or an image of the area. The user interface requires a user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high-speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.

  17. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision, and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.
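    A minimal sketch of how such a model can be fitted: regress measured control-standard results on their certified values to estimate systematic bias, and take the scatter about the fit as the precision. The data are invented.

      import numpy as np

      known    = np.array([1.0, 2.0, 5.0, 10.0, 20.0])     # certified levels
      measured = np.array([1.05, 2.08, 5.1, 10.3, 20.5])   # invented lab results

      slope, intercept = np.polyfit(known, measured, 1)    # systematic bias model
      residuals = measured - (slope * known + intercept)
      precision = residuals.std(ddof=2)                    # scatter about the fit

      print(f"bias: measured = {slope:.3f} * known + {intercept:.3f}")
      print(f"precision (s.d. about fit): {precision:.4f}")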

  18. Recent Progress on Data-Based Optimization for Mineral Processing Plants

    Directory of Open Access Journals (Sweden)

    Jinliang Ding

    2017-04-01

    Full Text Available In the globalized market environment, increasingly significant economic and environmental factors within complex industrial plants impose importance on the optimization of global production indices; such optimization includes improvements in production efficiency, product quality, and yield, along with reductions of energy and resource usage. This paper briefly overviews recent progress in data-driven hybrid intelligence optimization methods and technologies in improving the performance of global production indices in mineral processing. First, we provide the problem description. Next, we summarize recent progress in data-based optimization for mineral processing plants. This optimization consists of four layers: optimization of the target values for monthly global production indices, optimization of the target values for daily global production indices, optimization of the target values for operational indices, and automation systems for unit processes. We briefly overview recent progress in each of the different layers. Finally, we point out opportunities for future works in data-based optimization for mineral processing plants.

  19. WIDAFELS flexible automation systems

    International Nuclear Information System (INIS)

    Shende, P.S.; Chander, K.P.; Ramadas, P.

    1990-01-01

    After discussing the various aspects of automation, some typical examples of various levels of automation are given. One of the examples is of automated production line for ceramic fuel pellets. (M.G.B.)

  20. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history: 2014/05/07 - The contact information was corrected; the features and manner of utilization of the database were corrected. 2014/02/04 - The Trypanosomes Database English archive site was opened. 2011/04/04 - The Trypanosomes Database (http://www.tanpaku.org/tdb/) was opened.

  1. Knowledge structure representation and automated updates in intelligent information management systems

    Science.gov (United States)

    Corey, Stephen; Carnahan, Richard S., Jr.

    1990-01-01

    A continuing effort to apply rapid prototyping and Artificial Intelligence techniques to problems associated with projected Space Station-era information management systems is examined. In particular, timely updating of the various databases and knowledge structures within the proposed intelligent information management system (IIMS) is critical to support decision making processes. Because of the significantly large amounts of data entering the IIMS on a daily basis, information updates will need to be performed automatically, with some systems requiring that data be incorporated and made available to users within a few hours. Meeting these demands depends, first, on the design and implementation of information structures that are easily modified and expanded and, second, on the incorporation of intelligent automated update techniques that will allow meaningful information relationships to be established. Potential techniques are studied for developing such an automated update capability, and IIMS update requirements are examined in light of results obtained from the IIMS prototyping effort.

  2. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  3. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be replaced in mid-size laboratories by computer-controlled sample workflow (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed in cooperation with Roche Diagnostics (Barcelona, Spain) for this purpose. This software is connected to the online analyzers and to the laboratory information system and, working as an intermediate station, is able to control and direct the samples. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: a workload increase (64%) with a reduction in the cost per test (43%), a significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, a drastic reduction of preanalytical errors (from 11.7 to 0.4% of the tubes) and better total response times for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and with a significant headcount reduction (15% in our laboratory).

  4. Role for automated communication strategies in medication adherence management.

    Science.gov (United States)

    Ross, S Michael

    2008-11-01

    Lack of medication adherence is a prevalent problem that causes a broad range of health- and health-economics-related issues. Adherence management is therefore an important strategy, but it also presents its own set of challenges. Interventional communication from care support teams at managed care organizations and disease management and wellness programs has proved effective at modifying patients' medication adherence and reporting behaviors. However, these communications do not scale well economically: it is not feasible to grow call centers and the numbers of clinical and professional staff to communicate with the increasing number of patients with chronic diseases who require ongoing medication use. Using communication automation to augment traditional call center outreach can help to mediate patient medication-taking behaviors. Specific design criteria for the automation of this interaction are discussed in this article, with supporting data from a recent trial of 304 elderly patients with hypertension showing the benefits of using such a system for effective blood pressure monitoring at reduced cost.

  5. Automation of High-Throughput Crystal Screening and Data Collection at SSRL

    International Nuclear Information System (INIS)

    Miller, Mitchell D.; Brinen, Linda S.; Deacon, Ashley M.; Bedem, Henry van den; Wolf, Guenter; Xu Qingping; Zhang Zepu; Cohen, Aina; Ellis, Paul; McPhillips, Scott E.; McPhillips, Timothy M.; Phizackerley, R. Paul; Soltis, S. Michael

    2004-01-01

    A robotic system for auto-mounting crystals from liquid nitrogen is now operational on SSRL beamlines (Cohen et al., J. Appl. Cryst. (2002), 35, 720-726). The system uses a small industrial 4-axis robot with a custom-built actuator. Once mounted, automated alignment of the sample loop to the X-ray beam readies the crystal for data collection. After data collection, samples are returned to the cassette. The beamline Dewar accommodates three compact sample cassettes (holding up to 96 samples each). During the past 4 months, the system on beamline 11-1 has been used to screen over 1000 crystals. The system has reduced both screening time and manpower. Integration of the hardware components is accomplished in the Distributed Control System architecture developed at SSRL (McPhillips et al., J. Synchrotron Rad. (2002), 9, 401-406). A crystal-screening interface has been implemented in Blu-Ice. Sample details can be uploaded from an Excel spreadsheet. The JCSG generates these spreadsheets automatically from their tracking database using standard database tools (http://www.jcsg.org). New diffraction image analysis tools are being employed to aid in extracting results. Automation also permits tele-presence: for example, samples have been changed during the night without leaving home, and scientists have screened crystals 1600 miles from the beamline. The system developed on beamline 11-1 has been replicated on beamlines 1-5, 9-1, 9-2, and 11-3 and is used by both general users and the JCSG

  6. Automatic treatment planning implementation using a database of previously treated patients

    International Nuclear Information System (INIS)

    Moore, J A; Evans, K; Yang, W; Herman, J; McNutt, T

    2014-01-01

    Purpose: Using a database of previously treated patients, it is possible to predict the dose to critical structures for future patients. Automatic treatment planning speeds the planning process by generating a good initial plan from predicted dose values. Methods: A SQL relational database of previously approved treatment plans is populated via an automated export from Pinnacle3. This script outputs dose and machine information and selected Regions of Interest (ROIs), as well as each ROI's Dose-Volume Histogram (DVH) and Overlap Volume Histograms (OVHs) with respect to the target structures. Toxicity information is exported from Mosaiq and added to the database for each patient. The SQL query asks the system for the lowest achievable dose to a specified ROI over all prior patients whose volume of that ROI sat as close to or closer to the target than that of the current patient. Results: The additional time needed to calculate OVHs is approximately 1.5 minutes for a typical patient. Database lookup of planning objectives takes approximately 4 seconds. The combined additional time is less than that of a typical single plan optimization (2.5 minutes). Conclusions: An automatic treatment planning interface has been successfully used by dosimetrists to quickly produce a number of SBRT pancreas treatment plans. The database can be used to compare dose to individual structures with the toxicity experienced, and to predict toxicities before planning for future patients.
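    The abstract's SQL query is not reproduced in this record; the following sketch shows the described lookup under an assumed schema and invented rows, returning the lowest mean dose achieved by prior patients whose ROI was as close to or closer to the target.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE prior_plans (roi_name TEXT, ovh_distance REAL, mean_dose REAL);
      INSERT INTO prior_plans VALUES
        ('duodenum', 0.8, 22.0), ('duodenum', 1.1, 18.5), ('duodenum', 2.0, 15.0);
      """)

      # Keep prior patients whose ROI sat as close or closer to the target
      # (smaller OVH distance) and take the lowest dose any of them achieved.
      row = conn.execute("""
          SELECT MIN(mean_dose) FROM prior_plans
          WHERE roi_name = ? AND ovh_distance <= ?""",
          ("duodenum", 1.2)).fetchone()
      print("predicted achievable mean dose:", row[0])   # -> 18.5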

  7. The automated data processing architecture for the GPI Exoplanet Survey

    Science.gov (United States)

    Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Graham, James R.; Macintosh, Bruce

    2017-09-01

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the GPIES Data Cruncher, combines multiple data reduction pipelines together to intelligently process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real-time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.

  8. Automation of Test Cases for Web Applications : Automation of CRM Test Cases

    OpenAIRE

    Seyoum, Alazar

    2012-01-01

    The main theme of this project was to design a test automation framework for automating web-related test cases. Automating test cases designed for testing a web interface provides a means of improving the software development process by shortening the testing phase in the software development life cycle. In this project, an existing AutoTester framework and the iMacros test automation tool were used. A CRM Test Agent was developed to integrate AutoTester with iMacros and to enable the AutoTester,...

  9. Fully automated gamma spectrometry gauge observing possible radioactive contamination of melting-shop samples

    International Nuclear Information System (INIS)

    Kroos, J.; Westkaemper, G.; Stein, J.

    1999-01-01

    At Salzgitter AG, several monitoring systems have been installed to check scrap transported by rail and by car. At the moment, scrap transported by ship is reloaded onto wagons for monitoring afterwards; in the future, a detection system will be mounted onto a crane for a direct check on scrap upon the ship's departure. Furthermore, at the Salzgitter AG Central Chemical Laboratory, a fully automated gamma spectrometry gauge is installed in order to detect possible radioactive contamination of the products. The gamma spectrometer is integrated into the automated OE spectrometry line for testing melting-shop samples after the OE spectrometry is performed. With this technique, the specific activity of selected nuclides and the dose rate are determined. The activity observation is part of the release procedure, and the corresponding measurement data are stored in a database for quality management reasons. (author)

  10. Automated Literature Searches for Longitudinal Tracking of Cancer Research Training Program Graduates.

    Science.gov (United States)

    Padilla, Luz A; Desmond, Renee A; Brooks, C Michael; Waterbor, John W

    2018-06-01

    A key outcome measure of cancer research training programs is the number of cancer-related peer-reviewed publications after training. Because program graduates do not routinely report their publications, staff must periodically conduct electronic literature searches on each graduate. The purpose of this study is to compare findings of an innovative computer-based automated search program versus repeated manual literature searches to identify post-training peer-reviewed publications. In late 2014, manual searches for publications by former R25 students identified 232 cancer-related articles published by 112 of 543 program graduates. In 2016, a research assistant was instructed in performing Scopus literature searches for comparison with individual PubMed searches on our 543 program graduates. Through 2014, Scopus found 304 cancer publications, 220 of which had been retrieved manually, plus an additional 84 papers. However, Scopus missed 12 publications found manually. Together, both methods found 316 publications. The automated method found 96.2% of the 316 publications, while individual searches found only 73.4%. An automated search method such as using the Scopus database is a key tool for conducting comprehensive literature searches, but it must be supplemented with periodic manual searches to find the initial publications of program graduates. A time-saving feature of Scopus is its periodic automatic alerts of new publications. Although a training period is needed and initial costs can be high, an automated search method is worthwhile due to its high sensitivity and efficiency in the long term.

  11. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  12. Automated planning of breast radiotherapy using cone beam CT imaging

    International Nuclear Information System (INIS)

    Amit, Guy; Purdie, Thomas G.

    2015-01-01

    Purpose: Develop and clinically validate a methodology for using cone beam computed tomography (CBCT) imaging in an automated treatment planning framework for breast IMRT. Methods: A technique for intensity correction of CBCT images was developed and evaluated. The technique is based on histogram matching of CBCT image sets, using information from “similar” planning CT image sets from a database of paired CBCT and CT image sets (n = 38). Automated treatment plans were generated for a testing subset (n = 15) on the planning CT and the corrected CBCT. The plans generated on the corrected CBCT were compared to the CT-based plans in terms of beam parameters, dosimetric indices, and dose distributions. Results: The corrected CBCT images showed considerable similarity to their corresponding planning CTs (average mutual information 1.0±0.1, average sum of absolute differences 185 ± 38). The automated CBCT-based plans were clinically acceptable, as well as equivalent to the CT-based plans with average gantry angle difference of 0.99°±1.1°, target volume overlap index (Dice) of 0.89±0.04 although with slightly higher maximum target doses (4482±90 vs 4560±84, P < 0.05). Gamma index analysis (3%, 3 mm) showed that the CBCT-based plans had the same dose distribution as plans calculated with the same beams on the registered planning CTs (average gamma index 0.12±0.04, gamma <1 in 99.4%±0.3%). Conclusions: The proposed method demonstrates the potential for a clinically feasible and efficient online adaptive breast IMRT planning method based on CBCT imaging, integrating automation
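    A toy illustration of intensity correction by histogram matching, the core operation described above, on 2-D stand-in images; a production version would operate on 3-D CBCT/CT volumes drawn from the paired database.

      import numpy as np

      def match_histogram(source, reference):
          """Map source intensities so their CDF matches the reference CDF."""
          s_values, s_counts = np.unique(source.ravel(), return_counts=True)
          r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
          s_cdf = np.cumsum(s_counts) / source.size
          r_cdf = np.cumsum(r_counts) / reference.size
          mapped = np.interp(s_cdf, r_cdf, r_values)   # CDF-to-CDF lookup
          return np.interp(source.ravel(), s_values,
                           mapped).reshape(source.shape)

      cbct = np.random.normal(100, 30, (64, 64))   # stand-in CBCT slice
      ct   = np.random.normal(0, 200, (64, 64))    # stand-in planning CT slice
      corrected = match_histogram(cbct, ct)
      print(round(corrected.mean(), 1), round(ct.mean(), 1))  # now comparable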

  13. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid-phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are further major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process.

  14. Measurement of TLR-induced macrophage spreading by automated image analysis: differential role of Myd88 and MAPK in early and late responses

    Directory of Open Access Journals (Sweden)

    Jens eWenzel

    2011-10-01

    Full Text Available Sensing of infectious danger by Toll-like receptors (TLR) on macrophages causes not only a reprogramming of the transcriptome but also changes in the cytoskeleton important for cell spreading and motility. Since manual determination of cell contact areas from fluorescence microscopy pictures is very time-consuming and prone to bias, we have developed and tested algorithms for automated measurement of macrophage spreading. The two-step method combines identification of cells by nuclear staining with DAPI and cell surface staining of the integrin CD11b; a sketch of this two-step measurement follows below. Automated image analysis correlated very well with manual annotation in resting macrophages and early after stimulation, whereas at later time points the automated cell segmentation algorithm and manual annotation showed slightly larger variation. The method was applied to investigate the impact of genetic or pharmacological inhibition of known TLR signaling components. Deficiency in the adapter protein Myd88 strongly reduced spreading activity at the late time points but had no impact early after LPS stimulation. A similar effect was observed upon pharmacological inhibition of MEK1, the kinase activating the MAPK ERK1/2, indicating that ERK1/2 mediates Myd88-dependent macrophage spreading. In contrast, macrophages lacking the MAPK p38 were impaired in the initial spreading response but responded normally 8-24 h after stimulation. The dichotomy of p38 and ERK1/2 MAPK effects on early and late macrophage spreading raises the question of which of the respective substrate proteins mediate(s) cytoskeletal remodeling and spreading. The automated measurement of cell spreading described here increases objectivity and greatly reduces the time required for such investigations, and is therefore expected to facilitate higher-throughput analysis of macrophage spreading, e.g. in siRNA knockdown screens.
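    A hedged scikit-image sketch of the two-step measurement, with Otsu thresholds standing in for the paper's segmentation parameters; touching nuclei and per-cell area assignment are ignored here, so this is only a rough average.

      import numpy as np
      from skimage import filters, measure

      def mean_spread_area(dapi, cd11b):
          """Mean contact area (pixels): segmented CD11b area / nucleus count."""
          nuclei = measure.label(dapi > filters.threshold_otsu(dapi))
          n_cells = nuclei.max()                     # one label per nucleus
          cell_mask = cd11b > filters.threshold_otsu(cd11b)
          return cell_mask.sum() / max(n_cells, 1)

      rng = np.random.default_rng(0)
      dapi, cd11b = rng.random((128, 128)), rng.random((128, 128))
      print(mean_spread_area(dapi, cd11b))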

  15. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design: a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  16. A new fully automated FTIR system for total column measurements of greenhouse gases

    Science.gov (United States)

    Geibel, M. C.; Gerbig, C.; Feist, D. G.

    2010-10-01

    This article introduces a new fully automated FTIR system that is part of the Total Carbon Column Observing Network (TCCON). It will provide continuous ground-based measurements of the column-averaged volume mixing ratios of CO2, CH4 and several other greenhouse gases in the tropics. Housed in a 20-foot shipping container, it was developed as a transportable system that could be deployed almost anywhere in the world. We describe the automation concept, which relies on three autonomous subsystems and their interaction. Crucial components, like a sturdy and reliable solar tracker dome, are described in detail. The automation software employs a new approach relying on multiple processes, database logging and web-based remote control. First results of total column measurements at Jena, Germany show that the instrument works well and can capture parts of the diurnal as well as the seasonal cycle of CO2. Instrument line shape measurements with an HCl cell suggest that the instrument stays well aligned over several months. After a short test campaign for side-by-side intercomparison with an existing TCCON instrument in Australia, the system will be transported to its final destination, Ascension Island.

  17. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure-type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. Graphical abstract: integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: integration of first-principles methods and database mining; minor structural families with desirable functional properties; survey of polar entries in the Inorganic Crystal Structure Database.

  18. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  19. Bioprocessing automation in cell therapy manufacturing: Outcomes of special interest group automation workshop.

    Science.gov (United States)

    Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David

    2018-04-01

    Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing, and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives.

  20. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history: 2017/02/27 - The Arabidopsis Phenome Database English archive site was opened; the Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) was opened.

  1. Propensity score methodology for confounding control in health care utilization databases

    Directory of Open Access Journals (Sweden)

    Elisabetta Patorno

    2013-06-01

    Full Text Available Propensity score (PS methodology is a common approach to control for confounding in nonexperimental studies of treatment effects using health care utilization databases. This methodology offers researchers many advantages compared with conventional multivariate models: it directly focuses on the determinants of treatment choice, facilitating the understanding of the clinical decision-making process by the researcher; it allows for graphical comparisons of the distribution of propensity scores and truncation of subjects without overlapping PS indicating a lack of equipoise; it allows transparent assessment of the confounder balance achieved by the PS at baseline; and it offers a straightforward approach to reduce the dimensionality of sometimes large arrays of potential confounders in utilization databases, directly addressing the “curse of dimensionality” in the context of rare events. This article provides an overview of the use of propensity score methodology for pharmacoepidemiologic research with large health care utilization databases, covering recent discussions on covariate selection, the role of automated techniques for addressing unmeasurable confounding via proxies, strategies to maximize clinical equipoise at baseline, and the potential of machine-learning algorithms for optimized propensity score estimation. The appendix discusses the available software packages for PS methodology. Propensity scores are a frequently used and versatile tool for transparent and comprehensive adjustment of confounding in pharmacoepidemiology with large health care databases.

  2. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects, helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design, without changing semantics. You'll learn how to evolve database schemas in step with source code and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  3. SPSS and SAS programming for the testing of mediation models.

    Science.gov (United States)

    Dudley, William N; Benuzillo, Jose G; Carrico, Mineh S

    2004-01-01

    Mediation modeling can explain the nature of the relation among three or more variables. In addition, it can be used to show how a variable mediates the relation between levels of intervention and outcome. The Sobel test, developed in 1990, provides a statistical method for determining the influence of a mediator on an intervention or outcome. Although interactive Web-based and stand-alone methods exist for computing the Sobel test, SPSS and SAS programs that automatically run the required regression analyses and computations increase the accessibility of mediation modeling to nursing researchers. To illustrate the utility of the Sobel test and to make this programming available to the Nursing Research audience in both SAS and SPSS. The history, logic, and technical aspects of mediation testing are introduced. The syntax files sobel.sps and sobel.sas, created to automate the computation of the regression analysis and test statistic, are available from the corresponding author. The reported programming allows the user to complete mediation testing with the user's own data in a single-step fashion. A technical manual included with the programming provides instruction on program use and interpretation of the output. Mediation modeling is a useful tool for describing the relation between three or more variables. Programming and manuals for using this model are made available.
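    The sobel.sps and sobel.sas programs themselves are not reproduced here; for illustration, an equivalent computation of the Sobel statistic in Python, where a and b are the two regression coefficients (predictor to mediator, and mediator to outcome adjusted for the predictor) with standard errors sa and sb. The numbers are invented.

      import math

      def sobel(a, sa, b, sb):
          """Return (z, two-sided p) for the indirect effect a*b."""
          se = math.sqrt(b**2 * sa**2 + a**2 * sb**2)
          z = (a * b) / se
          p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
          return z, p

      z, p = sobel(a=0.40, sa=0.10, b=0.35, sb=0.12)
      print(f"Sobel z = {z:.3f}, p = {p:.4f}")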

  4. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history: 2017/03/13 - The SKIP Stemcell Database English archive site was opened. 2013/03/29 - The SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) was opened.

  5. PIPEMicroDB: microsatellite database and primer generation tool for pigeonpea genome.

    Science.gov (United States)

    Sarika; Arora, Vasu; Iquebal, M A; Rai, Anil; Kumar, Dinesh

    2013-01-01

    Molecular markers play a significant role in crop improvement for desirable characteristics, such as high yield and resistance to disease, that will benefit the crop in the long term. Pigeonpea (Cajanus cajan L.) is a legume recently sequenced by a global consortium led by ICRISAT (Hyderabad, India) and has been analysed for gene prediction, synteny maps, markers, etc. We present the PIgeonPEa Microsatellite DataBase (PIPEMicroDB) with an automated primer designing tool for the pigeonpea genome, based on chromosome-wise as well as location-wise searches for primers. A total of 123,387 Short Tandem Repeats (STRs) were extracted from the pigeonpea genome, available in the public domain, using the MIcroSAtellite tool (MISA). The database is an online relational database based on a 'three-tier architecture' that catalogues information on microsatellites in MySQL, with a user-friendly interface developed in PHP. The search for STRs may be customized by limiting their location on the chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer design over selected markers, with left and right flanking regions of up to 500 bp. This will enable researchers to select markers of choice at a desired interval over the chromosome. Furthermore, one can use individual STRs of a targeted region of a chromosome to narrow down the location of a gene of interest or linked Quantitative Trait Loci (QTLs). Although it is an in silico approach, a markers search based on the characteristics and location of STRs is expected to be beneficial for researchers. Database URL: http://cabindb.iasri.res.in/pigeonpea/
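    A toy, MISA-style STR extraction of the kind PIPEMicroDB catalogues; the minimum repeat counts are illustrative, and a real implementation would also filter motifs that are themselves repeats of shorter motifs.

      import re

      MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}   # illustrative minima

      def find_strs(seq):
          """Yield (start, motif, repeat_count) for each STR found in seq."""
          for k, min_n in MIN_REPEATS.items():
              pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (k, min_n - 1))
              for m in pattern.finditer(seq):
                  yield m.start(), m.group(2), len(m.group(1)) // k

      for start, motif, count in find_strs("TTTTTTTTTTTTACGACGACGACGACGGGC"):
          print(start, motif, count)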

  6. Development of an automation software for reconciliation of INIS/ETDE thesauruses

    International Nuclear Information System (INIS)

    Singh, Manoj; Gupta, Rajiv; Prakasan, E.R.; Vijai Kumar

    1999-01-01

    ETDE (Energy Technology Data Exchange) and INIS (International Nuclear Information System) thesauruses contain nearly twenty thousand descriptors and are not necessarily identical. A project has been undertaken by the international organisations to make a common thesaurus for both INIS and ETDE to facilitate better exchange and retrieval of information between and from these databases. This paper describes the automation implemented, during our participation in the project, to reconcile the structures of the word blocks in the ETDE and INIS thesauruses with respect to the descriptors currently in the two thesauruses, through PC-based RDBMS software. The software, THEMERGE, was developed in the FoxPro 2.5 relational database management system. It handles all possible reconciliation recommendations suggested by specialists and prints recommendation sheets for later uploading. This has not only widened the scope of flexibility, portability and convertibility of recommendations, but also helped to achieve quicker project completion. (author)

  7. A Database for Reviewing and Selecting Radioactive Waste Treatment Technologies and Vendors

    International Nuclear Information System (INIS)

    P. C. Marushia; W. E. Schwinkendorf

    1999-01-01

    Several attempts have been made in past years to collate and present waste management technologies and solutions to waste generators. These efforts have been manifested as reports, buyers' guides, and databases. While this information is helpful at the time it is assembled, the principal weakness is maintaining the timeliness and accuracy of the information over time. In many cases, updates have to be published or developed as soon as the product is disseminated. The recently developed National Low-Level Waste Management Program's Technologies Database is a vendor-updated, Internet-based database designed to overcome this problem. The National Low-Level Waste Management Program's Technologies Database contains information about waste types, treatment technologies, and vendor information. Information is presented about waste types, typical treatments, and the vendors who provide those treatment methods. The vendors who provide services update their own contact information, their treatment processes, and the types of wastes for which their treatment process is applicable. This information is queryable by a generator of low-level or mixed low-level radioactive waste who is seeking information on waste treatment methods and the vendors who provide them. Timeliness of the information in the database is assured using time clocks and automated messaging to remind featured vendors to keep their information current. Failure to keep the entries current results in a vendor being warned and then ultimately dropped from the database. This assures that the user is dealing with the most current information available and with vendors who are active in reaching and serving their market.
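
    The time-clock mechanism described above reduces to comparing each vendor's last-update timestamp against warning and removal deadlines. A minimal sketch with assumed thresholds (the record does not state the actual intervals):

        from datetime import datetime, timedelta

        WARN_AFTER = timedelta(days=180)   # assumed warning threshold
        DROP_AFTER = timedelta(days=365)   # assumed removal threshold

        def review_vendor(last_update: datetime, now: datetime) -> str:
            """Classify a vendor entry as current, warned, or dropped by entry age."""
            age = now - last_update
            if age >= DROP_AFTER:
                return "drop"     # remove the stale vendor from the database
            if age >= WARN_AFTER:
                return "warn"     # send an automated reminder to update the entry
            return "current"

        print(review_vendor(datetime(2024, 1, 5), datetime(2025, 1, 20)))  # -> drop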

  8. 78 FR 66039 - Modification of National Customs Automation Program Test Concerning Automated Commercial...

    Science.gov (United States)

    2013-11-04

    ... Customs Automation Program Test Concerning Automated Commercial Environment (ACE) Cargo Release (Formerly...) plan to both rename and modify the National Customs Automation Program (NCAP) test concerning the... data elements required to obtain release for cargo transported by air. The test will now be known as...

  9. YPED: an integrated bioinformatics suite and database for mass spectrometry-based proteomics research.

    Science.gov (United States)

    Colangelo, Christopher M; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L; Carriero, Nicholas J; Gulcicek, Erol E; Lam, TuKiet T; Wu, Terence; Bjornson, Robert D; Bruce, Can; Nairn, Angus C; Rinehart, Jesse; Miller, Perry L; Williams, Kenneth R

    2015-02-01

    We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to a group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label-based and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  10. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database in the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of an application developed to access the database.

  11. Automation-aided Task Loads Index based on the Automation Rate Reflecting the Effects on Human Operators in NPPs

    International Nuclear Information System (INIS)

    Lee, Seungmin; Seong, Poonghyun; Kim, Jonghyun

    2013-01-01

    Many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, a new estimation method for the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs) was suggested. The suggested measures express how much the automation supports human operators, but they cannot express the change in the operators' workload, i.e., whether it is increased or decreased. Before considering automation rates, whether the adopted automation is good or bad might be estimated in advance. In this study, to estimate the appropriateness of automation according to the change in the human operators' task loads, an automation-aided task load index is suggested based on the concept of the suggested automation rate. To ensure plant safety and efficiency on behalf of human operators, various automation systems have been installed in NPPs, and many tasks previously conducted by human operators can now be supported by computer-based operator aids. According to the characteristics of the automation types, estimation methods for the system automation rate and the cognitive automation rate were suggested. The proposed estimation method concentrates on the effects of introducing automation, so it directly expresses how much the automated system supports human operators. Based on the suggested automation rates, a way to estimate how much the automated system affects the human operators' cognitive task load is suggested in this study. When there is no automation, the calculated index is 1, meaning there is no change in the human operators' task load.
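
    The record does not give the index formula, but one reading consistent with "the calculated index is 1 when there is no automation" is a ratio of the operators' cognitive task load with automation to the fully manual baseline. A speculative sketch of that reading:

        def task_load_index(load_with_automation: float, load_manual: float) -> float:
            """Hypothetical reading of the index: task load with automation divided
            by the manual baseline. 1.0 means automation changes nothing; values
            below 1.0 mean the automation reduces the operators' task load."""
            return load_with_automation / load_manual

        print(task_load_index(load_with_automation=34.0, load_manual=52.0))  # ~0.65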

  12. Automated training for algorithms that learn from genomic data.

    Science.gov (United States)

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, these updates are generally not incorporated into published machine learning algorithms, which can therefore become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP, in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products, and to develop new machine learning algorithms that are similarly capable.
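
    The proposed pipeline model, re-gathering training data from public resources and then refitting the learner, can be sketched as follows. The data-fetching step is a hypothetical stub, since each case study gathers and featurizes data differently, and the classifier is a stand-in rather than the published SignalP/MemLoci/ApicoAP models:

        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def fetch_training_data():
            """Hypothetical stub: query a curated public database for current
            positive/negative examples and turn them into feature vectors."""
            X = [[0.1, 0.3], [0.9, 0.7], [0.2, 0.1], [0.8, 0.9]]  # placeholder features
            y = [0, 1, 0, 1]                                      # placeholder labels
            return X, y

        def retrain():
            """One iteration of the self-evolving learner: re-gather, refit, score."""
            X, y = fetch_training_data()
            model = LogisticRegression().fit(X, y)
            score = cross_val_score(model, X, y, cv=2).mean()
            return model, score

        model, score = retrain()  # schedule this to run whenever the source data update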

  13. Asleep at the automated wheel-Sleepiness and fatigue during highly automated driving.

    Science.gov (United States)

    Vogelpohl, Tobias; Kühn, Matthias; Hummel, Thomas; Vollrath, Mark

    2018-03-20

    Due to the lack of active involvement in the driving situation and due to monotonous driving environments, drivers with automation may be prone to become fatigued faster than manual drivers (e.g. Schömig et al., 2015). However, little is known about the progression of fatigue during automated driving and its effects on the ability to take back manual control after a take-over request. In this driving simulator study with N = 60 drivers we used a three-factorial 2 × 2 × 12 mixed design to analyze the progression (12 × 5 min; within subjects) of driver fatigue in drivers with automation compared to manual drivers (between subjects). Driver fatigue was induced as either mainly sleep-related or mainly task-related fatigue (between subjects). Additionally, we investigated the drivers' reactions to a take-over request in a critical driving scenario to gain insights into the ability of fatigued drivers to regain manual control and situation awareness after automated driving. Drivers in the automated driving condition exhibited facial indicators of fatigue after 15 to 35 min of driving. Manual drivers only showed similar indicators of fatigue if they suffered from a lack of sleep, and then only after a longer period of driving (approx. 40 min). Several drivers in the automated condition closed their eyes for extended periods of time. In the driving with automation condition, mean automation deactivation times after a take-over request were slower for a certain percentage (about 30%) of the drivers with a lack of sleep (M = 3.2 s; SD = 2.1 s) compared to the reaction times after a long drive (M = 2.4 s; SD = 0.9 s). Drivers with automation also took longer than manual drivers to first glance at the speed display after a take-over request and were more likely to stay behind a braking lead vehicle instead of overtaking it. Drivers are unable to stay alert during extended periods of automated driving without non-driving-related tasks. Fatigued drivers could

  14. Procedure automation: the effect of automated procedure execution on situation awareness and human performance

    International Nuclear Information System (INIS)

    Andresen, Gisle; Svengren, Haakan; Heimdal, Jan O.; Nilsen, Svein; Hulsund, John-Einar; Bisio, Rossella; Debroise, Xavier

    2004-04-01

    As advised by the procedure workshop convened in Halden in 2000, the Halden Project conducted an experiment on the effect of automation of Computerised Procedure Systems (CPS) on situation awareness and human performance. The expected outcome of the study was to provide input for guidance on CPS design, and to support the Halden Project's ongoing research on human reliability analysis. The experiment was performed in HAMMLAB using the HAMBO BWR simulator and the COPMA-III CPS. Eight crews of operators from Forsmark 3 and Oskarshamn 3 participated. Three research questions were investigated: 1) Does procedure automation create Out-Of-The-Loop (OOTL) performance problems? 2) Does procedure automation affect situation awareness? 3) Does procedure automation affect crew performance? The independent variable, 'procedure configuration', had four levels: paper procedures, manual CPS, automation with breaks, and full automation. The results showed that the operators experienced OOTL problems in full automation, but that situation awareness and crew performance (response time) were not affected. One possible explanation for this is that the operators monitored the automated procedure execution conscientiously, something which may have prevented the OOTL problems from having negative effects on situation awareness and crew performance. In a debriefing session, the operators clearly expressed their dislike for the full automation condition, but that automation with breaks could be suitable for some tasks. The main reason why the operators did not like the full automation was that they did not feel being in control. A qualitative analysis addressing factors contributing to response time delays revealed that OOTL problems did not seem to cause delays, but that some delays could be explained by the operators having problems with the freeze function of the CPS. Also other factors such as teamwork and operator tendencies were of importance. Several design implications were drawn

  15. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Open TG-GATEs Pathological Image Database - Database Description. Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... Contact: National Institute of Biomedical Innovation, 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan. TEL: 81-72-641-9826. Database classification: Toxicogenomics Database. Organism: Rattus norvegicus.

  16. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids.

  17. Automated T2 relaxometry of the hippocampus for temporal lobe epilepsy.

    Science.gov (United States)

    Winston, Gavin P; Vos, Sjoerd B; Burdett, Jane L; Cardoso, M Jorge; Ourselin, Sebastien; Duncan, John S

    2017-09-01

    Hippocampal sclerosis (HS), the most common cause of refractory temporal lobe epilepsy, is associated with hippocampal volume loss and increased T2 signal. These can be identified on quantitative imaging with hippocampal volumetry and T2 relaxometry. Although hippocampal segmentation for volumetry has been automated, T2 relaxometry currently involves subjective and time-consuming manual delineation of regions of interest. In this work, we develop and validate an automated technique for hippocampal T2 relaxometry. Fifty patients with unilateral or bilateral HS and 50 healthy controls underwent T1-weighted and dual-echo fast recovery fast spin echo scans. Hippocampi were automatically segmented using a multi-atlas-based segmentation algorithm (STEPS) and a template database. Voxelwise T2 maps were determined using a monoexponential fit. The hippocampal segmentations were registered to the T2 maps and eroded to reduce partial volume effects. Voxels with T2 > 170 msec were excluded to minimize cerebrospinal fluid (CSF) contamination. Manual determination of T2 values was performed twice in each subject. Twenty controls underwent repeat scans to assess interscan reproducibility. Hippocampal T2 values were reliably determined using the automated method, and there was a significant ipsilateral increase in T2 values in HS. © 2017 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.
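
    For a dual-echo acquisition, the monoexponential model S(TE) = S0*exp(-TE/T2) gives a closed-form voxelwise estimate: T2 = (TE2 - TE1) / ln(S1/S2). A minimal sketch of that fit together with the >170 msec CSF exclusion mentioned above; the echo times and signal values are illustrative, not the study's acquisition parameters:

        import numpy as np

        def dual_echo_t2(s1, s2, te1, te2):
            """Voxelwise monoexponential T2 (msec) from two echo magnitudes."""
            s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
            with np.errstate(divide="ignore", invalid="ignore"):
                t2 = (te2 - te1) / np.log(s1 / s2)
            t2[~np.isfinite(t2)] = np.nan
            return t2

        echo1 = np.array([[900.0, 820.0]])  # illustrative signal at TE1 = 30 msec
        echo2 = np.array([[520.0, 610.0]])  # illustrative signal at TE2 = 120 msec
        t2_map = dual_echo_t2(echo1, echo2, te1=30.0, te2=120.0)
        t2_map[t2_map > 170.0] = np.nan     # exclude likely-CSF voxels, as in the study
        print(np.nanmean(t2_map))           # mean hippocampal T2 after masking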

  18. Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey

    Science.gov (United States)

    Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.

    2018-01-01

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.

  19. Automation-aided Task Loads Index based on the Automation Rate Reflecting the Effects on Human Operators in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seungmin; Seong, Poonghyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Jonghyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-05-15

    Many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, a new estimation method for the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs) was suggested. The suggested measures express how much the automation supports human operators, but they cannot express the change in the operators' workload, i.e., whether it is increased or decreased. Before considering automation rates, whether the adopted automation is good or bad might be estimated in advance. In this study, to estimate the appropriateness of automation according to the change in the human operators' task loads, an automation-aided task load index is suggested based on the concept of the suggested automation rate. To ensure plant safety and efficiency on behalf of human operators, various automation systems have been installed in NPPs, and many tasks previously conducted by human operators can now be supported by computer-based operator aids. According to the characteristics of the automation types, estimation methods for the system automation rate and the cognitive automation rate were suggested. The proposed estimation method concentrates on the effects of introducing automation, so it directly expresses how much the automated system supports human operators. Based on the suggested automation rates, a way to estimate how much the automated system affects the human operators' cognitive task load is suggested in this study. When there is no automation, the calculated index is 1, meaning there is no change in the human operators' task load.

  20. Aspects of the design of the automated system for code generation of electrical items of technological equipment

    Directory of Open Access Journals (Sweden)

    Erokhin V.V.

    2017-09-01

    Full Text Available The article presents aspects of designing an automated system for generating codes for electrical elements of process equipment using CASE tools. We propose our own technology for the iterative development of such systems. The proposed methodology uses Computer Associates' ERwin Data Modeler database development tool together with the author's tool for automatic code generation, ERwin Class Builder. The implemented design tool is an extension built on top of ERwin Data Modeler from Computer Associates that broadens its functionality. ERwin Data Modeler works with logical and physical data models and allows you to generate a database description and DDL scripts.

  1. Unlimited Thirst for Genome Sequencing, Data Interpretation, and Database Usage in Genomic Era: The Road towards Fast-Track Crop Plant Improvement

    Directory of Open Access Journals (Sweden)

    Arun Prabhu Dhanapal

    2015-01-01

    Full Text Available The number of sequenced crop genomes and associated genomic resources is growing rapidly with the advent of inexpensive next-generation sequencing methods. Databases have become an integral part of all aspects of science research, including basic and applied plant and animal sciences. The importance of databases keeps increasing as the volume of datasets from direct and indirect genomics, as well as other omics approaches, keeps expanding. The databases and associated web portals provide, at a minimum, a uniform set of tools and automated analyses across a wide range of crop plant genomes. This paper reviews some basic terms and considerations in utilizing crop plant databases in the advancing genomic era. The utilization of databases for variation analysis with other comparative genomics tools, and data interpretation platforms, is also described. The major focus of this review is to provide knowledge on platforms and databases for genome-based investigations of agriculturally important crop plants. The utilization of these databases in applied crop improvement programs is yet to be achieved widely; without such application, the full benefit of sequencing will not be realized.

  2. IFLA General Conference, 1986. Pre-Conference Seminar on Automated Systems for Access to Multilingual and Multiscript Library materials: Problems and Solutions. Papers.

    Science.gov (United States)

    International Federation of Library Associations and Institutions, The Hague (Netherlands).

    A seminar which considered problems and solutions regarding automated systems for access to multilingual and multiscript library materials was held as a pre-session before the IFLA general conference in 1986. Papers presented include: (1) "Romanized and Transliterated Databases of Asian Language Materials--History, Problems, and…

  3. Automation of radioimmunoassay

    International Nuclear Information System (INIS)

    Yamaguchi, Chisato; Yamada, Hideo; Iio, Masahiro

    1974-01-01

    Automation systems under development for measuring Australian antigen by radioimmunoassay are discussed. Samples are processed as follows: blood serum is dispensed by an automated sampler into test tubes and incubated under controlled time and temperature; the first counting is omitted; labelled antibody is dispensed into the serum after washing; the samples are incubated and then centrifuged; radioactivity in the precipitate is counted with an auto-well counter; and the measurements are tabulated by an automated typewriter. Not only a well-type counter but also a position counter was studied. (Kanao, N.)

  4. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2012-08-14

    ... National Customs Automation Program (NCAP) test concerning the simplified entry functionality in the... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE) Simplified Entry: Modification of...

  5. Automated genome mining of ribosomal peptide natural products

    Energy Technology Data Exchange (ETDEWEB)

    Mohimani, Hosein; Kersten, Roland; Liu, Wei; Wang, Mingxun; Purvine, Samuel O.; Wu, Si; Brewer, Heather M.; Pasa-Tolic, Ljiljana; Bandeira, Nuno; Moore, Bradley S.; Pevzner, Pavel A.; Dorrestein, Pieter C.

    2014-07-31

    Ribosomally synthesized and posttranslationally modified peptides (RiPPs), especially from microbial sources, are a large group of bioactive natural products that are a promising source of new (bio)chemistry and bioactivity (1). In light of exponentially increasing microbial genome databases and improved mass spectrometry (MS)-based metabolomic platforms, there is a need for computational tools that connect natural product genotypes predicted from microbial genome sequences with their corresponding chemotypes from metabolomic datasets. Here, we introduce RiPPquest, a tandem mass spectrometry database search tool for identification of microbial RiPPs and apply it for lanthipeptide discovery. RiPPquest uses genomics to limit search space to the vicinity of RiPP biosynthetic genes and proteomics to analyze extensive peptide modifications and compute p-values of peptide-spectrum matches (PSMs). We highlight RiPPquest by connection of multiple RiPPs from extracts of Streptomyces to their gene clusters and by the discovery of a new class III lanthipeptide, informatipeptin, from Streptomyces viridochromogenes DSM 40736 as the first natural product to be identified in an automated fashion by genome mining. The presented tool is available at cyclo.ucsd.edu.

  6. A windows based automated quality control system for the ICP-AES analysis of Waste Isolation Pilot Plant (WIPP) brines

    International Nuclear Information System (INIS)

    Gerth, D.J.

    1996-01-01

    High sample volume analytical laboratories typically require automation of tasks to maximize efficiency and productivity. Typical approaches target instrument operation and data reporting (LIMS), but frequently ignore the data evaluation and run-time QC aspects. Automation of these steps can save up to 50% of the time it takes to analyze, evaluate, and report data from a typical ICP-AES run. The program developed in this project addresses this need by performing a CLP-style evaluation of the run-time QC data included in an instrument run. Written in Microsoft Visual Basic 3.0, it makes use of a Microsoft Access database to store method parameters and QC sample results for control charting. In operation, the analyst enters method background data (e.g., control sample types and acceptance criteria for each analyte), which is then stored in the method database. Once the method parameters are entered, instrument data files may be imported for review. Upon import, the run is automatically checked against the desired QC criteria, QC sample data are added to the database, and failing samples are flagged appropriately. Analytes passing all QC checks are flagged for upload to the laboratory LIMS. The analyst may then review the run sample by sample and, if desired, override the computer upload flag. An exception report may be generated detailing samples that require reanalysis.
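
    The run-time QC evaluation described above, checking each control sample's recovery against per-analyte acceptance criteria and flagging the result, can be sketched as follows; the analytes and limits are illustrative, not the program's stored method parameters:

        # Acceptance windows (percent recovery) per analyte for a QC check sample.
        CRITERIA = {"Na": (90.0, 110.0), "Mg": (90.0, 110.0)}  # illustrative limits

        def evaluate_run(qc_results, true_values):
            """Flag each analyte in a QC sample as PASS (upload to LIMS) or FAIL."""
            flags = {}
            for analyte, measured in qc_results.items():
                low, high = CRITERIA[analyte]
                recovery = 100.0 * measured / true_values[analyte]
                flags[analyte] = "PASS" if low <= recovery <= high else "FAIL"
            return flags

        run = evaluate_run({"Na": 1020.0, "Mg": 86.0}, {"Na": 1000.0, "Mg": 100.0})
        print(run)  # {'Na': 'PASS', 'Mg': 'FAIL'} -- Mg would be flagged for reanalysis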

  7. Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database

    Science.gov (United States)

    Mizukami, Masahi

    2004-01-01

    An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.

  8. Medicago PhosphoProtein Database: a repository for Medicago truncatula phosphoprotein data

    Directory of Open Access Journals (Sweden)

    Christopher M. Rose

    2012-06-01

    Full Text Available The ability of legume crops to fix atmospheric nitrogen via a symbiotic association with soil rhizobia makes them an essential component of many agricultural systems. Initiation of this symbiosis requires protein phosphorylation-mediated signaling in response to rhizobial signals named Nod factors. Medicago truncatula (Medicago) is the model system for studying legume biology, making the study of its phosphoproteome essential. Here, we describe the Medicago Phosphoprotein Database (http://phospho.medicago.wisc.edu), a repository built to house phosphoprotein, phosphopeptide, and phosphosite data specific to Medicago. Currently, the Medicago Phosphoprotein Database holds 3,457 unique phosphopeptides that contain 3,404 non-redundant sites of phosphorylation on 829 proteins. Through the web-based interface, users are allowed to browse identified proteins or search for proteins of interest. Furthermore, we allow users to conduct BLAST searches of the database using both peptide sequences and phosphorylation motifs as queries. The data contained within the database are available for download to be investigated at the user's discretion. The Medicago Phosphoprotein Database will be updated continually with novel phosphoprotein and phosphopeptide identifications, with the intent of constructing an unparalleled compendium of large-scale Medicago phosphorylation data.

  9. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database - Update History of This Database. 2010/03/29: Yeast Interacting Proteins Database English archive site is opened. 2000/12/4: Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released.

  10. Disease-specific health-related quality of life instruments for IgE-mediated food allergy

    DEFF Research Database (Denmark)

    Salvilla, S A; Dubois, A E J; Flokstra-de Blok, B M J

    2014-01-01

    This review addresses the impact of, and investigations and interventions for, IgE-mediated food allergy on health-related quality of life (HRQL). Using a sensitive search strategy, we searched seven electronic bibliographic databases to identify disease-specific quality of life (QOL) tools relating to IgE-mediated food allergy...

  11. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time, while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although total laboratory automation has not been achievable, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and surveys the progress made so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Rrsm: The European Rapid Raw Strong-Motion Database

    Science.gov (United States)

    Cauzzi, C.; Clinton, J. F.; Sleeman, R.; Domingo Ballesta, J.; Kaestli, P.; Galanis, O.

    2014-12-01

    We introduce the European Rapid Raw Strong-Motion database (RRSM), a Europe-wide system that provides parameterised strong motion information, as well as access to waveform data, within minutes of the occurrence of strong earthquakes. The RRSM significantly differs from traditional earthquake strong motion dissemination in Europe, which has focused on providing reviewed, processed strong motion parameters, typically with significant delays. As the RRSM provides rapid open access to raw waveform data and metadata and does not rely on external manual waveform processing, RRSM information is tailored to seismologists and strong-motion data analysts, earthquake and geotechnical engineers, international earthquake response agencies and the educated general public. Access to the RRSM database is via a portal at http://www.orfeus-eu.org/rrsm/ that allows users to query earthquake information, peak ground motion parameters and amplitudes of spectral response; and to select and download earthquake waveforms. All information is available within minutes of any earthquake with magnitude ≥ 3.5 occurring in the Euro-Mediterranean region. Waveform processing and database population are performed using the waveform processing module scwfparam, which is integrated in SeisComP3 (SC3; http://www.seiscomp3.org/). Earthquake information is provided by the EMSC (http://www.emsc-csem.org/) and all the seismic waveform data is accessed at the European Integrated waveform Data Archive (EIDA) at ORFEUS (http://www.orfeus-eu.org/index.html), where all on-scale data is used in the fully automated processing. As the EIDA community is continually growing, the already significant number of strong motion stations is also increasing and the importance of this product is expected to also increase. Real-time RRSM processing started in June 2014, while past events have been processed in order to provide a complete database back to 2005.

  13. Managing laboratory automation.

    Science.gov (United States)

    Saboe, T J

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A big-picture, or continuum, view is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are discussed.

  14. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. We should consider the positive and negative effects of automation at the same time to determine the appropriate level of the introduction of automation. Thus, in this paper, we suggest an estimation method that considers the positive and negative effects of automation simultaneously to determine its appropriate introduction. The existing concept is limited in that it does not consider the effects of automation on human operators; thus, a new estimation method for the automation rate was suggested to overcome this problem.

  15. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. Database name: RMOS. Contact: Shoshi Kikuchi (Research Unit). Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  16. 78 FR 44142 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-07-23

    ... Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image... (CBP's) plan to modify the National Customs Automation Program (NCAP) tests concerning document imaging... entry process by reducing the number of data elements required to obtain release for cargo transported...

  17. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.
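
    The clustering alternative mentioned above rests on two pieces: a DTW distance between activity sequences and a density-based clusterer accepting a precomputed distance matrix. A minimal sketch of that combination; the toy sequences and the eps/min_samples values are illustrative, and CARL's actual modification of DBSCAN is not described in the record:

        import numpy as np
        from sklearn.cluster import DBSCAN

        def dtw(a, b):
            """Classic O(len(a)*len(b)) dynamic time warping distance, 1-D series."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Toy daily device-usage traces; real input would be smart-home sensor data.
        series = [np.array([0, 1, 1, 0]), np.array([0, 1, 1, 1, 0]), np.array([5, 4, 5, 5])]
        dist = np.array([[dtw(a, b) for b in series] for a in series])
        labels = DBSCAN(eps=1.5, min_samples=1, metric="precomputed").fit_predict(dist)
        print(labels)  # the first two traces cluster together; the third stands alone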

  18. Infant feeding practices within a large electronic medical record database.

    Science.gov (United States)

    Bartsch, Emily; Park, Alison L; Young, Jacqueline; Ray, Joel G; Tu, Karen

    2018-01-02

    The emerging adoption of the electronic medical record (EMR) in primary care enables clinicians and researchers to efficiently examine epidemiological trends in child health, including infant feeding practices. We completed a population-based retrospective cohort study of 8815 singleton infants born at term in Ontario, Canada, April 2002 to March 2013. Newborn records were linked to the Electronic Medical Record Administrative data Linked Database (EMRALD™), which uses patient-level information from participating family practice EMRs across Ontario. We assessed exclusive breastfeeding patterns using an automated electronic search algorithm, with manual review of EMRs when the latter was not possible. We examined the rate of breastfeeding at visits corresponding to 2, 4 and 6 months of age, as well as sociodemographic factors associated with exclusive breastfeeding. Of the 8815 newborns, 1044 (11.8%) lacked breastfeeding information in their EMR. Rates of exclusive breastfeeding were 39.5% at 2 months, 32.4% at 4 months and 25.1% at 6 months. At age 6 months, exclusive breastfeeding rates were highest among mothers aged ≥40 years.

  19. Adaptive Automation Design and Implementation

    Science.gov (United States)

    2015-09-17

    with an automated system to a real-world adaptive automation system implementation. There have been plenty of adaptive automation ... of systems without increasing manpower requirements by allocating routine tasks to automated aids, improving safety through the use of automated ... between intermediate levels of automation, explicitly defining which human task a given level automates. Each model aids the creation and classification

  20. Layered distributed architecture for plant automation

    International Nuclear Information System (INIS)

    Aravamuthan, G.; Verma, Yachika; Ranjan, Jyoti; Chachondia, Alka S.; Ganesh, G.

    2005-01-01

    The development of plant automation systems and associated software remains one of the greatest challenges to the widespread implementation of highly adaptive, re-configurable automation technology. This paper presents a layered distributed architecture for a plant automation system designed to support rapid reconfiguration and redeployment of automation components. The paper first presents the evolution of automation architectures and their associated environments over the past few decades, and then presents the concept of a layered system architecture and the use of automation components to support the construction of a wide variety of automation systems. It also highlights the role of standards and technologies that can be used in the development of automation components. We have attempted to adhere to open standards and technologies in the development of automation components at the various layers. The paper also highlights the application of this concept in the development of an Operator Information System (OIS) for the Advanced Heavy Water Reactor (AHWR). (author)

  1. Automation of GIS-Based Population Data-Collection for Transportation Risk Analysis

    International Nuclear Information System (INIS)

    MILLS, G. SCOTT; NEUHAUSER, SIEGLINDE

    1999-01-01

    Estimation of the potential radiological risks associated with highway transport of radioactive materials (RAM) requires input data describing population densities adjacent to all portions of the route to be traveled. Previously, aggregated risks for entire multi-state routes were adequately estimated from population data with low geographic resolution. Current demands for geographically-specific risk estimates require similar increases in resolution of population density adjacent to route segments. With the advent of commercial geographic information systems (GISs) and databases describing highways, U.S. Census Blocks, and other information that is geographically distributed, it became feasible to determine and tabulate population characteristics along transportation routes with 1-kilometer resolution. This report describes an automated method of collecting population data adjacent to route segments (for calculation of incident-free doses) based on a commercial GIS. It also describes a statistical method of resolving remaining resolution issues, and an adaptation of the automation method to collection of data on population under a hypothetical plume of contamination resulting from a potential transportation accident
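
    The core of the automated collection step, summing census-block population inside a band around a route segment, can be sketched with a GIS geometry library; the band width, block centroids, and populations below are illustrative assumptions, not the report's parameters:

        from shapely.geometry import LineString, Point

        # Route segment and census blocks in a projected coordinate system (meters).
        route = LineString([(0, 0), (4000, 0)])  # illustrative 4 km highway segment
        band = route.buffer(800)                 # 800 m band on each side of the route

        # Hypothetical census blocks as (centroid, population) pairs.
        blocks = [(Point(1000, 300), 420), (Point(2500, 1200), 310),
                  (Point(3900, -500), 150)]

        # Sum the population of blocks whose centroids fall inside the band.
        population = sum(pop for centroid, pop in blocks if centroid.within(band))
        density = population / (band.area / 1e6)  # persons per square kilometer
        print(population, round(density, 1))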

  2. Integration of Narrative Processing, Data Fusion, and Database Updating Techniques in an Automated System.

    Science.gov (United States)

    1981-10-29

    are implemented, respectively, in the files "W-Update," "W-combine," and "W-Copy," listed in the appendix. The appendix begins with a typescript of an ... the typescript) and the copying process (steps 45 and 46) are shown as human actions in the typescript, but can be performed easily by a "master ... for Natural Language, M. Marcus, MIT Press, 1980. APPENDIX: DATABASE UPDATING EXPERIMENT. CONTENTS: Typescript of an experiment in Rosie

  3. Semantics-based Automated Web Testing

    Directory of Open Access Journals (Sweden)

    Hai-Feng Guo

    2015-08-01

    Full Text Available We present TAO, a software testing tool performing automated test and oracle generation based on a semantic approach. TAO couples grammar-based test generation with automated semantics evaluation using a denotational semantics framework. We show how TAO can be incorporated with the Selenium automation tool for automated web testing, and how TAO can be further extended to support automated delta debugging, where a failing web test script can be systematically reduced based on grammar-directed strategies. A real-life parking website is adopted throughout the paper to demonstrate the effectiveness of our semantics-based web testing approach.
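
    Grammar-based test generation of the kind TAO builds on can be illustrated with a toy grammar and random expansion; this sketch shows only the generation idea, not TAO's denotational semantics evaluation or oracle generation:

        import random

        # Toy grammar mapping nonterminals to lists of alternative expansions.
        GRAMMAR = {
            "<expr>": [["<num>"], ["<expr>", "+", "<expr>"], ["(", "<expr>", ")"]],
            "<num>":  [["0"], ["1"], ["2"]],
        }

        def generate(symbol="<expr>", depth=0, max_depth=6):
            """Randomly expand a nonterminal, biasing toward terminals when deep."""
            if symbol not in GRAMMAR:
                return symbol  # terminal: emit as-is
            rules = GRAMMAR[symbol]
            rule = rules[0] if depth >= max_depth else random.choice(rules)
            return "".join(generate(s, depth + 1, max_depth) for s in rule)

        random.seed(7)
        tests = [generate() for _ in range(5)]  # candidate inputs for the system under test
        print(tests)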

  4. Automation in organizations: Eternal conflict

    Science.gov (United States)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  5. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database for integrated management of Liquid Metal Reactor design technology development using Web applications. The KALIMER design database consists of the Results Database, Inter-Office Communication (IOC), a 3D CAD database, the Team Cooperation System, and Reserved Documents. The Results Database holds research results from Phase II of the mid- and long-term nuclear R&D program for Liquid Metal Reactor design technology development. IOC is a linkage control system between sub-projects for sharing and integrating research results for KALIMER. The 3D CAD Database provides a schematic design overview of KALIMER. The Team Cooperation System informs team members about research cooperation and meetings. Finally, KALIMER Reserved Documents manages collected data and several documents generated over the course of the project. This report describes the hardware and software features and the database design methodology for KALIMER.

  6. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.; Masala, G.L.; Oliva, P.; Stumbo, S.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M.E.; Retico, A.; Fauci, F.; Magro, R.; Raso, G.; Lauria, A.; Palmiero, R.; Lopez Torres, E.; Tangaro, S.

    2003-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analyses. The images (18×24 cm², digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a characterization consistent with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA allows each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular massive lesion (both opacities and spiculated lesions) analysis and microcalcification cluster analysis. The detection of pathological features is made using neural network software that provides a selection of areas showing a given 'suspicion level' of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as 'second reader' will also

  7. Mobile home automation-merging mobile value added services and home automation technologies

    OpenAIRE

    Rosendahl, Andreas; Hampe, Felix J.; Botterweck, Goetz

    2007-01-01

    In this paper we study mobile home automation, a field that emerges from an integration of mobile application platforms and home automation technologies. In a conceptual introduction we first illustrate the need for such applications by introducing a two-dimensional conceptual model of mobility. Subsequently we suggest an architecture and discuss different options of how a user might access a mobile home automation service and the controlled devices. As another contrib...

  8. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  9. A new fully automated FTIR system for total column measurements of greenhouse gases

    Directory of Open Access Journals (Sweden)

    M. C. Geibel

    2010-10-01

    Full Text Available This article introduces a new fully automated FTIR system that is part of the Total Carbon Column Observing Network (TCCON). It will provide continuous ground-based measurements of column-averaged volume mixing ratios for CO2, CH4 and several other greenhouse gases in the tropics.

    Housed in a 20-foot shipping container it was developed as a transportable system that could be deployed almost anywhere in the world. We describe the automation concept which relies on three autonomous subsystems and their interaction. Crucial components like a sturdy and reliable solar tracker dome are described in detail. The automation software employs a new approach relying on multiple processes, database logging and web-based remote control.

    First results of total column measurements at Jena, Germany show that the instrument works well and can provide parts of the diurnal as well as seasonal cycle for CO2. Instrument line shape measurements with an HCl cell suggest that the instrument stays well-aligned over several months.

    After a short test campaign for side-by-side intercomparison with an existing TCCON instrument in Australia, the system will be transported to its final destination, Ascension Island.

  10. Future Trends in Process Automation

    OpenAIRE

    Jämsä-Jounela, Sirkka-Liisa

    2007-01-01

    The importance of automation in the process industries has increased dramatically in recent years. In the highly industrialized countries, process automation serves to enhance product quality, master the whole range of products, improve process safety and plant availability, efficiently utilize resources and lower emissions. In the rapidly developing countries, mass production is the main motivation for applying process automation. The greatest demand for process automation is in the chemical...

  11. The volatile compound BinBase mass spectral database.

    Science.gov (United States)

    Skogerson, Kirsten; Wohlgemuth, Gert; Barupal, Dinesh K; Fiehn, Oliver

    2011-08-04

    Volatile compounds comprise diverse chemical groups with wide-ranging sources and functions. These compounds originate from major pathways of secondary metabolism in many organisms and play essential roles in chemical ecology in both plant and animal kingdoms. In past decades, sampling methods and instrumentation for the analysis of complex volatile mixtures have improved; however, design and implementation of database tools to process and store the complex datasets have lagged behind. The volatile compound BinBase (vocBinBase) is an automated peak annotation and database system developed for the analysis of GC-TOF-MS data derived from complex volatile mixtures. The vocBinBase DB is an extension of the previously reported metabolite BinBase software developed to track and identify derivatized metabolites. The BinBase algorithm uses deconvoluted spectra and peak metadata (retention index, unique ion, spectral similarity, peak signal-to-noise ratio, and peak purity) from the Leco ChromaTOF software, and annotates peaks using a multi-tiered filtering system with stringent thresholds. The vocBinBase algorithm assigns the identity of compounds existing in the database. Volatile compound assignments are supported by the Adams mass spectral-retention index library, which contains over 2,000 plant-derived volatile compounds. Novel molecules that are not found within vocBinBase are automatically added using strict mass spectral and experimental criteria. Users obtain fully annotated data sheets with quantitative information for all volatile compounds for studies that may consist of thousands of chromatograms. The vocBinBase database may also be queried across different studies, comprising currently 1,537 unique mass spectra generated from 1.7 million deconvoluted mass spectra of 3,435 samples (18 species). Mass spectra with retention indices and volatile profiles are available as free download under the CC-BY agreement (http://vocbinbase.fiehnlab.ucdavis.edu). The Bin
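
    The multi-tiered filtering described above can be pictured as a cascade of thresholds. The sketch below applies sequential cuts on retention-index deviation, spectral similarity, signal-to-noise ratio, and peak purity to decide whether a deconvoluted peak matches an existing bin or qualifies as a new-bin candidate; all names, thresholds, and the similarity measure are illustrative assumptions, not the actual vocBinBase implementation.

      from dataclasses import dataclass

      @dataclass
      class Peak:
          retention_index: float
          spectrum: dict            # m/z -> relative intensity; bins reuse this type
          snr: float = 0.0
          purity: float = 0.0

      def cosine_similarity(s1, s2):
          # Dot product over the union of m/z channels, normalised by vector lengths.
          keys = set(s1) | set(s2)
          dot = sum(s1.get(k, 0.0) * s2.get(k, 0.0) for k in keys)
          n1 = sum(v * v for v in s1.values()) ** 0.5
          n2 = sum(v * v for v in s2.values()) ** 0.5
          return dot / (n1 * n2) if n1 and n2 else 0.0

      def annotate(peak, bins, ri_window=5.0, min_sim=0.8, min_snr=5.0, min_purity=0.5):
          # Tier 1: keep only bins within the retention-index window.
          near = [b for b in bins if abs(b.retention_index - peak.retention_index) <= ri_window]
          # Tier 2: spectral similarity against the surviving bins.
          scored = [(cosine_similarity(peak.spectrum, b.spectrum), b) for b in near]
          scored = [sb for sb in scored if sb[0] >= min_sim]
          if scored:
              return max(scored, key=lambda sb: sb[0])[1]   # best-matching existing bin
          # Tier 3: strict quality gates before proposing a brand-new bin.
          if peak.snr >= min_snr and peak.purity >= min_purity:
              return "new-bin candidate"
          return None                                       # peak rejected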

  12. The volatile compound BinBase mass spectral database

    Directory of Open Access Journals (Sweden)

    Barupal Dinesh K

    2011-08-01

    Full Text Available Abstract Background Volatile compounds comprise diverse chemical groups with wide-ranging sources and functions. These compounds originate from major pathways of secondary metabolism in many organisms and play essential roles in chemical ecology in both plant and animal kingdoms. In past decades, sampling methods and instrumentation for the analysis of complex volatile mixtures have improved; however, design and implementation of database tools to process and store the complex datasets have lagged behind. Description The volatile compound BinBase (vocBinBase) is an automated peak annotation and database system developed for the analysis of GC-TOF-MS data derived from complex volatile mixtures. The vocBinBase DB is an extension of the previously reported metabolite BinBase software developed to track and identify derivatized metabolites. The BinBase algorithm uses deconvoluted spectra and peak metadata (retention index, unique ion, spectral similarity, peak signal-to-noise ratio, and peak purity) from the Leco ChromaTOF software, and annotates peaks using a multi-tiered filtering system with stringent thresholds. The vocBinBase algorithm assigns the identity of compounds existing in the database. Volatile compound assignments are supported by the Adams mass spectral-retention index library, which contains over 2,000 plant-derived volatile compounds. Novel molecules that are not found within vocBinBase are automatically added using strict mass spectral and experimental criteria. Users obtain fully annotated data sheets with quantitative information for all volatile compounds for studies that may consist of thousands of chromatograms. The vocBinBase database may also be queried across different studies, comprising currently 1,537 unique mass spectra generated from 1.7 million deconvoluted mass spectra of 3,435 samples (18 species). Mass spectra with retention indices and volatile profiles are available as free download under the CC-BY agreement (http://vocbinbase.fiehnlab.ucdavis.edu).

  13. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  14. 76 FR 34246 - Automated Commercial Environment (ACE); Announcement of National Customs Automation Program Test...

    Science.gov (United States)

    2011-06-13

    ... Environment (ACE); Announcement of National Customs Automation Program Test of Automated Procedures for In... Customs Automation Program (NCAP) test relating to highway movements of commercial goods that are transported in-bond through the United States from one point in Canada to another point in Canada. The NCAP...

  15. A realization of an automated data flow for data collecting, processing, storing and retrieving

    International Nuclear Information System (INIS)

    Friedsam, H.; Pushor, R.; Ruland, R.

    1986-11-01

    GEONET is a database system developed at the Stanford Linear Accelerator Center for the alignment of the Stanford Linear Collider. It features an automated data flow, ranging from data collection using HP110 handheld computers to processing, storing and retrieving data and finally to adjusted coordinates. This paper gives a brief introduction to the SLC project and the applied survey methods. It emphasizes the hardware and software implementation of GEONET using a network of IBM PC/XT's. 14 refs., 4 figs

  16. Automated cloning methods

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 °C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources between stations on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  17. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade-induced automation in manufacturing firms using unique data combining a retrospective survey that we have assembled with register data for 2005-2010. In particular, we establish a causal effect where firms that have specialized in product types for which the Chinese exports to the world market have risen sharply invest more in automated capital compared to firms that have specialized in other product types. We also study the relationship between automation and firm performance and find that firms with high increases in scale and scope of automation have faster productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation.

  18. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: SAHG. Contact: Chie Motono (Tel: +81-3-3599-8067). Database classification: Structure Databases - Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center.

  19. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that point to the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  20. - GEONET - A Realization of an Automated Data Flow for Data Collecting, Processing, Storing, and Retrieving

    International Nuclear Information System (INIS)

    Friedsam, Horst; Pushor, Robert; Ruland, Robert; SLAC

    2005-01-01

    GEONET is a database system developed at the Stanford Linear Accelerator Center for the alignment of the Stanford Linear Collider. It features an automated data flow, ranging from data collection using HP110 handheld computers to processing, storing and retrieving data and finally to adjusted coordinates. This paper gives a brief introduction to the SLC project and the applied survey methods. It emphasizes the hardware and software implementation of GEONET using a network of IBM PC/XT's

  1. Applying causal mediation analysis to personality disorder research.

    Science.gov (United States)

    Walters, Glenn D

    2018-01-01

    This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
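
    The bootstrap step recommended above can be made concrete with a small simulation. The sketch below estimates the indirect effect a*b for a simple X -> M -> Y mediation model and derives a nonparametric percentile confidence interval; the data, sample size, and path coefficients are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative data: X influences M, and M (plus a little X) influences Y.
      n = 200
      x = rng.normal(size=n)
      m = 0.5 * x + rng.normal(size=n)
      y = 0.4 * m + 0.1 * x + rng.normal(size=n)

      def indirect_effect(x, m, y):
          # Path a: slope of M on X; path b: coefficient of M in Y ~ 1 + X + M.
          a = np.polyfit(x, m, 1)[0]
          design = np.column_stack([np.ones_like(x), x, m])
          b = np.linalg.lstsq(design, y, rcond=None)[0][2]
          return a * b

      # Nonparametric bootstrap: resample cases, re-estimate a*b each time.
      boot = []
      idx = np.arange(n)
      for _ in range(2000):
          s = rng.choice(idx, size=n, replace=True)
          boot.append(indirect_effect(x[s], m[s], y[s]))
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")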

  2. Automated derivation of failure symptoms for diagnosis of nuclear plant

    International Nuclear Information System (INIS)

    Washio, T.; Kitamura, M.; Kotajima, K.; Sugiyama, K.

    1986-01-01

    A method of automated derivation of failure symptoms was developed as an approach to computer-aided failure diagnosis in a nuclear power plant. The automated derivation is realized using a knowledge representation called the semantic network (S-net). The purpose of this paper is to demonstrate the applicability of the S-net representation as a basic tool for deriving failure symptoms. If one can generate symptoms automatically, the computer-aided plant safety analysis and diagnosis can be performed easily by evaluating the influence of the failures on the whole plant. A specific description format called a 'network list' was introduced to implement the knowledge of the structure of the plant. The failure symptoms are derived automatically, based on the knowledge of the structure of the plant, using a PROLOG-based database handling system. This approach allows us to derive the failure symptoms of the plant without using conventional event-chain models (e.g. a cause-consequence tree) which are subject to human errors in their design and implementation. Applicability of this method was evaluated with a simulation model of the dynamics of the secondary system of a PWR. (author)
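
    The PROLOG-based derivation itself is not reproduced in the abstract; as a loose analogy in Python, the sketch below propagates a failure through a small directed structure network and collects every downstream symptom. The plant components and relations are invented for illustration.

      from collections import deque

      # Hypothetical S-net fragment: node -> directly influenced variables/components.
      snet = {
          "feedwater_pump_trip": ["feedwater_flow_low"],
          "feedwater_flow_low": ["sg_level_low"],
          "sg_level_low": ["steam_flow_mismatch", "sg_pressure_rise"],
      }

      def derive_symptoms(failure, snet):
          """Breadth-first traversal collecting all observable consequences of a failure."""
          symptoms, queue, seen = [], deque([failure]), {failure}
          while queue:
              node = queue.popleft()
              for effect in snet.get(node, []):
                  if effect not in seen:
                      seen.add(effect)
                      symptoms.append(effect)
                      queue.append(effect)
          return symptoms

      print(derive_symptoms("feedwater_pump_trip", snet))
      # ['feedwater_flow_low', 'sg_level_low', 'steam_flow_mismatch', 'sg_pressure_rise']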

  3. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: PSCDB (Protein Structural Change Database). Creator: Takayuki Amemiya, National Institute of Advanced Industrial Science and Technology (AIST). Database classification: Structure Databases - Protein structure. Reference: Nucleic Acids Res. 40: D554-D558.

  4. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: ASTRA. Database classification: Nucleotide Sequence Databases - Gene structure. Organisms: Arabidopsis thaliana (Taxonomy ID: 3702), Oryza sativa (Taxonomy ID: 4530). Database maintenance site: National Institute of Advanced Industrial Science and Technology (AIST).

  5. Using Automated Planning for Traffic Signals Control

    Directory of Open Access Journals (Sweden)

    Matija Gulić

    2016-08-01

    Full Text Available Solving traffic congestion represents a high-priority issue in many big cities. Traditional traffic control systems are mainly based on pre-programmed, reactive and local techniques. This paper presents an autonomic system that uses automated planning techniques instead. These techniques are easily configured and modified, and can reason about the future implications of actions that change the default traffic-light behaviour. The implemented system includes several autonomic properties: it monitors the current traffic state, detects when the system is degrading its performance, sets up new goals to be achieved by the planner, triggers the planner to generate plans with control actions, and executes the selected courses of action. The results obtained in several artificial and real-world data-based simulation scenarios show that the proposed system can efficiently solve traffic congestion.
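
    The autonomic cycle just described (monitor, detect degradation, set goals, plan, execute) can be sketched as a simple control loop. The congestion metric, threshold, and planner hooks below are hypothetical placeholders, not the authors' system.

      import time

      def congestion_index(state):
          # Hypothetical metric: mean queue length over all monitored intersections.
          return sum(state["queues"].values()) / max(len(state["queues"]), 1)

      def autonomic_loop(read_traffic_state, plan, execute, threshold=10.0, period_s=60):
          """Monitor -> detect degradation -> set goals -> plan -> execute, repeatedly."""
          while True:
              state = read_traffic_state()
              if congestion_index(state) > threshold:      # performance degrading?
                  goals = {j: q for j, q in state["queues"].items() if q > threshold}
                  actions = plan(state, goals)             # call the automated planner
                  execute(actions)                         # change signal timings
              time.sleep(period_s)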

  6. The Automation-by-Expertise-by-Training Interaction.

    Science.gov (United States)

    Strauch, Barry

    2017-03-01

    I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance. Transportation accidents that, across a 30-year interval, demonstrated identical automation-related operator errors suggest a need to reexamine traditional views of automation. I review accident investigation reports, regulator studies, and literature on human-computer interaction, expertise, and training, and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors. Automated systems continue to provide capabilities exceeding operators' need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire necessary automation-related system expertise. Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors. Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.

  7. Consistency and accuracy of diagnostic cancer codes generated by automated registration: comparison with manual registration

    Directory of Open Access Journals (Sweden)

    Codazzi Tiziana

    2006-09-01

    Full Text Available Abstract Background Automated procedures are increasingly used in cancer registration, and it is important that the data produced are systematically checked for consistency and accuracy. We evaluated an automated procedure for cancer registration adopted by the Lombardy Cancer Registry in 1997, comparing automatically-generated diagnostic codes with those produced manually over one year (1997). Methods The automatically generated cancer cases were produced by Open Registry algorithms. For manual registration, trained staff consulted clinical records, pathology reports and death certificates. The social security code, present and checked in both databases in all cases, was used to match the files in the automatic and manual databases. The cancer cases generated by the two methods were compared by manual revision. Results The automated procedure generated 5027 cases: 2959 (59%) were accepted automatically and 2068 (41%) were flagged for manual checking. Among the cases accepted automatically, discrepancies in data items (surname, first name, sex and date of birth) constituted 8.5% of cases, and discrepancies in the first three digits of the ICD-9 code constituted 1.6%. Among flagged cases, cancers of female genital tract, hematopoietic system, metastatic and ill-defined sites, and oropharynx predominated. The usual reasons were use of specific vs. generic codes, presence of multiple primaries, and use of extranodal vs. nodal codes for lymphomas. The percentage of automatically accepted cases ranged from 83% for breast and thyroid cancers to 13% for metastatic and ill-defined cancer sites. Conclusion Since 59% of cases were accepted automatically and contained relatively few, mostly trivial discrepancies, the automatic procedure is efficient for routine case generation, effectively cutting the workload required for routine case checking by this amount. Among cases not accepted automatically, discrepancies were mainly due to variations in coding practice.
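
    A minimal sketch of the accept-or-flag logic implied above: records from the two databases are matched on the social security code, and a case is accepted automatically only when the identifying data items and the first three digits of the ICD-9 code agree. The field names and rules are simplified assumptions.

      def compare_case(auto_rec, manual_rec):
          """Return 'accept' or 'flag' for a record pair matched on social security code."""
          personal = ("surname", "first_name", "sex", "date_of_birth")
          if any(auto_rec[f] != manual_rec[f] for f in personal):
              return "flag"                     # discrepancy in identifying data items
          if auto_rec["icd9"][:3] != manual_rec["icd9"][:3]:
              return "flag"                     # diagnostic codes disagree at 3-digit level
          return "accept"

      auto = {"surname": "Rossi", "first_name": "Anna", "sex": "F",
              "date_of_birth": "1950-03-02", "icd9": "174.9"}
      manual = dict(auto, icd9="174.8")         # same site, different fourth digit
      print(compare_case(auto, manual))         # accept: first three digits match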

  8. A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology.

    Science.gov (United States)

    Al-Fahdawi, Shumoos; Qahwaji, Rami; Al-Waisy, Alaa S; Ipson, Stanley; Ferdousi, Maryam; Malik, Rayaz A; Brahma, Arun

    2018-07-01

    Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been performed manually by ophthalmologists using time-consuming and highly subjective semi-automatic tools, which require operator interaction. We developed and applied a fully automated and real-time system, termed the Corneal Endothelium Analysis System (CEAS), for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. First, a Fast Fourier Transform (FFT) band-pass filter is applied to reduce noise and enhance the image quality to make the cells more visible. Second, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images based on a database consisting of 40 corneal confocal endothelial cell images, in terms of segmentation accuracy and obtained clinical features. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities, using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). The Pearson correlation coefficient between automated and manual endothelial cell densities is 0.9, demonstrating the reliability of the CEAS system and the possibility of utilizing it in a real-world clinical setting to enable rapid diagnosis and patient follow-up, with an execution time of only 6 seconds per image. Copyright © 2018 Elsevier B.V. All rights reserved.
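
    To make the two stages named above concrete, here is a rough Python sketch (Fourier band-pass filtering followed by marker-controlled watershed segmentation, using NumPy, SciPy and scikit-image). The cut-off radii, smoothing width, and marker selection are placeholders, and the real CEAS system additionally applies Voronoi tessellations when quantifying cell morphology.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage import feature, segmentation

      def segment_endothelium(img, low=4, high=60):
          """Sketch: FFT band-pass, then marker-controlled watershed on cell borders."""
          # Band-pass in the Fourier domain to suppress noise and illumination drift.
          f = np.fft.fftshift(np.fft.fft2(img))
          yy, xx = np.indices(img.shape)
          cy, cx = np.array(img.shape) // 2
          r = np.hypot(yy - cy, xx - cx)
          f[(r < low) | (r > high)] = 0
          filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
          # Cell interiors as local maxima of the smoothed image -> watershed markers.
          smooth = ndi.gaussian_filter(filtered, sigma=2)
          peaks = feature.peak_local_max(smooth, min_distance=5)
          markers = np.zeros(img.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          # Watershed on the inverted image: cell borders become watershed lines.
          return segmentation.watershed(-smooth, markers)

      # Morphometry then follows from the labelled regions, e.g. cell density
      # = number of labels / imaged area (cells per mm^2).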

  9. Distribution automation

    International Nuclear Information System (INIS)

    Gruenemeyer, D.

    1991-01-01

    This paper reports on a Distribution Automation (DA) system, which enhances the efficiency and productivity of a utility. It also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in O and M requirements

  10. Automated detection of lung nodules in low-dose computed tomography

    International Nuclear Information System (INIS)

    Cascio, D.; Cheran, S.C.; Chincarini, A.; De Nunzio, G.; Delogu, P.; Fantacci, M.E.; Gargano, G.; Gori, I.; Retico, A.; Masala, G.L.; Preite Martinez, A.; Santoro, M.; Spinelli, C.; Tarantino, T.

    2007-01-01

    A computer-aided detection (CAD) system for the identification of pulmonary nodules in low-dose multi-detector computed-tomography (CT) images has been developed in the framework of the MAGIC-5 Italian project. One of the main goals of this project is to build a distributed database of lung CT scans in order to enable automated image analysis through a data and CPU GRID infrastructure. The basic modules of our lung-CAD system, consisting of a 3D dot-enhancement filter for nodule detection and a neural classifier for false-positive finding reduction, are described. The system was designed and tested for both internal and sub-pleural nodules. The database used in this study consists of 17 low-dose CT scans reconstructed with thin slice thickness (~300 slices/scan). The preliminary results are shown in terms of the FROC analysis, reporting a good sensitivity (85% range) for both internal and sub-pleural nodules at an acceptable level of false-positive findings (1-9 FP/scan); the sensitivity value remains very high (75% range) even at 1-6 FP/scan. (orig.)

  11. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: RPD. Alternative name: Rice Proteome Database. Creator: Setsuko Komatsu, National Institute of Crop Science, National Agriculture and Food Research Organization. Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: The Rice Proteome Database contains information on proteins entered in the database; entries are searchable by keyword.

  12. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: PLACE. Alternative name: A Database of Plant Cis-acting Regulatory DNA Elements. Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Reference: Nucleic Acids Res. 1999;27(1):297-300. Database maintenance site: National Institute of Agrobiological Sciences.

  13. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  14. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    Science.gov (United States)

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to users of PDBj via the World Wide Web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into a corresponding relational table that is named after the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword searches and an 'Advanced Search', which can specify various conditions on the entries. More experienced users can query the database using SQL statements, which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/
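
    The core idea of one relational table per XPath can be illustrated in a few lines. The sketch below is a toy approximation of the approach using SQLite, not PDBj Mine's actual schema generator: it walks an XML tree and stores each element's text in a table named after its path.

      import sqlite3
      import xml.etree.ElementTree as ET

      def load_xml_to_rdb(xml_text, conn):
          """Store (ordinal, text) rows, one table per distinct element path."""
          root = ET.fromstring(xml_text)
          cur = conn.cursor()
          def walk(elem, path, ordinal):
              table = path.replace("/", "_")            # table name derived from the path
              cur.execute(f'CREATE TABLE IF NOT EXISTS "{table}" (ord INTEGER, value TEXT)')
              cur.execute(f'INSERT INTO "{table}" VALUES (?, ?)',
                          (ordinal, (elem.text or "").strip()))
              for i, child in enumerate(elem):
                  walk(child, f"{path}/{child.tag}", i)
          walk(root, root.tag, 0)
          conn.commit()

      conn = sqlite3.connect(":memory:")
      load_xml_to_rdb("<entry><id>1ABC</id><title>example</title></entry>", conn)
      print(conn.execute('SELECT value FROM "entry_id"').fetchall())   # [('1ABC',)]

    A query such as SELECT value FROM "entry_id" then retrieves everything stored under /entry/id, mirroring the uniform SQL construction the abstract describes.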

  15. Quality Control Algorithms for the Kennedy Space Center 50-Megahertz Doppler Radar Wind Profiler Winds Database

    Science.gov (United States)

    Barbre, Robert E., Jr.

    2012-01-01

    This paper presents the process used by the Marshall Space Flight Center Natural Environments Branch (EV44) to quality control (QC) data from the Kennedy Space Center's 50-MHz Doppler Radar Wind Profiler (DRWP) for use in vehicle wind loads and steering commands. The database has been built to mitigate limitations of the currently archived weather-balloon databases. The DRWP database contains wind measurements from approximately 2.7-18.6 km altitude at roughly five-minute intervals for the August 1997 to December 2009 period of record, and the extensive QC process was designed to remove spurious data arising from various atmospheric and non-atmospheric artifacts. The QC process is largely based on DRWP literature, but two new algorithms have been developed to remove data contaminated by convection and by excessive first-guess propagations from the Median Filter First Guess Algorithm. In addition to describing the automated and manual QC process in detail, this paper describes the extent of the data retained. Roughly 58% of all possible wind observations exist in the database, with approximately 100 times as many complete profile sets as in the EV44 balloon databases. This increased sample of near-continuous wind profile measurements may help increase launch availability by reducing the uncertainty of wind changes during launch countdown.
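
    As a schematic of the kind of automated screening described, the sketch below flags wind samples that deviate too far from a running median of neighbouring observations, loosely analogous to checking against a median-filter first guess; the window length and rejection threshold are arbitrary choices, not the EV44 values.

      import numpy as np
      from scipy.signal import medfilt

      def qc_flag(u, window=11, max_dev=8.0):
          """Flag wind samples deviating more than max_dev (m/s) from a running median."""
          first_guess = medfilt(u, kernel_size=window)
          return np.abs(u - first_guess) > max_dev

      # Smooth synthetic profile with one spurious 60 m/s spike inserted.
      u = np.concatenate([np.linspace(5, 20, 50), [60.0], np.linspace(20, 25, 49)])
      flags = qc_flag(u)
      print(int(flags.sum()), "suspect sample(s) at index", np.flatnonzero(flags))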

  16. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: JSNP. Creator: Japan Science and Technology Agency. Organism: Homo sapiens (Taxonomy ID: 9606). Database description: A database of about 197,000 polymorphisms in the Japanese population. Database maintenance site: Institute of Medical Science.

  17. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: RED. Alternative name: Rice Expression Database. Creator: Shoshi Kikuchi, Genome Research Unit. Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Reference: Rice Expression Database: the gateway to rice functional genomics. Trends in Plant Science (2002) Dec;7(12):563-564.

  18. Manual versus automated coding of free-text self-reported medication data in the 45 and Up Study: a validation study.

    Science.gov (United States)

    Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily

    2015-03-30

    Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.
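
    The digit-level comparison reported above reduces to a prefix match: two ATC codes agree at the k-digit level when their first k characters are identical. The lookup table in the sketch below is a stand-in for the Australian Medicines Terminology matching step, and the example codes are illustrative.

      # Hypothetical free-text -> ATC lookup (stand-in for terminology-database matching).
      lookup = {"atorvastatin": "C10AA05", "metformin": "A10BA02", "fish oil": None}

      def sensitivity(auto_codes, gold_codes, digits):
          """Share of manually coded entries whose automated code matches at a level."""
          pairs = [(a, g) for a, g in zip(auto_codes, gold_codes) if g is not None]
          hits = sum(a is not None and a[:digits] == g[:digits] for a, g in pairs)
          return hits / len(pairs)

      auto = [lookup.get("atorvastatin"), lookup.get("metformin"), "A10BB01"]
      gold = ["C10AA05", "A10BA02", "A10BA02"]   # manual ('gold standard') codes
      for d in (7, 5, 4, 3):
          print(f"{d}-digit sensitivity: {sensitivity(auto, gold, d):.2f}")
      # Sensitivity rises as the comparison level gets coarser, as in the study.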

  19. Human-centred automation: an explorative study

    International Nuclear Information System (INIS)

    Hollnagel, Erik; Miberg, Ann Britt

    1999-05-01

    The purpose of the programme activity on human-centred automation at the HRP is to develop knowledge (in the form of models and theories) and tools (in the form of techniques and simulators) to support design of automation that ensures effective human performance and comprehension. This report presents the work done on both the analytical and experimental sides of this project. The analytical work has surveyed common definitions of automation and traditional design principles. A general finding is that human-centred automation usually is defined in terms of what it is not. This is partly due to a lack of adequate models of human-automation interaction. Another result is a clarification of the consequences of automation, in particular with regard to situation awareness and workload. The experimental work has taken place as an explorative experiment in HAMMLAB in collaboration with IPSN (France). The purpose of this experiment was to increase the understanding of how automation influences operator performance in NPP control rooms. Two different types of automation (extensive and limited) were considered in scenarios having two different degrees of complexity (high and low), and involving diagnostic and procedural tasks. Six licensed NPP crews from the NPP at Loviisa, Finland, participated in the experiment. The dependent variables applied were plant performance, operator performance, self-rated crew performance, situation awareness, workload, and operator trust in the automation. The results from the diagnostic scenarios indicated that operators' judgement of crew efficiency was related to their level of trust in the automation, and further that operators trusted automation least and rated crew performance lowest in situations where crew performance was efficient, and vice versa. The results from procedural scenarios indicated that extensive automation efficiently supported operators' performance, and further that operators' judgement of crew performance efficiency

  20. Configuration Management Automation (CMA) -

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  1. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: ConfC. Contact: Tamotsu Noguchi (Tel: 042-495-8736). Database classification: Structure Databases - Protein structure; Structure Databases - Small molecules; Structure Databases - Nucleic acid structure.

  2. Automated electron microprobe

    International Nuclear Information System (INIS)

    Thompson, K.A.; Walker, L.R.

    1986-01-01

    The Plant Laboratory at the Oak Ridge Y-12 Plant has recently obtained a Cameca MBX electron microprobe with a Tracor Northern TN5500 automation system. This allows full stage and spectrometer automation and digital beam control. The capabilities of the system include qualitative and quantitative elemental microanalysis for all elements above and including boron in atomic number, high- and low-magnification imaging and processing, elemental mapping and enhancement, and particle size, shape, and composition analyses. Very low magnification, quantitative elemental mapping using stage control (which is of particular interest) has been accomplished along with automated size, shape, and composition analysis over a large relative area

  3. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they do not always succeed in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  4. Driver Psychology during Automated Platooning

    NARCIS (Netherlands)

    Heikoop, D.D.

    2017-01-01

    With the rapid increase in vehicle automation technology, the call for understanding how humans behave while driving in an automated vehicle becomes more urgent. Vehicles that have automated systems such as Lane Keeping Assist (LKA) or Adaptive Cruise Control (ACC) not only support drivers in their

  5. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. The discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders logical database, interrogation, and phy

  6. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.

  7. dbCAN2: a meta server for automated carbohydrate-active enzyme annotation

    DEFF Research Database (Denmark)

    Zhang, Han; Yohe, Tanner; Huang, Le

    2018-01-01

    of plant and plant-associated microbial genomes and metagenomes being sequenced, there is an urgent need for automatic tools for genomic data mining of CAZymes. We developed the dbCAN web server in 2012 to provide a public service for automated CAZyme annotation for newly sequenced genomes. Here, dbCAN2 (http://cys.bios.niu.edu/dbCAN2) is presented as an updated meta server, which integrates three state-of-the-art tools for CAZome (all CAZymes of a genome) annotation: (i) HMMER search against the dbCAN HMM (hidden Markov model) database; (ii) DIAMOND search against the CAZy pre-annotated CAZyme
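
    A meta-server run of this kind can be imitated locally by launching the two search tools and intersecting their hits. The sketch below assumes hmmscan (HMMER) and diamond are installed and that dbCAN HMM and CAZy sequence databases have been downloaded; the file names are placeholders, and only the query identifiers are parsed.

      import subprocess

      def run(cmd):
          subprocess.run(cmd, check=True)

      # Tool 1: HMM search against a local copy of the dbCAN HMM database.
      run(["hmmscan", "--domtblout", "hmm.out", "dbCAN-HMMdb.txt", "proteins.faa"])
      # Tool 2: DIAMOND search against a pre-annotated CAZy sequence database.
      run(["diamond", "blastp", "-d", "cazy", "-q", "proteins.faa", "-o", "dia.out",
           "--outfmt", "6", "qseqid", "sseqid", "evalue"])

      def hmm_queries(path):
          # Query name is the fourth whitespace-separated field of a domtblout row.
          with open(path) as f:
              return {line.split()[3] for line in f if not line.startswith("#")}

      def dia_queries(path):
          with open(path) as f:
              return {line.split("\t")[0] for line in f if line.strip()}

      # Proteins supported by both tools are the higher-confidence CAZyme calls.
      print(sorted(hmm_queries("hmm.out") & dia_queries("dia.out")))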

  8. Fully automated treatment planning for head and neck radiotherapy using a voxel-based dose prediction and dose mimicking method

    Science.gov (United States)

    McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.

    2017-08-01

    Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organs-at-risk criteria levels evaluated, compared with clinical plans. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical plans across the 12 patients tested; automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose in one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction. It is a promising approach for fully automated treatment planning.
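
    Dose mimicking can be read as fitting deliverable beamlet weights to the predicted per-voxel dose. The toy sketch below solves min_w ||Aw - d_pred||^2 subject to w >= 0 by projected gradient descent, with a random influence matrix standing in for a collapsed cone convolution dose calculation.

      import numpy as np

      rng = np.random.default_rng(1)
      n_voxels, n_beamlets = 500, 40
      A = rng.random((n_voxels, n_beamlets)) * 0.1      # toy dose-influence matrix
      d_pred = A @ rng.random(n_beamlets)               # predicted dose-per-voxel objective

      w = np.zeros(n_beamlets)
      step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe step: 1 / Lipschitz constant
      for _ in range(500):
          grad = A.T @ (A @ w - d_pred)                 # gradient of 0.5 * ||A w - d||^2
          w = np.maximum(w - step * grad, 0.0)          # project onto w >= 0

      print("mean residual per voxel:", np.abs(A @ w - d_pred).mean())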

  9. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

    Full Text Available High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
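
    In scikit-learn terms, the supervised workflow described above reduces to a few calls. The features and labels below are synthetic stand-ins for the in-house global and ROI image-quality features and the investigator-determined ratings.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(42)
      X = rng.normal(size=(1457, 10))                 # stand-in image-quality features
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=1457) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      clf.fit(X_tr, y_tr)                             # learn from labelled scans
      print(f"accuracy: {clf.score(X_te, y_te):.2f}") # predicted vs. manual labels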

  10. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    Science.gov (United States)

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  11. Automated QA framework for PetaScale data challenges

    International Nuclear Information System (INIS)

    Van Buren, G; Didenko, L; Lauret, J; Oldag, E; Ray, L

    2011-01-01

    Over the lifetime of the STAR Experiment, a large investment of workforce time has gone into a variety of QA efforts, including continuous processing of a portion of the data for automated calibration, iterative convergence, and quality assurance purposes. A rotating workforce coupled with ever-increasing volumes of information to examine led to sometimes inconsistent or incomplete reporting of issues, eventually leading to additional work. The traditional approach of manually screening a data sample was no longer adequate and doomed to eventual failure with planned future growth in data extents. To prevent this collapse we have developed a new system employing user-defined reference histograms, permitting automated comparisons and nagging of issues. Built on the ROOT framework at its core, the system's front end is a web-based service allowing shift personnel to visualize the results and to set test parameters and thresholds defining success or failure. The versatile and flexible approach allows for a slew of histograms to be configured and grouped into categories (results and thresholds may depend on experimental triggers and data types), ensuring framework evolution with the years of running to come. Historical information is also saved to track changes and allow for rapid convergence of future tuning. Database storage and processing of data are handled outside the web server for security and fault tolerance.
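
    The automated comparison step can be as simple as a per-bin chi-square between a monitored histogram and its user-defined reference, with a threshold deciding pass or fail. The sketch below is generic NumPy rather than the ROOT-based production code, and the threshold value is arbitrary.

      import numpy as np

      def chi2_per_dof(hist, ref):
          """Normalised chi-square between a monitored histogram and its reference."""
          h = hist / hist.sum()
          r = ref / ref.sum()
          # Poisson variances of the normalised bin contents.
          err2 = hist / hist.sum() ** 2 + ref / ref.sum() ** 2
          mask = err2 > 0
          chi2 = np.sum((h[mask] - r[mask]) ** 2 / err2[mask])
          return chi2 / mask.sum()

      rng = np.random.default_rng(7)
      ref = np.histogram(rng.normal(0, 1, 100_000), bins=50, range=(-5, 5))[0].astype(float)
      new = np.histogram(rng.normal(0.3, 1, 10_000), bins=50, range=(-5, 5))[0].astype(float)
      status = "OK" if chi2_per_dof(new, ref) < 2.0 else "FLAG for shifter attention"
      print(status)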

  12. Improving the Computational Performance of Ontology-Based Classification Using Graph Databases

    Directory of Open Access Journals (Sweden)

    Thomas J. Lampoltshammer

    2015-07-01

    Full Text Available The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-)automated classification approaches are in high demand in affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities (so-called individuals) is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of high time consumption in the classification task. The introduced approach shifts the classification task from the classical Protégé environment and its common reasoners to the proposed graph-based approaches. For validation, the authors tested the approach on a simulation scenario based on a real-world example. The results demonstrate a quite promising improvement of classification speed: up to 80,000 times faster than the Protégé-based approach.

  13. Disaster recovery plan for Automation Technology

    International Nuclear Information System (INIS)

    Owens, M.J.

    1997-06-01

    Automation Technology provides a multitude of data processing and network services to the Environmental Restoration Contract (ERC). These services include personal computers, local and wide area networks, and Internet and intranet support and services. ERC employees and client personnel receive these services primarily from the Data Center located on the ground floor of the Bechtel Corporate Center at 3350 George Washington Way, Richland, Washington. Centralized databases, server-based software, and network services for the Bechtel Local Area Network reside on servers located in the Data Center. The data communication circuits supported in this center allow for the transmission of business information to and from all project locations in the Hanford Site complex. The loss of one or more of these functions would seriously impact the ability of the ERC to conduct business and bring the activities of many ERC employees to a virtual standstill. Upon declaration of a disaster by the Contingency Manager and the Disaster Recovery Coordinator, the disaster recovery plan will be implemented. 24 tabs

  14. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems for radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second of the four basic processes in the course of RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation: a semi-automatic pipette, and a fully automated pipette station, both providing fast and accurate dispensing of the reagent or diluting of the sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  15. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: RMG. Contact: National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Nucleotide Sequence Databases. Organism: Oryza sativa Japonica Group (Taxonomy ID: 39947). Reference: Mol Genet Genomics (2002) 268: 434-445.

  16. Automated builder and database of protein/membrane complexes for molecular dynamics simulations.

    Directory of Open Access Journals (Sweden)

    Sunhwan Jo

    2007-09-01

    Full Text Available Molecular dynamics simulations of membrane proteins have provided deeper insights into their functions and interactions with surrounding environments at the atomic level. However, compared to solvation of globular proteins, building a realistic protein/membrane complex is still challenging and requires considerable experience with simulation software. Membrane Builder on the CHARMM-GUI website (http://www.charmm-gui.org) helps users to build such a complex system using a web browser with a graphical user interface. Through a generalized and automated building process, including system size determination as well as generation of the lipid bilayer, pore water, bulk water, and ions, a realistic membrane system with virtually any kind and shape of membrane protein can be generated in 5 minutes to 2 hours depending on the system size. Default values that were elaborated and tested extensively are given in each step to provide reasonable options and starting points for both non-expert and expert users. The efficacy of Membrane Builder is illustrated by its applications to 12 transmembrane and 3 interfacial membrane proteins, whose fully equilibrated systems with three different types of lipid molecules (DMPC, DPPC, and POPC) and two types of system shapes (rectangular and hexagonal) are freely available on the CHARMM-GUI website. One of the most significant advantages of using the web environment is that, if a problem is found, users can go back and re-generate the whole system before quitting the browser. Therefore, Membrane Builder provides an intuitive and easy way to build and simulate biologically important membrane systems.

  17. Classification of Automated Search Traffic

    Science.gov (United States)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third-party systems interact with search engines for a variety of reasons, such as monitoring a web site's rank, augmenting online games, or possibly to maliciously alter click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: either an interpretation of the physical model of human interactions, or behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.

  18. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in the field. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration.

  19. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  20. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  1. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: DGBY. Contact: TEL: +81-29-838-8066; E-mail. Database classification: Microarray Data and other Gene Expression Databases. Organism: Taxonomy Name: Saccharomyces cerevisiae; Taxonomy ID: 4932. Database description: … (so-called phenomics). We uploaded these data on this website, which is designated DGBY (Database for Gene expres…). Reference: …ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information.

  2. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: KOME. Contact: National Institute of Agrobiological Sciences, Plant Genome Research Unit, Shoshi Kikuchi; E-mail. Database classification: Plant databases - Rice. Organism: Taxonomy Name: Oryza sativa; Taxonomy ID: 4530. Database description: Information about approximately … Reference: …Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28;2(11):e1235. External Links: Original website information. Related databases: Rice mutant panel database (Tos17); A Database of Plant Cis-acting Regulatory …

  3. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner

  4. Flow Giese reaction using cyanoborohydride as a radical mediator

    Directory of Open Access Journals (Sweden)

    Takahide Fukuyama

    2013-09-01

    Full Text Available Tin-free Giese reactions, employing primary, secondary, and tertiary alkyl iodides as radical precursors, ethyl acrylate as a radical trap, and sodium cyanoborohydride as a radical mediator, were examined in a continuous flow system. With the use of an automated flow microreactor, flow reaction conditions for the Giese reaction were quickly optimized, and it was found that a reaction temperature of 70 °C in combination with a residence time of 10–15 minutes gave good yields of the desired addition products.

  5. Redefining the Practice of Peer Review Through Intelligent Automation-Part 3: Automated Report Analysis and Data Reconciliation.

    Science.gov (United States)

    Reiner, Bruce I

    2018-02-01

    One method for addressing existing peer review limitations is the assignment of peer review cases on a completely blinded basis, in which the peer reviewer would create an independent report which can then be cross-referenced with the primary reader report of record. By leveraging existing computerized data mining techniques, one could in theory automate and objectify the process of report data extraction, classification, and analysis, while reducing time and resource requirements intrinsic to manual peer review report analysis. Once inter-report analysis has been performed, resulting inter-report discrepancies can be presented to the radiologist of record for review, along with the option to directly communicate with the peer reviewer through an electronic data reconciliation tool aimed at collaboratively resolving inter-report discrepancies and improving report accuracy. All associated report and reconciled data could in turn be recorded in a referenceable peer review database, which provides opportunity for context and user-specific education and decision support.
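
    A minimal sketch of the inter-report cross-referencing idea, assuming simple sentence-level matching; the function names and threshold are hypothetical, and the paper's actual data mining pipeline is considerably more sophisticated:

        from difflib import SequenceMatcher

        def findings(report: str) -> list[str]:
            """Naively split a report into sentence-level findings."""
            return [s.strip() for s in report.split(".") if s.strip()]

        def discrepancies(primary: str, review: str, threshold: float = 0.6):
            """Return peer-review findings with no close match in the
            primary report, as candidates for reconciliation."""
            flagged = []
            for f in findings(review):
                best = max((SequenceMatcher(None, f.lower(), p.lower()).ratio()
                            for p in findings(primary)), default=0.0)
                if best < threshold:
                    flagged.append(f)
            return flagged

        primary = "No acute infiltrate. Mild cardiomegaly."
        review = "Mild cardiomegaly. Small right pleural effusion."
        print(discrepancies(primary, review))   # -> ['Small right pleural effusion']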

  6. Automated sample mounting and alignment system for biological crystallography at a synchrotron source

    International Nuclear Information System (INIS)

    Snell, Gyorgy; Cork, Carl; Nordmeyer, Robert; Cornell, Earl; Meigs, George; Yegian, Derek; Jaklevic, Joseph; Jin, Jian; Stevens, Raymond C.; Earnest, Thomas

    2004-01-01

    High-throughput data collection for macromolecular crystallography requires an automated sample mounting system for cryo-protected crystals that functions reliably when integrated into protein-crystallography beamlines at synchrotrons. Rapid mounting and dismounting of the samples increases the efficiency of the crystal screening and data collection processes, where many crystals can be tested for the quality of diffraction. The sample-mounting subsystem has random access to 112 samples, stored under liquid nitrogen. Results of extensive tests regarding the performance and reliability of the system are presented. To further increase throughput, we have also developed a sample transport/storage system based on 'puck-shaped' cassettes, which can hold sixteen samples each. Seven cassettes fit into a standard dry shipping Dewar. The combination of the robotic crystal mounting and alignment system with instrumentation control software and a relational database allows automated screening and data collection to be developed
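
    The pairing of a mounting robot with a relational database suggests a simple sample-tracking schema. The sqlite3 sketch below is a hypothetical illustration of such tracking (pucks of sixteen samples, seven pucks per Dewar), not the beamline's actual database:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE puck   (puck_id INTEGER PRIMARY KEY, dewar_slot INTEGER);
            CREATE TABLE sample (sample_id INTEGER PRIMARY KEY,
                                 puck_id   INTEGER REFERENCES puck(puck_id),
                                 position  INTEGER CHECK (position BETWEEN 1 AND 16),
                                 state     TEXT DEFAULT 'stored');
        """)

        # Seven pucks of sixteen samples = 112 samples under liquid nitrogen.
        for puck in range(1, 8):
            con.execute("INSERT INTO puck VALUES (?, ?)", (puck, puck))
            for pos in range(1, 17):
                con.execute("INSERT INTO sample (puck_id, position) VALUES (?, ?)",
                            (puck, pos))

        # The mounting robot requests the next stored sample and records it.
        row = con.execute(
            "SELECT sample_id FROM sample WHERE state = 'stored' LIMIT 1").fetchone()
        con.execute("UPDATE sample SET state = 'mounted' WHERE sample_id = ?", row)
        print("mounted sample", row[0])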

  7. Selecting automation for the clinical chemistry laboratory.

    Science.gov (United States)

    Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr

    2007-07-01

    Laboratory automation proposes to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. To outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and we give conclusions pertaining to this experience. Including the formation of a committee, workflow analysis, submitting a request for proposal, site visits, and making a final decision, the process of selecting chemistry automation took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.

  8. Interpretation of erythrocyte histograms obtained from automated hematology analyzers in hematologic diseases

    Directory of Open Access Journals (Sweden)

    Ali Maleki

    2015-12-01

    Full Text Available Background: The graphical data for blood cells (histograms and cytograms/scattergrams) that are available on all modern automated hematology analyzers are an integral part of the automated complete blood count (CBC). To identify incorrect results from an automated hematology analyzer and to establish which samples require additional analysis, laboratory staff can use these data for quality control of the obtained results and as an aid in the identification of complex and troublesome cases. Methods: In this descriptive analytic study, in addition to erythrocyte graphs from a number of patients referred to our clinical laboratory (Zagros Hospital, Kermanshah, Iran) from March 2013 to February 2014, papers published in the relevant literature as well as available published manuals of automated blood cell counters were used. Articles related to the keywords erythrocyte graphs, erythrogram, erythrocyte histogram, and hematology analyzer graphs in the diagnosis of hematological disorders were searched in valid databases such as SpringerLink, Google Scholar, PubMed, and ScienceDirect, and the relevant articles were selected for this study. Results: Histograms and other automated CBC parameters become abnormal in various pathologic conditions and can present important clues for the diagnosis and treatment of hematologic and non-hematologic disorders. In several instances, these histograms have characteristic appearances in a wide range of pathological conditions. In some hematologic disorders, such as iron deficiency or megaloblastic anemia, sequential histograms can clearly show the progress of treatment and management. Conclusion: These graphical data are often accompanied by other automated CBC parameters and microscopic examination of peripheral blood smears (PBS), and can help in monitoring and

  9. A modular, prospective, semi-automated drug safety monitoring system for use in a distributed data environment.

    Science.gov (United States)

    Gagne, Joshua J; Wang, Shirley V; Rassen, Jeremy A; Schneeweiss, Sebastian

    2014-06-01

    The aim of this study was to develop and test a semi-automated process for conducting routine active safety monitoring for new drugs in a network of electronic healthcare databases. We built a modular program that semi-automatically performs cohort identification, confounding adjustment, diagnostic checks, aggregation and effect estimation across multiple databases, and application of a sequential alerting algorithm. During beta-testing, we applied the system to five databases to evaluate nine examples emulating prospective monitoring with retrospective data (five pairs for which we expected signals, two negative controls, and two examples for which it was uncertain whether a signal would be expected): cerivastatin versus atorvastatin and rhabdomyolysis; paroxetine versus tricyclic antidepressants and gastrointestinal bleed; lisinopril versus angiotensin receptor blockers and angioedema; ciprofloxacin versus macrolide antibiotics and Achilles tendon rupture; rofecoxib versus non-selective non-steroidal anti-inflammatory drugs (ns-NSAIDs) and myocardial infarction; telithromycin versus azithromycin and hepatotoxicity; rosuvastatin versus atorvastatin and diabetes and rhabdomyolysis; and celecoxib versus ns-NSAIDs and myocardial infarction. We describe the program, the necessary inputs, and the assumed data environment. In beta-testing, the system generated four alerts, all among positive control examples (i.e., lisinopril and angioedema; rofecoxib and myocardial infarction; ciprofloxacin and tendon rupture; and cerivastatin and rhabdomyolysis). Sequential effect estimates for each example were consistent in direction and magnitude with existing literature. Beta-testing across nine drug-outcome examples demonstrated the feasibility of the proposed semi-automated prospective monitoring approach. In retrospective assessments, the system identified an increased risk of myocardial infarction with rofecoxib and an increased risk of rhabdomyolysis with cerivastatin years
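
    The sequential alerting idea can be illustrated with a toy check that compares cumulative incidence between cohorts after each monitoring period. This is a simplified stand-in; the system described uses formal sequential methods, confounding adjustment, and multiple databases. All numbers below are invented:

        import math

        def rate_ratio_alert(exposed, comparator, z_crit=2.8):
            """exposed/comparator: lists of (events, person_years) per period."""
            e_ev = e_py = c_ev = c_py = 0.0
            for period, ((a, pa), (b, pb)) in enumerate(zip(exposed, comparator), 1):
                e_ev, e_py = e_ev + a, e_py + pa
                c_ev, c_py = c_ev + b, c_py + pb
                if min(e_ev, c_ev) == 0:
                    continue
                log_rr = math.log((e_ev / e_py) / (c_ev / c_py))
                se = math.sqrt(1 / e_ev + 1 / c_ev)
                if log_rr / se > z_crit:          # simple critical bound crossed
                    return period, math.exp(log_rr)
            return None

        # Toy data with a growing excess in the exposed cohort:
        exposed    = [(4, 1000), (12, 1000), (24, 1000)]
        comparator = [(3, 1000), (4, 1000), (5, 1000)]
        print(rate_ratio_alert(exposed, comparator))   # alerts in period 3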

  10. Stimulus effects and the mediation of recognition memory.

    Science.gov (United States)

    McAdoo, Ryan M; Key, Kylie N; Gronlund, Scott D

    2018-04-19

    Two broad approaches characterize the type of evidence that mediates recognition memory: discrete state and continuous. Discrete-state models posit a thresholded memory process that provides accurate information about an item (it is detected) or, failing that, no mnemonic information about the item. Continuous models, in contrast, posit the existence of graded mnemonic information about an item. Evidence favoring 1 approach over the other has been mixed, suggesting the possibility that the mediation of recognition memory may be adaptable and influenced by other factors. We tested this possibility with 2 experiments that varied the semantic similarity of word targets and fillers. Experiment 1, which used semantically similar fillers, displayed evidence of continuous mediation (contrary to Kellen & Klauer, 2015), whereas Experiment 2, which used semantically dissimilar fillers, displayed evidence of discrete mediation. The results have implications for basic theories of recognition memory, as well as for theories of applied domains like eyewitness identification. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicle Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  12. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  13. Work Planning Automation at Mechanical Subdivision

    OpenAIRE

    Dzindzelėta, Vytautas

    2005-01-01

    Work planning automation, installation possibilities, and future outlook at a mechanical subdivision. The aim is to study how work planning has changed before and after the automation process and to analyse the automation process methodology.

  14. An open-source toolbox for automated phenotyping of mice in behavioral tasks

    Directory of Open Access Journals (Sweden)

    Tapan P Patel

    2014-10-01

    Full Text Available Classifying behavior patterns in mouse models of neurological, psychiatric and neurodevelopmental disorders is critical for understanding disease causality and treatment. However, complete characterization of behavior is time-intensive, prone to subjective scoring, and often requires specialized equipment. Although several reports describe automated home-cage monitoring and individual task scoring methods, we report the first open-source, comprehensive toolbox for automating the scoring of several common behavior tasks used by the neuroscience community. We show this new toolbox is robust and achieves equal or better consistency when compared to manual scoring methods. We use this toolbox to study the alterations in behavior that occur following blast-induced traumatic brain injury (bTBI), and study whether these behavior patterns are altered following genetic deletion of the transcription factor Ets-like kinase 1 (Elk-1). Due to the role of Elk-1 in neuronal survival and its proposed role in synaptic plasticity, we hypothesized that Elk-1 deletion would improve some neurobehavioral deficits, while impairing others, following blast exposure. In Elk-1 knockout animals, deficits in open field, spatial object recognition and elevated zero maze performance after blast exposure disappeared, while new significant deficits appeared in spatial and associative memory. These are the first data suggesting a molecular mediator of anxiety deficits following blast-induced traumatic brain injury, and they represent the utility of the broad screening tool we developed. More broadly, we envision this open-source toolbox will provide a more consistent and rapid analysis of behavior across many neurological diseases, promoting the rapid discovery of novel pathways mediating disease progression and treatment.
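
    A sketch of the kind of consistency check behind "equal or better consistency": agreement between automated and manual scores measured with Cohen's kappa. The labels and scores are invented for illustration:

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Chance-corrected agreement between two raters."""
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            ca, cb = Counter(rater_a), Counter(rater_b)
            expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
            return (observed - expected) / (1 - expected)

        manual    = ["rear", "groom", "freeze", "rear", "walk", "freeze", "walk"]
        automated = ["rear", "groom", "freeze", "walk", "walk", "freeze", "walk"]
        print(f"kappa = {cohens_kappa(manual, automated):.2f}")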

  15. Managing Data, Provenance and Chaos through Standardization and Automation at the Georgia Coastal Ecosystems LTER Site

    Science.gov (United States)

    Sheldon, W.

    2013-12-01

    Managing data for a large, multidisciplinary research program such as a Long Term Ecological Research (LTER) site is a significant challenge, but also presents unique opportunities for data stewardship. LTER research is conducted within multiple organizational frameworks (i.e. a specific LTER site as well as the broader LTER network), and addresses both specific goals defined in an NSF proposal as well as broader goals of the network; therefore, every LTER data set can be linked to rich contextual information to guide interpretation and comparison. The challenge is how to link the data to this wealth of contextual metadata. At the Georgia Coastal Ecosystems LTER we developed an integrated information management system (GCE-IMS) to manage, archive and distribute data, metadata and other research products as well as manage project logistics, administration and governance. This system allows us to store all project information in one place and provide dynamic links through web applications and services to ensure content is always up to date on the web as well as in data set metadata. The database model supports tracking changes over time in personnel roles, projects and governance decisions, allowing these databases to serve as canonical sources of project history. Storing project information in a central database has also allowed us to standardize both the formatting and content of critical project information, including personnel names, roles, keywords, place names, attribute names, units, and instrumentation, providing consistency and improving data and metadata comparability. Lookup services for these standard terms also simplify data entry in web and database interfaces. We have also coupled the GCE-IMS to our MATLAB- and Python-based data processing tools (i.e. through database connections) to automate metadata generation and packaging of tabular and GIS data products for distribution. Data processing history is automatically tracked throughout the data

  16. Physiological Self-Regulation and Adaptive Automation

    Science.gov (United States)

    Prinzel, Lawrence J.; Pope, Alan T.; Freeman, Frederick G.

    2007-01-01

    Adaptive automation has been proposed as a solution to current problems of human-automation interaction. Past research has shown the potential of this advanced form of automation to enhance pilot engagement and lower cognitive workload. However, there have been concerns voiced regarding issues, such as automation surprises, associated with the use of adaptive automation. This study examined the use of psychophysiological self-regulation training with adaptive automation that may help pilots deal with these problems through the enhancement of cognitive resource management skills. Eighteen participants were assigned to 3 groups (self-regulation training, false feedback, and control) and performed resource management, monitoring, and tracking tasks from the Multiple Attribute Task Battery. The tracking task was cycled between 3 levels of task difficulty (automatic, adaptive aiding, manual) on the basis of the electroencephalogram-derived engagement index. The other two tasks remained in automatic mode, which included a single automation failure. Those participants who had received self-regulation training performed significantly better and reported lower National Aeronautics and Space Administration Task Load Index scores than participants in the false feedback and control groups. The theoretical and practical implications of these results for adaptive automation are discussed.
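
    Work in this line of research commonly derives the engagement index from EEG band powers as beta/(alpha + theta); assuming that definition, a minimal sketch of index-driven task cycling might look as follows, with simulated band powers and invented thresholds:

        import random

        def engagement_index(alpha, beta, theta):
            # Common formulation in this literature; an assumption here.
            return beta / (alpha + theta)

        mode = "automatic"
        for epoch in range(10):
            # Simulated band powers for one scoring epoch.
            alpha, beta, theta = (random.uniform(1, 5) for _ in range(3))
            idx = engagement_index(alpha, beta, theta)
            # Low engagement -> hand the task to the operator (manual);
            # high engagement -> return it to the automation.
            if idx < 0.4:
                mode = "manual"
            elif idx > 0.9:
                mode = "automatic"
            else:
                mode = "adaptive aiding"
            print(f"epoch {epoch}: index={idx:.2f} -> {mode}")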

  17. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: SSBD. Contact: RIKEN Quantitative Biology Center, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan; Shuichi Onami; E-mail. Database classification: Other Molecular Biology Databases; Dynamic databa… Organism: Taxonomy Name: Caenorhabditis elegans, Taxonomy ID: 6239; Taxonomy Name: Escherichia coli, Taxonomy ID: 562. Database description: Systems Scie… Reference: …i Onami. Journal: Bioinformatics, April 2015, Volume 31, Issue 7. External Links: Original website information.

  18. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: GETDB. Alternative name: Gal4 Enhancer Trap Insertion Database. DOI: 10.18908/lsdba.nbdc00236-000. Creator: Shigeo Haya…; Chuo-ku, Kobe 650-0047; Tel: +81-78-306-3185; FAX: +81-78-306-3183; E-mail. Database classification: Expression …; Invertebrate genome database. Organism: Taxonomy Name: Drosophila melanogaster; Taxonomy ID: 7227. Database des… External Links: Original website information. Database maintenance site: Drosophila Genetic Resource…

  19. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST has modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service - Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  20. Buying Program of the Standard Automated Materiel Management System. Automated Small Purchase System: Defense Supply Center Philadelphia

    National Research Council Canada - National Science Library

    2001-01-01

    The Standard Automated Materiel Management System Automated Small Purchase System is a fully automated micro-purchase system used by the General and Industrial Directorate at the Defense Supply Center Philadelphia...

  1. libChEBI: an API for accessing the ChEBI database.

    Science.gov (United States)

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
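
    Assuming the Python distribution of the library (libchebipy) and the accessor names documented in the project's repository, a minimal usage sketch might look as follows; the method names should be checked against the current release:

        # First use triggers the (automated) download of ChEBI flat files,
        # which are then held locally, as the abstract describes.
        from libchebipy import ChebiEntity

        water = ChebiEntity("CHEBI:15377")
        print(water.get_name())      # e.g. "water"
        print(water.get_formula())   # e.g. "H2O"

        # The ontology links make ChEBI more than a lookup table: walk
        # outgoing relations (such as is_a) to related entities.
        for rel in water.get_outgoings():
            print(rel.get_type(), rel.get_target_chebi_id())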

  2. Designing for Peta-Scale in the LSST Database

    Science.gov (United States)

    Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.

    2007-10-01

    The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
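
    The horizontal-partitioning idea can be illustrated with a toy routing function that maps sky coordinates to a chunk and a server, so that spatially bounded queries touch only the relevant servers. This is a simplified stand-in for the DMS design, which uses far more refined spatial chunking and indexing:

        N_SERVERS = 8

        def chunk_id(ra_deg: float, dec_deg: float, chunks_per_axis: int = 36) -> int:
            """Map sky coordinates to a coarse rectangular chunk."""
            i = int(ra_deg // (360 / chunks_per_axis)) % chunks_per_axis
            j = int((dec_deg + 90) // (180 / chunks_per_axis)) % chunks_per_axis
            return i * chunks_per_axis + j

        def server_for(ra_deg: float, dec_deg: float) -> int:
            """Horizontal partitioning: route a chunk to one of the servers."""
            return chunk_id(ra_deg, dec_deg) % N_SERVERS

        catalog = [(10.68, 41.27), (83.82, -5.39), (266.42, -29.01)]
        for ra, dec in catalog:
            print(f"object at ({ra}, {dec}) -> server {server_for(ra, dec)}")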

  3. Comparison of Size Modulation Standard Automated Perimetry and Conventional Standard Automated Perimetry with a 10-2 Test Program in Glaucoma Patients.

    Science.gov (United States)

    Hirasawa, Kazunori; Takahashi, Natsumi; Satou, Tsukasa; Kasahara, Masayuki; Matsumura, Kazuhiro; Shoji, Nobuyuki

    2017-08-01

    This prospective observational study compared the performance of size modulation standard automated perimetry with the Octopus 600 10-2 test program, in which the stimulus size is modulated during testing based on stimulus intensity, with that of conventional standard automated perimetry with the Humphrey 10-2 test program in glaucoma patients. Eighty-seven eyes of 87 glaucoma patients underwent size modulation standard automated perimetry with the Dynamic strategy and conventional standard automated perimetry using the SITA standard strategy. The main outcome measures were global indices, point-wise threshold, visual defect size and depth, reliability indices, and test duration; these were compared between size modulation standard automated perimetry and conventional standard automated perimetry. Global indices and point-wise threshold values between size modulation standard automated perimetry and conventional standard automated perimetry were moderately to strongly correlated (p …). The threshold value was higher with size modulation standard automated perimetry than with conventional standard automated perimetry, but the visual-field defect size was smaller on size modulation standard automated perimetry than on conventional standard automated perimetry (p …). The reliability indices, particularly the false-negative response, of size modulation standard automated perimetry were worse than those of conventional standard automated perimetry (p …), and the test duration was shorter with size modulation standard automated perimetry than with conventional standard automated perimetry (p = 0.02). Global indices and the point-wise threshold value of the two testing modalities correlated well. However, a large stimulus presented in an area of decreased sensitivity with size modulation standard automated perimetry could lead to underestimation of the actual threshold in the 10-2 test protocol, as compared with conventional standard automated perimetry.

  4. Development strategy and process models for phased automation of design and digital manufacturing electronics

    Science.gov (United States)

    Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.

    2018-03-01

    The strategy of ensuring the quality of electronics is presented as the most important one. To provide quality, the process sequence is considered and modeled as a Markov chain. The improvement is distinguished by simple database means of design for manufacturing, intended for future step-by-step development. Phased automation of design and digital manufacturing of electronics is proposed. MATLAB modelling results showed an increase in effectiveness. New tools and software should be more effective. A primary digital model is proposed to represent the product across the process sequence, from individual processes to the whole life cycle.
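
    A minimal sketch of modeling a process sequence as a Markov chain, in the spirit of the paper's quality model; the states and transition probabilities below are invented for illustration:

        import numpy as np

        states = ["design", "layout", "assembly", "test", "pass", "fail"]
        P = np.array([
            # design layout assembly test  pass  fail
            [0.05,  0.90,  0.00,    0.00, 0.00, 0.05],   # design (5% rework)
            [0.02,  0.03,  0.90,    0.00, 0.00, 0.05],   # layout
            [0.00,  0.02,  0.03,    0.92, 0.00, 0.03],   # assembly
            [0.00,  0.00,  0.05,    0.05, 0.85, 0.05],   # test
            [0.00,  0.00,  0.00,    0.00, 1.00, 0.00],   # pass (absorbing)
            [0.00,  0.00,  0.00,    0.00, 0.00, 1.00],   # fail (absorbing)
        ])

        # Long-run absorption probabilities starting from "design".
        dist = np.zeros(len(states)); dist[0] = 1.0
        for _ in range(200):
            dist = dist @ P
        print(dict(zip(states, dist.round(3))))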

  5. Geena 2, improved automated analysis of MALDI/TOF mass spectra.

    Science.gov (United States)

    Romano, Paolo; Profumo, Aldo; Rocco, Mattia; Mangerini, Rosa; Ferri, Fabio; Facchiano, Angelo

    2016-03-02

    Mass spectrometry (MS) is producing high volumes of data supporting oncological sciences, especially for translational research. Most of the related elaborations can be carried out by combining existing tools at different levels, but little is currently available for the automation of the fundamental steps. For the analysis of MALDI/TOF spectra, a number of pre-processing steps are required, including joining of isotopic abundances for a given molecular species, normalization of signals against an internal standard, background noise removal, averaging multiple spectra from the same sample, and aligning spectra from different samples. In this paper, we present Geena 2, a public software tool for the automated execution of these pre-processing steps for MALDI/TOF spectra. Geena 2 has been developed in a Linux-Apache-MySQL-PHP web development environment, with scripts in PHP and Perl. Input and output are managed as simple formats that can be consumed by any database system and spreadsheet software. Input data may also be stored in a MySQL database. Processing methods are based on original heuristic algorithms which are introduced in the paper. Three simple and intuitive web interfaces are available: the Standard Search Interface, which allows complete control over all parameters; the Bright Search Interface, which leaves to the user the possibility to tune parameters for alignment of spectra; and the Quick Search Interface, which limits the number of parameters to a minimum by using default values for the majority of parameters. Geena 2 has been utilized, in conjunction with a statistical analysis tool, in three published experimental works: a proteomic study on the effects of long-term cryopreservation on the low molecular weight fraction of serum proteome, and two retrospective serum proteomic studies, one on the risk of developing breast cancer in patients affected by gross cystic disease of the breast (GCDB) and the other for the identification of a predictor of
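
    Two of the pre-processing steps described (normalization against an internal standard and averaging of replicate spectra) can be sketched in a few lines of numpy. This is a simplified stand-in for Geena 2's own heuristic algorithms:

        import numpy as np

        def normalize(mz, intensity, standard_mz, tol=0.5):
            """Scale a spectrum so the internal-standard peak has intensity 1."""
            window = np.abs(mz - standard_mz) < tol
            return intensity / intensity[window].max()

        mz = np.linspace(1000, 1010, 2001)
        replicates = []
        for shift in (0.0, 0.1, -0.05):            # replicate spectra, jittered
            spec = np.exp(-((mz - 1004.0 - shift) ** 2) / 0.01)   # analyte peak
            spec += 0.5 * np.exp(-((mz - 1008.0) ** 2) / 0.01)    # internal standard
            replicates.append(normalize(mz, spec, standard_mz=1008.0))

        # Average the normalized replicates from the same sample.
        averaged = np.mean(replicates, axis=0)
        print("analyte/standard ratio:", averaged[np.abs(mz - 1004.0) < 0.2].max())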

  6. Laboratory automation and LIMS in forensics

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hansen, Anders Johannes; Morling, Niels

    2013-01-01

    Implementation of laboratory automation and LIMS in a forensic laboratory enables the laboratory to standardize sample processing. Automated liquid handlers can increase throughput and eliminate manual repetitive pipetting operations, known to result in occupational injuries to the technical staff. Furthermore, implementation of automated liquid handlers reduces the risk of sample misplacement. A LIMS can efficiently control the sample flow through the laboratory and manage the results of the conducted tests for each sample. Integration of automated liquid handlers with a LIMS provides the laboratory with the tools required for setting up automated production lines of complex laboratory processes and monitoring the whole process and the results. Combined, this enables processing of a large number of samples. Selection of the best automated solution for an individual laboratory should be based on user…

  7. Automation and robotics

    Science.gov (United States)

    Montemerlo, Melvin

    1988-01-01

    The Autonomous Systems area focuses on the automation of control systems for the Space Station and mission operations. Telerobotics focuses on automation for in-space servicing, assembly, and repair. Autonomous Systems and Telerobotics each have a planned sequence of integrated demonstrations showing the evolutionary advance of the state of the art. Progress is briefly described for each area of concern.

  8. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Aman, J.K.

    1986-01-01

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 x 10^9 bits per 14 x 17 inch film, the equivalent of 2200 computer floppy discs. Parts handling systems and robotics, applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to fully automate the exposure step. Finally, computer-aided interpretation appears on the horizon. A unit which laser-scans a 14 x 17 inch film in 6-8 seconds can digitize film information for further manipulation and possible automatic interrogation (computer-aided interpretation). The system, called FDRS (for Film Digital Radiography System), is moving toward 50 micron (approx. 16 lines/mm) resolution. This is believed to meet the majority of image content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system. (author)

  9. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  10. A computer vision-based automated Figure-8 maze for working memory test in rodents.

    Science.gov (United States)

    Pedigo, Samuel F; Song, Eun Young; Jung, Min Whan; Kim, Jeansok J

    2006-09-30

    The benchmark test for prefrontal cortex (PFC)-mediated working memory in rodents is a delayed alternation task utilizing variations of T-maze or Figure-8 maze, which requires the animals to make specific arm entry responses for reward. In this task, however, manual procedures involved in shaping target behavior, imposing delays between trials and delivering rewards can potentially influence the animal's performance on the maze. Here, we report an automated Figure-8 maze which does not necessitate experimenter-subject interaction during shaping, training or testing. This system incorporates a computer vision system for tracking, motorized gates to impose delays, and automated reward delivery. The maze is controlled by custom software that records the animal's location and activates the gates according to the animal's behavior and a control algorithm. The program performs calculations of task accuracy, tracks movement sequence through the maze, and provides other dependent variables (such as running speed, time spent in different maze locations, activity level during delay). Testing in rats indicates that the performance accuracy is inversely proportional to the delay interval, decreases with PFC lesions, and that animals anticipate timing during long delays. Thus, our automated Figure-8 maze is effective at assessing working memory and provides novel behavioral measures in rodents.
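
    The control logic described (tracking, alternation scoring, imposed delays, reward delivery) reduces to a simple loop. The sketch below stubs out the tracker and feeder interfaces, which are hypothetical stand-ins for the system's video tracking and hardware control:

        import random
        import time

        def read_arm_from_tracker():
            """Stub for the computer-vision tracker: returns 'left'/'right'
            when the animal completes an arm entry, else None."""
            return random.choice(["left", "right", None])

        def dispense_reward():
            """Stub for the automated feeder interface."""
            print("reward delivered")

        last_arm, correct, trials = None, 0, 0
        DELAY_S = 0.1   # inter-trial delay imposed by the gates (shortened here)

        while trials < 10:
            arm = read_arm_from_tracker()
            if arm is None:
                continue
            trials += 1
            if last_arm is not None and arm != last_arm:
                correct += 1          # alternation = correct working-memory choice
                dispense_reward()
            last_arm = arm
            time.sleep(DELAY_S)       # gates closed during the delay period

        print(f"accuracy: {correct}/{max(trials - 1, 1)}")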

  11. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: KAIKOcDNA. Alternative name: … Creator: National Institute of Agrobiological Sciences, Akiya Jouraku; E-mail. Database classification: Nucleotide Sequence Databases. Organism: Taxonomy Name: Bombyx mori; Taxonomy ID: 7091. Database des… Journal: G3 (Bethesda), 2013 Sep, vol. 9. External Links: Original website information. Database maintenance si… URL of Web services: not available. Need for user registration: not available.

  12. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database Download. First of all, please read the license of this database. Data … (1.4 KB). Simple search and download; download via FTP. The FTP server is sometimes jammed; if it is, access [here]. About This Database: Database Description, Download, License, Update History of This Database. Site Policy | Contact Us.

  13. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database License: License to Use This Database. Last updated: 2017/02/27. This license specifies the terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here.

  14. Automated PCB Inspection System

    Directory of Open Access Journals (Sweden)

    Syed Usama BUKHARI

    2017-05-01

    Full Text Available Development of an automated PCB inspection system as per the needs of industry is a challenging task. In this paper a case study is presented to exhibit a proposed system for the migration from a manual PCB inspection system to an automated PCB inspection system, with minimal intervention in the existing production flow, for a leading automotive manufacturing company. A detailed design of the system, based on computer vision and followed by testing and analysis, is proposed in order to aid the manufacturer in the process of automation.
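
    A minimal computer-vision sketch in the spirit of the proposed system: comparing a captured board image with a golden reference and flagging boards whose difference exceeds a threshold. The OpenCV calls are standard, but the thresholds and pipeline are invented; a production system would add registration, lighting normalization, and per-component checks:

        import cv2
        import numpy as np

        def inspect(board_gray: np.ndarray, reference_gray: np.ndarray,
                    pixel_tol: int = 40, max_bad_fraction: float = 0.002) -> bool:
            """Return True if the board passes inspection."""
            diff = cv2.absdiff(board_gray, reference_gray)
            _, bad = cv2.threshold(diff, pixel_tol, 255, cv2.THRESH_BINARY)
            bad_fraction = cv2.countNonZero(bad) / bad.size
            return bad_fraction <= max_bad_fraction

        # Synthetic example: a reference board and a copy with a missing pad.
        reference = np.full((200, 300), 128, dtype=np.uint8)
        board = reference.copy()
        board[90:110, 140:160] = 0        # simulated defect
        print("pass" if inspect(board, reference) else "fail")   # -> fail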

  15. Two Automated Techniques for Carotid Lumen Diameter Measurement: Regional versus Boundary Approaches.

    Science.gov (United States)

    Araki, Tadashi; Kumar, P Krishna; Suri, Harman S; Ikeda, Nobutaka; Gupta, Ajay; Saba, Luca; Rajan, Jeny; Lavra, Francesco; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Suri, Jasjit S

    2016-07-01

    The degree of stenosis in the carotid artery can be predicted using the automated carotid lumen diameter (LD) measured from B-mode ultrasound images. Systolic velocity-based methods for measurement of LD are subjective. With the advancement of high resolution imaging, image-based methods have started to emerge. However, they require robust image analysis for accurate LD measurement. This paper presents two different algorithms for automated segmentation of the lumen borders in carotid ultrasound images. Both algorithms are modeled as a two-stage process. Stage one consists of a global-based model using a scale-space framework for the extraction of the region of interest. This stage is common to both algorithms. Stage two is modeled using a local-based strategy that extracts the lumen interfaces. At this stage, algorithm-1 is modeled as a region-based strategy using a classification framework, whereas algorithm-2 is modeled as a boundary-based approach that uses the level set framework. Two sets of databases (DB), a Japan DB (JDB) (202 patients, 404 images) and a Hong Kong DB (HKDB) (50 patients, 300 images), were used in this study. Two trained neuroradiologists performed manual LD tracings. The mean automated LD measured was 6.35 ± 0.95 mm for the JDB and 6.20 ± 1.35 mm for the HKDB. The precision-of-merit was 97.4 % and 98.0 % with respect to the two manual tracings for the JDB, and 99.7 % and 97.9 % with respect to the two manual tracings for the HKDB. Statistical tests such as ANOVA, Chi-Squared, T-test, and Mann-Whitney test were conducted to show the stability and reliability of the automated techniques.

  16. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: AcEST. Alternative name: … Contact: …hi, Tokyo-to 192-0397; Tel: +81-42-677-1111 (ext. 3654); E-mail. Database classificat… Organism: Taxonomy Name: Adiantum capillus-veneris; Taxonomy ID: 13818. Database description: This is a database of EST sequences of Adiantum capillus-veneris … (3): 223-227. External Links: Original website information. Database maintenance site: Plant Environmental Res…

  17. Automation of Electrical Cable Harnesses Testing

    Directory of Open Access Journals (Sweden)

    Zhuming Bi

    2017-12-01

    Full Text Available Traditional automated systems, such as industrial robots, are applied in well-structured environments, and many automated systems have limited adaptability to deal with complexity and uncertainty; therefore, the applications of industrial robots in small- and medium-sized enterprises (SMEs) are very limited. The majority of manual operations in SMEs are too complicated for automation. Rapidly developing information technologies (IT) have brought new opportunities for the automation of manufacturing and assembly processes in ill-structured environments. Note that an automation solution should be designed to meet the given requirements of the specified application, and it differs from one application to another. In this paper, we look into the feasibility of automated testing for electric cable harnesses, and our focus is on some generic strategies for improving the adaptability of automation solutions. In particular, the concept of modularization is adopted in developing hardware and software to maximize system adaptability in testing a wide scope of products. The proposed system has been implemented, and its performance has been evaluated by executing tests on actual products. The testing experiments showed that the automated system greatly outperformed manual operations in terms of cost-saving, productivity and reliability. Due to its potential for increasing system adaptability and reducing cost, the presented work has theoretical and practical significance for extension to other automation solutions in SMEs.

  18. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database License: License to Use This Database. Last updated: 2017/03/13. This license specifies the terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here.

  19. Prototype semantic infrastructure for automated small molecule classification and annotation in lipidomics.

    Science.gov (United States)

    Chepelev, Leonid L; Riazanov, Alexandre; Kouznetsov, Alexandre; Low, Hong Sang; Dumontier, Michel; Baker, Christopher J O

    2011-07-26

    The development of high-throughput experimentation has led to astronomical growth in biologically relevant lipids and lipid derivatives identified, screened, and deposited in numerous online databases. Unfortunately, efforts to annotate, classify, and analyze these chemical entities have largely remained in the hands of human curators using manual or semi-automated protocols, leaving many novel entities unclassified. Since chemical function is often closely linked to structure, accurate structure-based classification and annotation of chemical entities is imperative to understanding their functionality. As part of an exploratory study, we have investigated the utility of semantic web technologies in automated chemical classification and annotation of lipids. Our prototype framework consists of two components: an ontology and a set of federated web services that operate upon it. The formal lipid ontology we use here extends a part of the LiPrO ontology and draws on the lipid hierarchy in the LIPID MAPS database, as well as literature-derived knowledge. The federated semantic web services that operate upon this ontology are deployed within the Semantic Annotation, Discovery, and Integration (SADI) framework. Structure-based lipid classification is enacted by two core services. Firstly, a structural annotation service detects and enumerates relevant functional groups for a specified chemical structure. A second service reasons over lipid ontology class descriptions using the attributes obtained from the annotation service and identifies the appropriate lipid classification. We extend the utility of these core services by combining them with additional SADI services that retrieve associations between lipids and proteins and identify publications related to specified lipid types. We analyze the performance of SADI-enabled eicosanoid classification relative to the LIPID MAPS classification and reflect on the contribution of our integrative methodology in the context of
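
    The structural annotation step (detecting and enumerating functional groups for a given structure) can be approximated with SMARTS matching. The RDKit sketch below is a stand-in for the paper's SADI-deployed service, with simplified patterns:

        from rdkit import Chem

        # Illustrative SMARTS patterns, not the service's actual definitions.
        FUNCTIONAL_GROUPS = {
            "carboxylic acid": Chem.MolFromSmarts("C(=O)[OH]"),
            "hydroxyl":        Chem.MolFromSmarts("[CX4][OH]"),
            "alkene":          Chem.MolFromSmarts("C=C"),
        }

        def annotate(smiles: str) -> dict:
            """Enumerate functional-group occurrences in a structure."""
            mol = Chem.MolFromSmiles(smiles)
            return {name: len(mol.GetSubstructMatches(patt))
                    for name, patt in FUNCTIONAL_GROUPS.items()}

        # Arachidonic acid, an eicosanoid precursor: 4 double bonds, 1 acid.
        print(annotate("CCCCCC=CCC=CCC=CCC=CCCCC(=O)O"))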

  20. Prototype semantic infrastructure for automated small molecule classification and annotation in lipidomics

    Directory of Open Access Journals (Sweden)

    Dumontier Michel

    2011-07-01

    Full Text Available Abstract Background The development of high-throughput experimentation has led to astronomical growth in biologically relevant lipids and lipid derivatives identified, screened, and deposited in numerous online databases. Unfortunately, efforts to annotate, classify, and analyze these chemical entities have largely remained in the hands of human curators using manual or semi-automated protocols, leaving many novel entities unclassified. Since chemical function is often closely linked to structure, accurate structure-based classification and annotation of chemical entities is imperative to understanding their functionality. Results As part of an exploratory study, we have investigated the utility of semantic web technologies in automated chemical classification and annotation of lipids. Our prototype framework consists of two components: an ontology and a set of federated web services that operate upon it. The formal lipid ontology we use here extends a part of the LiPrO ontology and draws on the lipid hierarchy in the LIPID MAPS database, as well as literature-derived knowledge. The federated semantic web services that operate upon this ontology are deployed within the Semantic Annotation, Discovery, and Integration (SADI framework. Structure-based lipid classification is enacted by two core services. Firstly, a structural annotation service detects and enumerates relevant functional groups for a specified chemical structure. A second service reasons over lipid ontology class descriptions using the attributes obtained from the annotation service and identifies the appropriate lipid classification. We extend the utility of these core services by combining them with additional SADI services that retrieve associations between lipids and proteins and identify publications related to specified lipid types. We analyze the performance of SADI-enabled eicosanoid classification relative to the LIPID MAPS classification and reflect on the contribution of

  1. Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.

    Science.gov (United States)

    Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P

    2016-04-01

    Reduce animal usage for discovery-stage PK studies for biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach demonstrated comparable data to a previous study using single mice per time point liquid samples while reducing animal and compound requirements by 14-fold. Reduction in animals and drug material is enabled by the use of automated serial DBS microsampling for mice studies in discovery-stage studies of protein therapeutics.

  2. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced main control rooms (MCRs), various automation systems are applied to enhance human performance and reduce human errors in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it is necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities
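
    A toy version of the suggested calculation: combine per-activity human error probabilities under different LOAs into an overall failure probability. All numbers are invented for illustration:

        COGNITIVE_ACTIVITIES = ["monitoring", "diagnosis", "decision", "action"]

        # Hypothetical per-activity human error probabilities under each LOA.
        HEP = {
            "manual":      [0.010, 0.020, 0.010, 0.005],
            "shared":      [0.008, 0.010, 0.008, 0.002],
            "supervisory": [0.015, 0.008, 0.005, 0.001],  # vigilance cost rises
        }

        def overall_failure_probability(heps):
            """Probability that at least one cognitive activity fails,
            assuming independence between activities."""
            p_success = 1.0
            for p in heps:
                p_success *= (1.0 - p)
            return 1.0 - p_success

        for loa, heps in HEP.items():
            print(f"{loa:12s} P(failure) = {overall_failure_probability(heps):.4f}")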

  3. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between subprojects for sharing and integrating the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage documents and reports produced since project accomplishment.

  4. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development, built on Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects for sharing and integrating research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure, and the reserved documents database was developed to manage documents and reports produced since project accomplishment.

  5. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: Database name: RPSD; alternative name: Rice Protein Structure Database; DOI: 10.18908/lsdba.nbdc00749-000. Creator: Toshimasa Yamazaki, National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Structure Databases - Protein structure. Database maintenance site: National Institu...

  6. Automating spectral measurements

    Science.gov (United States)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is a mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
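
    For readers unfamiliar with COM automation, the client side of such an arrangement might look like the following sketch using pywin32. The ProgID "FTG.SpectroApp" and the method names are hypothetical stand-ins; the paper does not document the product's actual API.

      import win32com.client  # pywin32; Windows-only

      # "FTG.SpectroApp" and the method names below are hypothetical
      # stand-ins; the article does not document the product's actual API.
      daq = win32com.client.Dispatch("FTG.SpectroApp")
      daq.LoadProcedure("filter_scan.bas")  # a spectro-BASIC scan procedure
      daq.Run()                             # execute the scan
      spectrum = daq.GetSpectrum()          # retrieve the measured data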

  7. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: Database name: FANTOM5. Organism taxonomy: Rattus norvegicus (Taxonomy ID: 10116); Macaca mulatta (Taxonomy ID: 9544). Database maintenance site: RIKEN Center for Life Science Technologies. Available web services: not available. Need for user registration: not available.

  8. Ask the experts: automation: part I.

    Science.gov (United States)

    Allinson, John L; Blick, Kenneth E; Cohen, Lucinda; Higton, David; Li, Ming

    2013-08-01

    Bioanalysis invited a selection of leading researchers to express their views on automation in the bioanalytical laboratory. The topics discussed include the challenges that the modern bioanalyst faces when integrating automation into existing drug-development processes, the impact of automation, and how the experts envision the modern bioanalytical laboratory changing in the near future. Their responses provide valuable insight into the impact of automation and the future of the constantly evolving bioanalytical laboratory.

  9. An Automation Survival Guide for Media Centers.

    Science.gov (United States)

    Whaley, Roger E.

    1989-01-01

    Reviews factors that should affect the decision to automate a school media center and offers suggestions for the automation process. Topics discussed include getting the library collection ready for automation, deciding what automated functions are needed, evaluating software vendors, selecting software, and budgeting. (CLB)

  10. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  11. Mindboggle: Automated brain labeling with multiple atlases

    International Nuclear Information System (INIS)

    Klein, Arno; Mensh, Brett; Ghosh, Satrajit; Tourville, Jason; Hirsch, Joy

    2005-01-01

    To make inferences about brain structures or activity across multiple individuals, one first needs to determine the structural correspondences across their image data. We have recently developed Mindboggle as a fully automated, feature-matching approach to assign anatomical labels to cortical structures and activity in human brain MRI data. Label assignment is based on structural correspondences between labeled atlases and unlabeled image data, where an atlas consists of a set of labels manually assigned to a single brain image. In the present work, we study the influence of using variable numbers of individual atlases to nonlinearly label human brain image data. Each brain image voxel of each of 20 human subjects is assigned a label by each of the remaining 19 atlases using Mindboggle. The most common label is selected and is given a confidence rating based on the number of atlases that assigned that label. The automatically assigned labels for each subject brain are compared with the manual labels for that subject (its atlas). Unlike recent approaches that transform subject data to a labeled, probabilistic atlas space (constructed from a database of atlases), Mindboggle labels a subject by each atlas in a database independently. When Mindboggle labels a human subject's brain image with at least four atlases, the resulting label agreement with coregistered manual labels is significantly higher than when only a single atlas is used. Different numbers of atlases provide significantly higher label agreements for individual brain regions. Increasing the number of reference brains used to automatically label a human subject brain improves labeling accuracy with respect to manually assigned labels. Mindboggle software can provide confidence measures for labels based on probabilistic assignment of labels and could be applied to large databases of brain images
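
    The voting scheme is straightforward to sketch: for each voxel, take the label most atlases agree on and report the agreeing fraction as the confidence. This is a minimal illustration of majority-vote label fusion, not Mindboggle's actual implementation.

      import numpy as np

      def fuse_labels(atlas_labels):
          """Majority-vote fusion of per-atlas label assignments.

          atlas_labels: integer array of shape (n_atlases, n_voxels); each
          row holds the labels one atlas assigned to every voxel. Returns
          the winning label per voxel and a confidence equal to the
          fraction of atlases that agreed on it.
          """
          n_atlases, n_voxels = atlas_labels.shape
          fused = np.empty(n_voxels, dtype=atlas_labels.dtype)
          confidence = np.empty(n_voxels)
          for v in range(n_voxels):
              labels, counts = np.unique(atlas_labels[:, v], return_counts=True)
              best = np.argmax(counts)
              fused[v] = labels[best]
              confidence[v] = counts[best] / n_atlases
          return fused, confidence

      # Five atlases voting on three voxels
      votes = np.array([[1, 2, 3], [1, 2, 3], [1, 5, 3], [4, 2, 3], [1, 2, 7]])
      fused, conf = fuse_labels(votes)
      print(fused, conf)  # [1 2 3] [0.8 0.8 0.8]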

  12. Demands on digital automation; Anforderungen an die Digitale Automation

    Energy Technology Data Exchange (ETDEWEB)

    Bieler, P.

    1995-12-31

    In chapter 12 of the anthology on building control, the demands on digital automation are presented. The following aspects are discussed: the spectrum of company philosophies, the demands of customers/investors, of building/room use, and of operators, as well as the point of view of the builders of technical building installations. (BWI)

  13. Automated radiosynthesis of no-carrier-added 4-[18F]fluoroiodobenzene: a versatile building block in 18F radiochemistry.

    Science.gov (United States)

    Way, Jenilee Dawn; Wuest, Frank

    2014-02-01

    4-[18F]Fluoroiodobenzene ([18F]FIB) is a versatile building block in 18F radiochemistry used in various transition metal-mediated C-C and C-N cross-coupling reactions and [18F]fluoroarylation reactions. Various synthesis routes have been described for the preparation of [18F]FIB. However, to date, no automated synthesis of [18F]FIB has been reported to allow access to larger amounts of [18F]FIB in high radiochemical and chemical purity. Herein, we describe an automated synthesis of no-carrier-added [18F]FIB on a GE TRACERlab™ FX automated synthesis unit starting from commercially available (4-iodophenyl)diphenylsulfonium triflate as the labelling precursor. [18F]FIB was prepared in high radiochemical yields of 89 ± 10% (decay-corrected, n = 7) within 60 min, including HPLC purification. The radiochemical purity exceeded 95%, and the specific activity was greater than 40 GBq/μmol. Typically, 6.4 GBq of [18F]FIB could be obtained from an experiment starting from 10.4 GBq of [18F]fluoride.

  14. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  15. Automation's influence on nuclear power plants: a look at three accidents and how automation played a role.

    Science.gov (United States)

    Schmitt, Kara

    2012-01-01

    Nuclear power is one of the ways that we can design an efficient, sustainable future. Automation is the primary system used to assist operators in the task of monitoring and controlling nuclear power plants (NPPs). Automation performs tasks such as assessing the status of the plant's operations as well as making real-time, life-critical, situation-specific decisions. While the advantages and disadvantages of automation are well studied in a variety of domains, accidents remind us that there is still vulnerability to unknown variables. This paper looks at the effects of automation within three NPP accidents and incidents and considers why automation failed to prevent these accidents from occurring. It also reviews the accidents at the Three Mile Island, Chernobyl, and Fukushima Daiichi NPPs in order to determine where better use of automation could have resulted in a more desirable outcome.

  16. Automated detection of optical counterparts to GRBs with RAPTOR

    International Nuclear Information System (INIS)

    Wozniak, P. R.; Vestrand, W. T.; Evans, S.; White, R.; Wren, J.

    2006-01-01

    The RAPTOR system (RAPid Telescopes for Optical Response) is an array of several distributed robotic telescopes that automatically respond to GCN localization alerts. Raptor-S is a 0.4-m telescope with a 24 arcmin field of view employing a 1k x 1k Marconi CCD detector, and has already detected prompt optical emission from several GRBs within the first minute of the explosion. We present a real-time data analysis and alert system for automated identification of optical transients in Raptor-S GRB response data down to the sensitivity limit of ∼19 mag. Our custom data processing pipeline is designed to minimize the time required to reliably identify transients and extract actionable information. The system utilizes a networked PostgreSQL database server for catalog access and distributes email alerts for successful detections.
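
    The catalog cross-match at the heart of such a pipeline can be sketched as follows: a detection with no known counterpart within a small matching radius becomes a transient candidate. The table and column names are hypothetical, and the simple box match (which ignores the cos(dec) correction) is a simplification of what a production pipeline would do.

      import psycopg2

      MATCH_RADIUS_DEG = 5.0 / 3600.0  # 5 arcsec matching radius

      def find_transients(detections, conn):
          """Return detections lacking a counterpart in the source catalog.

          `detections` is a list of dicts with 'ra' and 'dec' in degrees;
          the `catalog` table and its columns are hypothetical names.
          """
          transients = []
          with conn.cursor() as cur:
              for det in detections:
                  cur.execute(
                      "SELECT COUNT(*) FROM catalog "
                      "WHERE ra BETWEEN %s AND %s AND dec BETWEEN %s AND %s",
                      (det["ra"] - MATCH_RADIUS_DEG, det["ra"] + MATCH_RADIUS_DEG,
                       det["dec"] - MATCH_RADIUS_DEG, det["dec"] + MATCH_RADIUS_DEG))
                  if cur.fetchone()[0] == 0:
                      transients.append(det)  # no catalog match: candidate
          return transients

      # conn = psycopg2.connect(dbname="raptor")  # hypothetical database name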

  17. Automated System Marketplace 1994.

    Science.gov (United States)

    Griffiths, Jose-Marie; Kertis, Kimberly

    1994-01-01

    Reports results of the 1994 Automated System Marketplace survey based on responses from 60 vendors. Highlights include changes in the library automation marketplace; estimated library systems revenues; minicomputer and microcomputer-based systems; marketplace trends; global markets and mergers; research needs; new purchase processes; and profiles…

  18. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  19. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  20. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure. The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge of how to go about your configuration management.

  1. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  2. Methods for Automated and Continuous Commissioning of Building Systems

    Energy Technology Data Exchange (ETDEWEB)

    Larry Luskay; Michael Brambley; Srinivas Katipamula

    2003-04-30

    Avoidance of poorly installed HVAC systems is best accomplished at the close of construction by having a building and its systems put "through their paces" with a well-conducted commissioning process. This research project focused on developing key components to enable the development of tools that will automatically detect and correct equipment operating problems, thus providing continuous and automatic commissioning of the HVAC systems throughout the life of a facility. A study of pervasive operating problems revealed that the following would most benefit from an automated and continuous commissioning process: (1) faulty economizer operation; (2) malfunctioning sensors; (3) malfunctioning valves and dampers; and (4) access to project design data. Methodologies for detecting system operation faults in these areas were developed and validated in bare-bones form within standard software such as spreadsheets, databases, and statistical or mathematical packages. Demonstrations included flow diagrams and simplified mock-up applications. Techniques to manage data were demonstrated by illustrating how test forms could be populated with original design information and the recommended sequence of operation for equipment systems. The proposed tools would use measured data, design data, and equipment operating parameters to diagnose system problems. Steps for future research are suggested to help move toward practical application of automated commissioning and its high potential to improve equipment availability, increase occupant comfort, and extend the life of system equipment.
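
    As an illustration of the first fault class, a rule-based economizer check over trend data might look like the sketch below. The thresholds and rule set are invented for the example and are not taken from the report.

      def economizer_faults(oat, rat, damper_pos, cooling_on, deadband=1.0):
          """Flag two common economizer faults from a snapshot of trend data.

          oat/rat: outdoor and return air temperature (deg C); damper_pos:
          outdoor-air damper position (0-1); cooling_on: mechanical cooling
          status. Thresholds are invented for the example.
          """
          faults = []
          # Free cooling is available but the economizer is not using it
          if cooling_on and oat < rat - deadband and damper_pos < 0.9:
              faults.append("free cooling available but damper mostly closed")
          # Damper wide open although outdoor air is warmer than return air
          if oat > rat + deadband and damper_pos > 0.2:
              faults.append("damper open under unfavorable conditions")
          return faults

      print(economizer_faults(oat=10.0, rat=22.0, damper_pos=0.15, cooling_on=True))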

  3. Future Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include wireless communications, advances in wireless video, wireless sensor networking, security in wireless networks, network measurement and management, hybrid and discrete-event systems, internet analytics and automation, robotic systems and applications, reconfigurable automation systems, and machine vision in automation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find them stimulating.

  4. Introduction matters: Manipulating trust in automation and reliance in automated driving.

    Science.gov (United States)

    Körber, Moritz; Baseler, Eva; Bengler, Klaus

    2018-01-01

    Trust in automation is a key determinant for the adoption of automated systems and their appropriate use. Therefore, it constitutes an essential research area for the introduction of automated vehicles to road traffic. In this study, we investigated the influence of trust-promoting (Trust promoted group) and trust-lowering (Trust lowered group) introductory information on reported trust, reliance behavior and take-over performance. Forty participants encountered three situations in a 17-min highway drive in a conditionally automated vehicle (SAE Level 3). Situation 1 and Situation 3 were non-critical situations where a take-over was optional. Situation 2 represented a critical situation where a take-over was necessary to avoid a collision. A non-driving-related task (NDRT) was presented between the situations to record the allocation of visual attention. Participants reporting a higher trust level spent less time looking at the road or instrument cluster and more time looking at the NDRT. The manipulation of introductory information resulted in medium differences in reported trust and influenced participants' reliance behavior. Participants of the Trust promoted group looked less at the road or instrument cluster and more at the NDRT. The odds that participants of the Trust promoted group would overrule the automated driving system in the non-critical situations were 3.65 times (Situation 1) to 5 times (Situation 3) higher. In Situation 2, the Trust promoted group's mean take-over time was 1154 ms longer and the mean minimum time-to-collision was 933 ms shorter. Six participants from the Trust promoted group, compared to none from the Trust lowered group, collided with the obstacle. The results demonstrate that the individual trust level influences how much drivers monitor the environment while performing an NDRT. Introductory information influences this trust level, reliance on an automated driving system, and if a critical take-over situation can be

  5. Twenty Years of Stereotype Threat Research: A Review of Psychological Mediators.

    Science.gov (United States)

    Pennington, Charlotte R; Heim, Derek; Levy, Andrew R; Larkin, Derek T

    2016-01-01

    This systematic literature review appraises critically the mediating variables of stereotype threat. A bibliographic search was conducted across electronic databases between 1995 and 2015. The search identified 45 experiments from 38 articles and 17 unique proposed mediators that were categorized into affective/subjective (n = 6), cognitive (n = 7) and motivational mechanisms (n = 4). Empirical support was accrued for mediators such as anxiety, negative thinking, and mind-wandering, which are suggested to co-opt working memory resources under stereotype threat. Other research points to the assertion that stereotype threatened individuals may be motivated to disconfirm negative stereotypes, which can have a paradoxical effect of hampering performance. However, stereotype threat appears to affect diverse social groups in different ways, with no one mediator providing unequivocal empirical support. Underpinned by the multi-threat framework, the discussion postulates that different forms of stereotype threat may be mediated by distinct mechanisms.

  6. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: Database name: DMPD; alternative name: Dynamic Macrophage Pathway CSML Database; DOI: 10.18908/lsdba.nbdc00558-000. Creator: Masao Naga... (University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639; Tel: +81-3-5449-5615; FAX: +83-3-5449-5442). Organism taxonomy: Mammalia (Taxonomy ID: 40674). Database description: DMPD collects...

  7. MR efficiency using automated MRI-desktop eProtocol

    Science.gov (United States)

    Gao, Fei; Xu, Yanzhe; Panda, Anshuman; Zhang, Min; Hanson, James; Su, Congzhe; Wu, Teresa; Pavlicek, William; James, Judy R.

    2017-03-01

    MRI protocols are instruction sheets that radiology technologists use in routine clinical practice for guidance (e.g., slice position, acquisition parameters, etc.). At Mayo Clinic Arizona (MCA), there are over 900 MR protocols (ranging across neuro, body, cardiac, breast, etc.), which makes maintaining and updating the protocol instructions a labor-intensive effort. The task is even more challenging given different vendors (Siemens, GE, etc.). This is a universal problem faced by hospitals and medical research institutions. To increase the efficiency of MR practice, we designed and implemented a web-based platform (eProtocol) to automate the management of MRI protocols. It is built upon a database that automatically extracts protocol information from DICOM-compliant images and provides a user-friendly interface for technologists to create, edit and update protocols. Advanced operations such as protocol migration from scanner to scanner and the capability to upload multimedia content were also implemented. To the best of our knowledge, eProtocol is the first automated MR protocol management tool used clinically. It is expected that this platform will significantly improve radiology operations efficiency, including better image quality and exam consistency, fewer repeat examinations and fewer acquisition errors. The protocol instructions will be readily available to technologists during scans. In addition, this web-based platform can be extended to other imaging modalities such as CT, mammography, and interventional radiology, and to different vendors, for imaging protocol management.
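
    The extraction step that such a platform automates can be approximated with pydicom, which reads acquisition parameters directly from DICOM headers. The field selection below is illustrative; the actual eProtocol schema is not described in the abstract.

      import pydicom

      def extract_protocol(dicom_path):
          """Read protocol-level acquisition parameters from a DICOM header.

          Reads tags only (no pixel data). The chosen fields are
          illustrative; the real eProtocol schema is not public.
          """
          ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
          return {
              "protocol_name": getattr(ds, "ProtocolName", None),
              "manufacturer": getattr(ds, "Manufacturer", None),
              "tr_ms": getattr(ds, "RepetitionTime", None),
              "te_ms": getattr(ds, "EchoTime", None),
              "slice_thickness_mm": getattr(ds, "SliceThickness", None),
          }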

  8. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available fRNAdb Database Dump - Data detail: Data name: Database Dump; DOI: 10.18908/lsdba.nbdc00452-002; format: tab-separated text. Data file name: Database_Dump; file URL: ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump; file size: 673 MB. Number of data entries: 4 files.

  9. Automating the Small Library.

    Science.gov (United States)

    Skapura, Robert

    1987-01-01

    Discusses the use of microcomputers for automating school libraries, both for entire systems and for specific library tasks. Highlights include available library management software, newsletters that evaluate software, constructing an evaluation matrix, steps to consider in library automation, and a brief discussion of computerized card catalogs.…

  10. GUI test automation for Qt application

    OpenAIRE

    Wang, Lei

    2015-01-01

    GUI test automation is a popular and interesting subject in the testing industry. Many companies plan to start test automation projects in order to implement efficient, less expensive software testing. However, there are challenges for testing teams that lack experience in GUI test automation. Many GUI test automation projects have ended in failure due to mistakes made during the early stages of the project. The major work of this thesis is to find a solution to the challenges of e...

  11. Automation synthesis modules review

    International Nuclear Information System (INIS)

    Boschi, S.; Lodi, F.; Malizia, C.; Cicoria, G.; Marengo, M.

    2013-01-01

    The introduction of 68Ga-labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived 68Ge/68Ga generator has been at the basis of the development of 68Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues, and careful radioprotection of the operators have pushed for extensive automation of the production process. The development of automated systems for 68Ga radiochemistry, different engineering and software strategies, and post-processing of the eluate are discussed, along with the impact of regulations on automation. - Highlights: ► Generator availability and robust chemistry drove the wide diffusion of 68Ga radiopharmaceuticals. ► Different technological approaches for 68Ga radiopharmaceuticals are discussed. ► Generator eluate post-processing and the evolution to cassette-based systems were the major issues in automation. ► The impact of regulations on technological development is also considered.

  12. 76 FR 69755 - National Customs Automation Program Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2011-11-09

    ... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection National Customs Automation... announces U.S. Customs and Border Protection's (CBP's) plan to conduct a National Customs Automation Program... conveyance transporting the cargo to the United States. This data will fulfill merchandise entry requirements...

  13. I trust it, but I don't know why: effects of implicit attitudes toward automation on trust in an automated system.

    Science.gov (United States)

    Merritt, Stephanie M; Heimbaugh, Heather; LaChapell, Jennifer; Lee, Deborah

    2013-06-01

    This study is the first to examine the influence of implicit attitudes toward automation on users' trust in automation. Past empirical work has examined explicit (conscious) influences on user level of trust in automation but has not yet measured implicit influences. We examine concurrent effects of explicit propensity to trust machines and implicit attitudes toward automation on trust in an automated system. We examine differential impacts of each under varying automation performance conditions (clearly good, ambiguous, clearly poor). Participants completed both a self-report measure of propensity to trust and an Implicit Association Test measuring implicit attitude toward automation, then performed an X-ray screening task. Automation performance was manipulated within-subjects by varying the number and obviousness of errors. Explicit propensity to trust and implicit attitude toward automation did not significantly correlate. When the automation's performance was ambiguous, implicit attitude significantly affected automation trust, and its relationship with propensity to trust was additive: Increments in either were related to increases in trust. When errors were obvious, a significant interaction between the implicit and explicit measures was found, with those high in both having higher trust. Implicit attitudes have important implications for automation trust. Users may not be able to accurately report why they experience a given level of trust. To understand why users trust or fail to trust automation, measurements of implicit and explicit predictors may be necessary. Furthermore, implicit attitude toward automation might be used as a lever to effectively calibrate trust.

  14. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
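
    As a flavor of what Challenge entrants build on, a crude first step of heart sound segmentation (band-pass filtering, envelope extraction and peak picking) is sketched below. It is far simpler than the reference segmenters distributed with the Challenge, such as hidden semi-Markov model based methods, and is shown only to illustrate the signal chain.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert, find_peaks

      def pcg_envelope_peaks(pcg, fs):
          """Band-pass a PCG, take its Hilbert envelope, pick S1/S2 candidates.

          The 25-400 Hz band and the 200 ms minimum peak spacing are
          illustrative choices for a crude first pass.
          """
          b, a = butter(4, [25 / (fs / 2), 400 / (fs / 2)], btype="band")
          envelope = np.abs(hilbert(filtfilt(b, a, pcg)))
          peaks, _ = find_peaks(envelope, distance=int(0.2 * fs),
                                height=envelope.mean())
          return peaks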

  15. Design and implementation of relational databases relevant to the diverse needs of a tuberculosis case contact study in the Gambia.

    Science.gov (United States)

    Jeffries, D J; Donkor, S; Brookes, R H; Fox, A; Hill, P C

    2004-09-01

    The data requirements of a large multidisciplinary tuberculosis case contact study are complex. We describe an ACCESS-based relational database system that meets our rigorous requirements for data entry and validation, while being user-friendly, flexible, exportable, and easy to install on a network or stand alone system. This includes the development of a double data entry package for epidemiology and laboratory data, semi-automated entry of ELISPOT data directly from the plate reader, and a suite of new programmes for the manipulation and integration of flow cytometry data. The double entered epidemiology and immunology databases are combined into a separate database, providing a near-real-time analysis of immuno-epidemiological data, allowing important trends to be identified early and major decisions about the study to be made and acted on. This dynamic data management model is portable and can easily be applied to other studies.
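
    The double data entry check at the core of such a system reduces to comparing two independently keyed tables field by field. A minimal sketch with pandas is shown below; the study's actual implementation is in ACCESS, and the column and key names here are hypothetical.

      import pandas as pd

      def double_entry_discrepancies(first, second, key="subject_id"):
          """List field-level disagreements between two independent entries.

          `first` and `second` share the same columns; `key` identifies a
          record (column names here are hypothetical). Note that two
          missing values compare as unequal; fill placeholders first.
          """
          merged = first.merge(second, on=key, suffixes=("_1", "_2"))
          rows = []
          for col in first.columns:
              if col == key:
                  continue
              mismatch = merged[merged[f"{col}_1"] != merged[f"{col}_2"]]
              for _, r in mismatch.iterrows():
                  rows.append({key: r[key], "field": col,
                               "entry_1": r[f"{col}_1"], "entry_2": r[f"{col}_2"]})
          return pd.DataFrame(rows)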

  16. DOT Online Database

    Science.gov (United States)

    Searchable full-text document databases of Advisory Circulars and data collection and distribution policies; the document database website is provided by MicroSearch.

  17. Robotics/Automated Systems Technicians.

    Science.gov (United States)

    Doty, Charles R.

    Major resources exist that can be used to develop or upgrade programs in community colleges and technical institutes that educate robotics/automated systems technicians. The first category of resources is Economic, Social, and Education Issues. The Office of Technology Assessment (OTA) report, "Automation and the Workplace," presents analyses of…

  18. A fully automated Drosophila olfactory classical conditioning and testing system for behavioral learning and memory assessment.

    Science.gov (United States)

    Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L; Page, Terry L; Bhuva, Bharat; Broadie, Kendal

    2016-03-01

    Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24h) are comparable to traditional manual experiments, while minimizing experimenter involvement. The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ∼$500US, making it affordable to a wide range of investigators. This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Evaluation of an Automated Keywording System.

    Science.gov (United States)

    Malone, Linda C.; And Others

    1990-01-01

    Discussion of automated indexing techniques focuses on ways to statistically document improvements in the development of an automated keywording system over time. The system developed by the Joint Chiefs of Staff to automate the storage, categorization, and retrieval of information from military exercises is explained, and performance measures are…

  20. Future Control and Automation : Proceedings of the 2nd International Conference on Future Control and Automation

    CERN Document Server

    2012-01-01

    This volume, Future Control and Automation - Volume 2, includes the best papers from the 2012 2nd International Conference on Future Control and Automation (ICFCA 2012), held on July 1-2, 2012 in Changsha, China. Future control and automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. This volume can be divided into six sessions on the basis of the classification of the manuscripts considered, listed as follows: Mathematical Modeling, Analysis and Computation, Control Engineering, Reliable Networks Design, Vehicular Communications and Networking, Automation and Mechatronics.

  1. Automation for a base station stability testing

    OpenAIRE

    Punnek, Elvis

    2016-01-01

    This Bachelor's thesis was commissioned by Oy LM Ericsson Ab Oulu. Its aim was to help investigate and create a test automation solution for stability testing of the LTE base station. The main objective was to create test automation for a predefined test set. This test automation solution had to be created for specific environments and equipment. The work included creating automation for the test cases and adding them to daily test automation jobs. The key factor...

  2. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  3. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  4. CmMDb: a versatile database for Cucumis melo microsatellite markers and other horticulture crop research.

    Science.gov (United States)

    Bhawna; Chaduvula, Pavan K; Bonthala, Venkata S; Manjusha, Verma; Siddiq, Ebrahimali A; Polumetla, Ananda K; Prasad, Gajula M N V

    2015-01-01

    Cucumis melo L., which belongs to the Cucurbitaceae family, ranks among the highest-valued horticulture crops cultivated across the globe. Besides its economic and medicinal importance, Cucumis melo L. is a valuable resource and model system for evolutionary studies of the cucurbit family. However, only a very limited number of molecular markers have been reported for Cucumis melo L. so far, which limits the pace of functional genomic research in melon and other similar horticulture crops. We developed the first whole-genome-based microsatellite DNA marker database of Cucumis melo L. and a comprehensive web resource that aids in variety identification and physical mapping of the Cucurbitaceae family. The Cucumis melo L. microsatellite database (CmMDb: http://65.181.125.102/cmmdb2/index.html) encompasses 39,072 SSR markers along with their motif repeat, motif length, motif sequence, marker ID, motif type and chromosomal locations. The database features a novel automated primer-design facility to meet the needs of wet-lab researchers. CmMDb is a freely available web resource that enables researchers to select the most appropriate markers for marker-assisted selection in melons and to improve breeding strategies.
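
    SSR mining of the kind that populates such a database can be sketched with a regular expression that finds perfect tandem repeats. The repeat-count and motif-length thresholds below are illustrative, not CmMDb's published mining criteria.

      import re

      def find_ssrs(seq, min_repeats=5, max_motif=6):
          """Report perfect tandem repeats (SSRs) in a DNA sequence."""
          seq, hits = seq.upper(), []
          for k in range(1, max_motif + 1):
              pattern = re.compile(rf"([ACGT]{{{k}}})\1{{{min_repeats - 1},}}")
              for m in pattern.finditer(seq):
                  hits.append({"motif": m.group(1),
                               "repeats": len(m.group(0)) // k,
                               "start": m.start()})
          return hits

      print(find_ssrs("TTTTTTGATCGATCGATCGATCGATCGA"))
      # reports the T mononucleotide run and the GATC tetranucleotide repeat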

  5. Considerations and benefits of implementing an online database tool for business continuity.

    Science.gov (United States)

    Mackinnon, Susanne; Pinette, Jennifer

    2016-01-01

    In today's challenging climate of ongoing fiscal restraint, limited resources and complex organisational structures, there is an acute need to investigate opportunities to facilitate enhanced delivery of business continuity programmes while maintaining or increasing acceptable levels of service delivery. In 2013, Health Emergency Management British Columbia (HEMBC), responsible for emergency management and business continuity activities across British Columbia's health sector, transitioned its business continuity programme from a manual to an automated process with the development of a customised online database, known as the Health Emergency Management Assessment Tool (HEMAT). Key benefits to date include a more efficient business continuity input process, immediate situational awareness for use in emergency response and/or advanced planning, and streamlined analyses for the generation of reports.

  6. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: Database name: eSOL. Creator affiliation: The Research and Development of Biological Databases Project, National Institute of Genetics; 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501, Japan; Tel.: +81-45-924-5785. Database classification: Protein sequence databases - Protein properties. Organism: Escherichia coli (Taxonomy ID: 562). Reference: Proc Natl Acad Sci U S A. 2009 Mar 17;106(11):4201-6.

  7. Coupling computer-interpretable guidelines with a drug-database through a web-based system – The PRESGUID project

    Directory of Open Access Journals (Sweden)

    Fieschi Marius

    2004-03-01

    Full Text Available Abstract Background Clinical Practice Guidelines (CPGs) available today are not extensively used due to lack of proper integration into clinical settings, knowledge-related information resources, and lack of decision support at the point of care in a particular clinical context. Objective The PRESGUID project (PREScription and GUIDelines) aims to improve the assistance provided by guidelines. The project proposes an online service enabling physicians to consult computerized CPGs linked to drug databases for easier integration into the healthcare process. Methods Computable CPGs are structured as decision trees and coded in XML format. Recommendations related to drug classes are tagged with ATC codes. We use a mapping module to enhance the coupling of computerized guidelines with a drug database, which contains detailed information about each usable specific medication. In this way, therapeutic recommendations are backed up with current and up-to-date information from the database. Results Two authoritative CPGs, originally diffused as static textual documents, have been implemented to validate the computerization process and to illustrate the usefulness of the resulting automated CPGs and their coupling with a drug database. We discuss the advantages of this approach for practitioners and the implications for both guideline developers and drug database providers. Other CPGs will be implemented and evaluated in real conditions by clinicians working in different health institutions.
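
    The mechanics of coupling an XML decision tree to a drug database through ATC codes can be sketched as follows. The guideline fragment, schema and drug table are toy stand-ins invented for the example; the real PRESGUID formats are not reproduced here.

      import xml.etree.ElementTree as ET

      # Toy guideline fragment in the spirit of a PRESGUID decision tree;
      # the real XML schema is not reproduced here.
      GUIDELINE = """
      <node question="Systolic BP over 140 mmHg?">
        <yes><recommend atc="C09AA">ACE inhibitor</recommend></yes>
        <no><recommend atc="">lifestyle advice only</recommend></no>
      </node>
      """

      DRUG_DB = {"C09AA": ["enalapril 20 mg", "ramipril 5 mg"]}  # illustrative

      def traverse(node, answers):
          """Walk the tree with answers: question text -> 'yes' or 'no'."""
          while node.tag == "node":
              branch = node.find(answers[node.get("question")])
              node = list(branch)[0]
          return node.text, DRUG_DB.get(node.get("atc"), [])

      root = ET.fromstring(GUIDELINE)
      print(traverse(root, {"Systolic BP over 140 mmHg?": "yes"}))
      # ('ACE inhibitor', ['enalapril 20 mg', 'ramipril 5 mg'])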

  8. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the development of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  9. Automation System Products and Research

    OpenAIRE

    Rintala, Mikko; Sormunen, Jussi; Kuisma, Petri; Rahkala, Matti

    2014-01-01

    Automation systems are used in most buildings nowadays. In the past they were mainly used in industry to control and monitor critical systems. During the past few decades automation systems have become more common and are today used in everything from big industrial solutions to the homes of private customers. With the growing need for ecological and cost-efficient management systems, home and building automation systems are becoming a standard way of controlling lighting, ventilation, heating etc. Auto...

  10. Guidelines for Automation Project Execution

    OpenAIRE

    Takkinen, Heidi

    2011-01-01

    The purpose of this Master's thesis was to create instructions for executing an automation project. Sarlin Oy Ab needed directions on how to execute an automation project. Sarlin is starting up a new business area offering total project solutions for customers. Sarlin focuses on small and minor automation projects in domestic markets. The thesis presents issues related to project execution, starting from the theory of the project through to its kick-off and termination. Site work is one importan...

  11. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore, the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  12. Improving customs’ border control by creating a reference database of cargo inspection X-ray images

    Directory of Open Access Journals (Sweden)

    Selina Kolokytha

    2017-04-01

    Full Text Available Countries' effective and efficient border security is becoming increasingly important in today's global world of economy and trade. To ensure these, customs organisations are responsible for preventing the transportation of illicit goods across borders, particularly upon entry. The predicament of customs administrations will soon be aggravated by both the increase in global trade and the trend towards 100% screening. It is therefore a goal to advance inspection methods to enable successful cargo handling, the scope towards which this research was aimed. This work was carried out as part of the project ACXIS "Automated Comparison of X-ray Images for cargo Scanning", a European research project within the Seventh Framework Programme answering the call SEC-2012.3.4-1: « Research on Automated Comparison of X-ray Images for cargo Scanning », to improve the process with the largest impact on trade flow: the procedures of freight X-ray scanning. As such, this project focused on implementing a manufacturer-independent reference database for X-ray images of illicit and non-illicit cargo, developing procedures and algorithms to make X-ray images from different cargo scanners uniform, and developing a training simulator for inspection officers and a toolbox comprising several assisted and automated identification techniques for potentially illicit cargo.
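
    One simple way to make images from different scanners comparable is histogram matching against a reference scanner's gray-level distribution, as sketched below; the project's actual uniformization procedures are more involved, and the file names are placeholders.

      from skimage import exposure, io

      # File names are placeholders for images from two different scanners.
      image = io.imread("scan_vendor_a.png")
      reference = io.imread("scan_reference_vendor.png")

      # Remap the gray levels of `image` onto the reference distribution.
      normalized = exposure.match_histograms(image, reference)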

  13. Organizational changes and automation: Towards a customer-oriented automation: Part 3

    International Nuclear Information System (INIS)

    Van Gelder, J.W.

    1994-01-01

    Automation offers great opportunities in the efforts of energy utilities in the Netherlands to reorganize into more customer-oriented businesses. However, automation in itself is not enough. First, the organizational structure has to be changed considerably. Various energy utilities have already started on this. The restructuring principle is the same everywhere, but the way it is implemented differs widely. In this article attention is paid to the necessity of realizing an integrated computerized system, which, however, is not feasible at the moment. The second-best alternative is to use various computerized systems capable of two-way data exchange. Two viable approaches are discussed: (1) one operating system on which all automated systems within a company run, or (2) selective linking of systems on the basis of the required speed of information exchange. Option (2) offers more freedom in selecting systems. 2 figs

  14. Estimating Regional Mass Balance of Himalayan Glaciers Using Hexagon Imagery: An Automated Approach

    Science.gov (United States)

    Maurer, J. M.; Rupper, S.

    2013-12-01

    Currently there is much uncertainty regarding the present and future state of Himalayan glaciers, which supply meltwater for river systems vital to more than 1.4 billion people living throughout Asia. Previous assessments of regional glacier mass balance in the Himalayas using various remote sensing and field-based methods give inconsistent results, and most assessments are over relatively short (e.g., single decade) timescales. This study aims to quantify multi-decadal changes in volume and extent of Himalayan glaciers through efficient use of the large database of declassified 1970-80s era Hexagon stereo imagery. Automation of the DEM extraction process provides an effective workflow for many images to be processed and glacier elevation changes quantified with minimal user input. The tedious procedure of manual ground control point selection necessary for block-bundle adjustment (as ephemeral data is not available for the declassified images) is automated using the Maximally Stable Extremal Regions algorithm, which matches image elements between raw Hexagon images and georeferenced Landsat 15 meter panchromatic images. Additional automated Hexagon DEM processing, co-registration, and bias correction allow for direct comparison with modern ASTER and SRTM elevation data, thus quantifying glacier elevation and area changes over several decades across largely inaccessible mountainous regions. As consistent methodology is used for all glaciers, results will likely reveal significant spatial and temporal patterns in regional ice mass balance. Ultimately, these findings could have important implications for future water resource management in light of environmental change.
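
    The region-matching idea can be sketched with OpenCV, whose MSER detector returns stable regions whose centroids can serve as candidate tie points between a raw Hexagon frame and a georeferenced Landsat image. This is a minimal sketch of the detection step only; descriptor matching and outlier rejection are omitted.

      import cv2

      def candidate_tie_points(image_path, max_regions=500):
          """Centroids of Maximally Stable Extremal Regions in an image.

          Running this on a raw Hexagon frame and on a georeferenced
          Landsat panchromatic image yields point sets that can then be
          matched to obtain ground control points.
          """
          gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          regions, _bboxes = cv2.MSER_create().detectRegions(gray)
          regions = sorted(regions, key=len, reverse=True)[:max_regions]
          return [r.mean(axis=0) for r in regions]  # (x, y) centroids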

  15. You're a What? Automation Technician

    Science.gov (United States)

    Mullins, John

    2010-01-01

    Many people think of automation as laborsaving technology, but it sure keeps Jim Duffell busy. Defined simply, automation is a technique for making a device run or a process occur with minimal direct human intervention. But the functions and technologies involved in automated manufacturing are complex. Nearly all functions, from orders coming in…

  16. Does Automated Feedback Improve Writing Quality?

    Science.gov (United States)

    Wilson, Joshua; Olinghouse, Natalie G.; Andrada, Gilbert N.

    2014-01-01

    The current study examines data from students in grades 4-8 who participated in a statewide computer-based benchmark writing assessment that featured automated essay scoring and automated feedback. We examined whether the use of automated feedback was associated with gains in writing quality across revisions to an essay, and with transfer effects…

  17. System reliability, performance and trust in adaptable automation.

    Science.gov (United States)

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. Copyright © 2015. Published by Elsevier Ltd.

  18. Automated estimation of defects in magnetographic defectoscopy. 1. Automated magnetographic flaw detectors

    International Nuclear Information System (INIS)

    Mikhajlov, S.P.; Vaulin, S.L.; Shcherbinin, V.E.; Shur, M.L.

    1993-01-01

    Specific features and possible functions of equipment for the automated estimation of elongated continuity defects in samples with a plane surface in magnetographic defectoscopy are discussed. Two models of automated magnetographic flaw detectors, one with a built-in microcomputer and one in the form of a computer attachment, are described. Directions for further research and development are discussed. 35 refs., 6 figs.

  19. Automation and hypermedia technology applications

    Science.gov (United States)

    Jupin, Joseph H.; Ng, Edward W.; James, Mark L.

    1993-01-01

    This paper represents a progress report on HyLite (Hypermedia Library technology): a research and development activity to produce a versatile system as part of NASA's technology thrusts in automation, information sciences, and communications. HyLite can be used as a system or tool to facilitate the creation and maintenance of large distributed electronic libraries. The contents of such a library may be software components, hardware parts or designs, scientific data sets or databases, configuration management information, etc. Proliferation of computer use has made the diversity and quantity of information too large for any single user to sort, process, and utilize effectively. In response to this information deluge, we have created HyLite to enable the user to process relevant information into a more efficient organization for presentation, retrieval, and readability. To accomplish this end, we have incorporated various AI techniques into the HyLite hypermedia engine to facilitate parameters and properties of the system. The proposed techniques include intelligent searching tools for the libraries, intelligent retrievals, and navigational assistance based on user histories. HyLite itself is based on an earlier project, the Encyclopedia of Software Components (ESC) which used hypermedia to facilitate and encourage software reuse.

  20. Benchmarking of the 2010 BioCreative Challenge III text-mining competition by the BioGRID and MINT interaction databases

    Directory of Open Access Journals (Sweden)

    Cesareni Gianni

    2011-10-01

    Full Text Available Abstract Background The vast amount of data published in the primary biomedical literature represents a challenge for the automated extraction and codification of individual data elements. Biological databases that rely solely on manual extraction by expert curators are unable to comprehensively annotate the information dispersed across the entire biomedical literature. The development of efficient tools based on natural language processing (NLP) systems is essential for the selection of relevant publications, identification of data attributes and partially automated annotation. One of the tasks of the BioCreative 2010 Challenge III was devoted to the evaluation of NLP systems developed to identify articles for curation and extraction of protein-protein interaction (PPI) data. Results The BioCreative 2010 competition addressed three tasks: gene normalization, article classification and interaction method identification. The BioGRID and MINT protein interaction databases both participated in the generation of the test publication set for gene normalization, annotated the development and test sets for article classification, and curated the test set for interaction method classification. These test datasets served as a gold standard for the evaluation of data extraction algorithms. Conclusion The development of efficient tools for extraction of PPI data is a necessary step to achieve full curation of the biomedical literature. NLP systems can in the first instance facilitate expert curation by refining the list of candidate publications that contain PPI data; more ambitiously, NLP approaches may be able to directly extract relevant information from full-text articles for rapid inspection by expert curators. Close collaboration between biological databases and NLP systems developers will continue to facilitate the long-term objectives of both disciplines.
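
    The article-triage task described above is, at its simplest, a binary text classification problem. The sketch below uses scikit-learn with toy abstracts invented for the example; competitive systems train on the Challenge's annotated corpus with far richer features.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy abstracts; real entries train on the Challenge's annotated corpus.
      abstracts = [
          "Yeast two-hybrid screening shows BRCA1 binds BARD1 in vivo.",
          "Co-immunoprecipitation confirms an interaction between p53 and MDM2.",
          "We review the epidemiology of influenza outbreaks in Europe.",
          "A survey of hospital staffing levels during seasonal epidemics.",
      ]
      has_ppi = [1, 1, 0, 0]  # 1 = article reports protein-protein interactions

      triage = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                             LogisticRegression())
      triage.fit(abstracts, has_ppi)
      print(triage.predict(["Pull-down assays reveal RAD51 binding to BRCA2."]))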