WorldWideScience

Sample records for composer database software

  1. Gene composer: database software for protein construct design, codon engineering, and gene synthesis.

    Science.gov (United States)

    Lorimer, Don; Raymond, Amy; Walchli, John; Mixon, Mark; Barrow, Adrienne; Wallace, Ellen; Grice, Rena; Burgin, Alex; Stewart, Lance

    2009-04-21

    To improve efficiency in high throughput protein structure determination, we have developed a database software package, Gene Composer, which facilitates the information-rich design of protein constructs and their codon engineered synthetic gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bio-informatics steps used in modern structure guided protein engineering and synthetic gene engineering. An interactive Alignment Viewer allows the researcher to simultaneously visualize sequence conservation in the context of known protein secondary structure, ligand contacts, water contacts, crystal contacts, B-factors, solvent accessible area, residue property type and several other useful property views. The Construct Design Module enables the facile design of novel protein constructs with altered N- and C-termini, internal insertions or deletions, point mutations, and desired affinity tags. The modifications can be combined and permuted into multiple protein constructs, and then virtually cloned in silico into defined expression vectors. The Gene Design Module uses a protein-to-gene algorithm that automates the back-translation of a protein amino acid sequence into a codon engineered nucleic acid gene sequence according to a selected codon usage table with minimal codon usage threshold, defined G:C% content, and desired sequence features achieved through synonymous codon selection that is optimized for the intended expression system. The gene-to-oligo algorithm of the Gene Design Module plans out all of the required overlapping oligonucleotides and mutagenic primers needed to synthesize the desired gene constructs by PCR, and for physically cloning them into selected vectors by the most popular subcloning strategies. We present a complete description of Gene Composer functionality, and an efficient PCR-based synthetic gene assembly procedure with mis-match specific endonuclease
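
    The abstract above describes a protein-to-gene back-translation driven by a codon usage table with a minimal usage threshold. As a rough sketch of that idea only (Gene Composer's actual algorithm is not given in the abstract, and the tiny codon-usage table below is invented), a Python version might look like:

```python
# Sketch of a protein-to-gene back-translation step of the kind the
# Gene Design Module automates: for each amino acid, choose the most
# frequent synonymous codon whose usage exceeds a minimal threshold.
# The tiny codon-usage table below is illustrative, not a real organism's.

CODON_USAGE = {  # amino acid -> {codon: relative usage}
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.58, "TTC": 0.42},
    "*": {"TAA": 0.61, "TGA": 0.30, "TAG": 0.09},
}

def back_translate(protein, usage=CODON_USAGE, min_usage=0.10):
    """Return a DNA sequence encoding `protein`, avoiding rare codons."""
    gene = []
    for aa in protein:
        candidates = {c: f for c, f in usage[aa].items() if f >= min_usage}
        if not candidates:
            raise ValueError(f"no codon above threshold for {aa!r}")
        gene.append(max(candidates, key=candidates.get))
    return "".join(gene)

if __name__ == "__main__":
    print(back_translate("MKF*"))  # -> ATGAAATTTTAA
```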

  2. Gene Composer: database software for protein construct design, codon engineering, and gene synthesis

    Directory of Open Access Journals (Sweden)

    Mixon Mark

    2009-04-01

    Full Text Available Abstract Background To improve efficiency in high throughput protein structure determination, we have developed a database software package, Gene Composer, which facilitates the information-rich design of protein constructs and their codon engineered synthetic gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bio-informatics steps used in modern structure guided protein engineering and synthetic gene engineering. Results An interactive Alignment Viewer allows the researcher to simultaneously visualize sequence conservation in the context of known protein secondary structure, ligand contacts, water contacts, crystal contacts, B-factors, solvent accessible area, residue property type and several other useful property views. The Construct Design Module enables the facile design of novel protein constructs with altered N- and C-termini, internal insertions or deletions, point mutations, and desired affinity tags. The modifications can be combined and permuted into multiple protein constructs, and then virtually cloned in silico into defined expression vectors. The Gene Design Module uses a protein-to-gene algorithm that automates the back-translation of a protein amino acid sequence into a codon engineered nucleic acid gene sequence according to a selected codon usage table with minimal codon usage threshold, defined G:C% content, and desired sequence features achieved through synonymous codon selection that is optimized for the intended expression system. The gene-to-oligo algorithm of the Gene Design Module plans out all of the required overlapping oligonucleotides and mutagenic primers needed to synthesize the desired gene constructs by PCR, and for physically cloning them into selected vectors by the most popular subcloning strategies. Conclusion We present a complete description of Gene Composer functionality, and an efficient PCR-based synthetic gene

  3. Free software and open source databases

    Directory of Open Access Journals (Sweden)

    Napoleon Alexandru SIRITEANU

    2006-01-01

    Full Text Available The emergence of free/open source software (FS/OSS) enterprises seeks to push software development out of the academic stream into the commercial mainstream, and as a result, end-user applications such as open source database management systems (PostgreSQL, MySQL, Firebird) are becoming more popular. Companies like Sybase, Oracle, Sun and IBM are increasingly implementing open source strategies and porting programs/applications to the Linux environment. Open source software is redefining the software industry in general and database development in particular.

  4. THE ANALYTICAL HIERARCHY PROCESS METHOD: A DATABASE SOFTWARE RECOMMENDER SYSTEM

    Directory of Open Access Journals (Sweden)

    Doni Purnama Alam Syah

    2014-09-01

    Full Text Available Abstract - The database software selection recommender system is an application that can be used to search for alternative database software selection strategies using the analytical hierarchy process (AHP) method. Such a recommender system is needed by organizations with sufficiently large data-processing operations, such as the Bina Sarana Informatika IT Bureau: the high cost of investment in Information Technology (IT) provision makes the bureau careful in selecting database software. This study focuses on a database software selection system using the AHP method, with a case study of the Bina Sarana Informatika IT Bureau and administrators as the observation unit. The study found two (2) main criteria, technology and user, with MySQL, Oracle and SQL Server as the alternative strategies. Testing of the recommender system ranked MySQL as the top priority for database software selection with a weighting of 41%, followed by SQL Server at 39% and Oracle at 21%. The end result is that the recommender system allows the Bina Sarana Informatika IT Bureau to define alternative strategies before selecting database software, making the selection more effective and efficient.
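
    The core of the study is the analytical hierarchy process. A minimal sketch of the AHP priority-weight step in Python follows; the pairwise-comparison values are invented for illustration and are not the judgments collected in the study:

```python
import numpy as np

# Toy AHP step: derive priority weights for three database alternatives
# from a pairwise-comparison matrix using the principal eigenvector.
# The comparison values are invented, not the study's collected data.

alternatives = ["MySQL", "SQL Server", "Oracle"]
A = np.array([
    [1.0, 1.0, 2.0],   # MySQL vs (MySQL, SQL Server, Oracle)
    [1.0, 1.0, 2.0],   # SQL Server
    [0.5, 0.5, 1.0],   # Oracle
])

# Principal eigenvector of the comparison matrix, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

for name, w in sorted(zip(alternatives, weights), key=lambda x: -x[1]):
    print(f"{name}: {w:.2%}")
```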

  5. Gene Composer in a structural genomics environment

    International Nuclear Information System (INIS)

    Lorimer, Don; Raymond, Amy; Mixon, Mark; Burgin, Alex; Staker, Bart; Stewart, Lance

    2011-01-01

    For structural biology applications, protein-construct engineering is guided by comparative sequence analysis and structural information, which allow the researcher to better define domain boundaries for terminal deletions and nonconserved regions for surface mutants. A database software application called Gene Composer has been developed to facilitate construct design. The structural genomics effort at the Seattle Structural Genomics Center for Infectious Disease (SSGCID) requires the manipulation of large numbers of amino-acid sequences and the underlying DNA sequences which are to be cloned into expression vectors. To improve efficiency in high-throughput protein structure determination, a database software package, Gene Composer, has been developed which facilitates the information-rich design of protein constructs and their underlying gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bioinformatics steps used in modern structure-guided protein engineering and synthetic gene engineering. An example of the structure determination of H1N1 RNA-dependent RNA polymerase PB2 subunit is given

  6. Application of database management software to probabilistic risk assessment calculations

    International Nuclear Information System (INIS)

    Wyss, G.D.

    1993-01-01

    Probabilistic risk assessment (PRA) calculations require the management and processing of large amounts of information. For example, a commercial nuclear power plant PRA study makes use of plant blueprints and system schematics, formal plant safety analysis reports, incident reports, letters, memos, handwritten notes from plant visits, and even the analyst's "engineering judgment". This information must be documented and cross-referenced in order to properly execute and substantiate the models used in a PRA study. The quantitative data normally fall into two general categories. The first category is composed of raw data accumulated from equipment testing and operational experience. These data describe the equipment, its service or testing conditions, its failure mode, and its performance history. The second category is composed of statistical distributions. These distributions can represent probabilities, frequencies, or values of important parameters that are not time-related. Probability and frequency distributions are often obtained by fitting raw data to an appropriate statistical distribution. Database management software is used to store both types of data so that they can be readily queried, manipulated, and archived. This paper provides an overview of the information models used for storing PRA data and illustrates the implementation of these models using examples from current PRA software packages

  7. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Pronyaev, V.; Zerkin, V.

    2001-01-01

    Full text: The Nuclear Data Section of the IAEA disseminates data to NDS users through the Internet or on CD-ROMs and diskettes. The OSU Web-server on DEC Alpha with OpenVMS and Oracle/DEC DBMS provides, via CGI scripts and FORTRAN retrieval programs, access to the main nuclear databases supported by the networks of Nuclear Reactions Data Centres and Nuclear Structure and Decay Data Centres (CINDA, EXFOR, ENDF, NSR, ENSDF). For Web access to data from other libraries and files, hyper-links to the files stored in ASCII text or other formats are used. Databases on CD-ROM are usually provided with some retrieval system. They are distributed in run-time mode and comply with all license requirements for software used in their development. Although major development work is now done on PCs with MS-Windows and Linux, the NDS may not at present, due to some institutional conditions, use these platforms to organize Web access to the data. Starting at the end of 1999, the NDS, in co-operation with other data centres, began to work out a strategy for migrating the main network nuclear databases onto platforms other than DEC Alpha/OpenVMS/DBMS. Because the different co-operating centres have their own preferences for hardware and software, the requirement to provide maximum platform independence for nuclear databases is the most important and desirable feature. This requirement determined some standards for nuclear database software development. Taking into account the present state and future development, these standards can be formulated as follows: 1. All numerical data (experimental, evaluated, recommended values and their uncertainties) prepared for inclusion in the IAEA/NDS nuclear databases should be submitted in the form of ASCII text files and will be kept at the NDS as a master file. 2. Databases with complex structure should be submitted in the form of files with standard SQL statements describing all their components. All extensions of standard SQL

  8. Mining Bug Databases for Unidentified Software Vulnerabilities

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Jason Wright; Miles McQueen

    2012-06-01

    Identifying software vulnerabilities is becoming more important as critical and sensitive systems increasingly rely on complex software systems. It has been suggested in previous work that some bugs are only identified as vulnerabilities long after the bug has been made public. These vulnerabilities are known as hidden impact vulnerabilities. This paper discusses the feasibility and necessity of mining common publicly available bug databases for vulnerabilities that are yet to be identified. We present a bug database analysis of two well-known and frequently used software packages, namely the Linux kernel and MySQL. It is shown that for both Linux and MySQL, a significant portion of the vulnerabilities discovered in the period from January 2006 to April 2011 were hidden impact vulnerabilities. It is also shown that the percentage of hidden impact vulnerabilities has increased in the last two years for both software packages. We then propose an improved hidden impact vulnerability identification methodology based on text mining bug databases, and conclude by discussing a few potential problems faced by such a classifier.
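
    As a hedged illustration of the proposed text-mining direction (not the authors' actual classifier), a minimal bag-of-words classifier over bug-report text can be assembled with scikit-learn; the training examples below are invented:

```python
# Minimal sketch of the kind of text-mining classifier the paper proposes:
# TF-IDF features from bug-report text feeding a linear model that predicts
# whether a bug is security-relevant. Training examples are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "buffer overflow when parsing crafted packet",
    "use after free in socket teardown",
    "typo in help text of configure script",
    "crash due to null pointer dereference with malformed input",
    "documentation link is outdated",
    "integer overflow leads to heap corruption",
]
labels = [1, 1, 0, 1, 0, 1]  # 1 = potential hidden impact vulnerability

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

# Classify a new, unlabeled bug report.
print(model.predict(["stack overflow triggered by long filename"]))
```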

  9. Software for radioactive wastes database

    International Nuclear Information System (INIS)

    Souza, Eros Viggiano de; Reis, Luiz Carlos Alves

    1996-01-01

    A radioactive waste database was implemented at CDTN in 1991. The objectives are to register and retrieve information about wastes generated and received at the Centre in order to improve waste management. Since 1995, the database has been under review and software has been developed aiming at processing information in a graphical environment (Windows 95 and Windows NT), minimising the possibility of errors and making user access more friendly. It is also envisaged to ease the editing of graphics and reports and to make the database available to other CNEN institutes and even to external organizations. (author)

  10. AgdbNet – antigen sequence database software for bacterial typing

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2006-06-01

    Full Text Available Abstract Background Bacterial typing schemes based on the sequences of genes encoding surface antigens require databases that provide a uniform, curated, and widely accepted nomenclature of the variants identified. Due to the differences in typing schemes, imposed by the diversity of genes targeted, creating these databases has typically required the writing of one-off code to link the database to a web interface. Here we describe agdbNet, widely applicable web database software that facilitates simultaneous BLAST querying of multiple loci using either nucleotide or peptide sequences. Results Databases are described by XML files that are parsed by a Perl CGI script. Each database can have any number of loci, which may be defined by nucleotide and/or peptide sequences. The software is currently in use on at least five public databases for the typing of Neisseria meningitidis, Campylobacter jejuni and Streptococcus equi and can be set up to query internal isolate tables or suitably-configured external isolate databases, such as those used for multilocus sequence typing. The style of the resulting website can be fully configured by modifying stylesheets and through the use of customised header and footer files that surround the output of the script. Conclusion The software provides a rapid means of setting up customised Internet antigen sequence databases. The flexible configuration options enable typing schemes with differing requirements to be accommodated.
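
    A minimal sketch of the underlying operation, per-locus BLAST searches collected into one result set, is shown below. It assumes the NCBI BLAST+ `blastn` binary is on PATH and that per-locus databases (e.g. built with `makeblastdb`) already exist; the database names are hypothetical, and this is not agdbNet's Perl implementation:

```python
import subprocess

# Run one BLAST+ search per locus database and collect tabular hits.
# `porA_db` and `fetA_db` are hypothetical per-locus BLAST databases.

LOCI = ["porA_db", "fetA_db"]

def blast_all_loci(query_fasta, loci=LOCI):
    """Return {locus_db: [tab-separated hit fields, ...]} for a query FASTA."""
    hits = {}
    for db in loci:
        out = subprocess.run(
            ["blastn", "-query", query_fasta, "-db", db, "-outfmt", "6"],
            capture_output=True, text=True, check=True,
        ).stdout
        hits[db] = [line.split("\t") for line in out.splitlines()]
    return hits
```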

  11. MICA: desktop software for comprehensive searching of DNA databases

    Directory of Open Access Journals (Sweden)

    Glick Benjamin S

    2006-10-01

    Full Text Available Abstract Background Molecular biologists work with DNA databases that often include entire genomes. A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes. The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion MICA is suitable as a search engine for desktop DNA analysis software.
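
    For illustration, a dictionary-based toy version of k-mer indexing with exact-match lookup is sketched below; MICA's actual compact-array index and its handling of the full 15-letter degenerate alphabet are more sophisticated:

```python
from collections import defaultdict

# Dictionary-based illustration of k-mer indexing for exact matching.
# MICA itself uses compact arrays and supports degenerate queries; this
# sketch only indexes plain A/C/G/T k-mers.

K = 8

def build_index(seq, k=K):
    """Map every k-mer in `seq` to its list of start positions."""
    index = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return index

def find_exact(index, seq, query, k=K):
    """Return start positions where `query` (length >= k) occurs in `seq`."""
    return [i for i in index.get(query[:k], ())
            if seq[i:i + len(query)] == query]

if __name__ == "__main__":
    genome = "ACGTACGTTTGACGTACGTA"
    idx = build_index(genome)
    print(find_exact(idx, genome, "ACGTACGT"))  # -> [0, 11]
```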

  12. Software Engineering Laboratory (SEL) database organization and user's guide

    Science.gov (United States)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  13. High-energy physics software parallelization using database techniques

    International Nuclear Information System (INIS)

    Argante, E.; Van der Stok, P.D.V.; Willers, I.

    1997-01-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is for a large part transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI. (orig.)
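
    As a toy illustration of the database-transaction paradigm for coordinating parallel work (a sketch only, not CoCa itself), workers can claim units of work inside transactions so that the database, rather than hand-written message passing, resolves conflicts:

```python
import sqlite3

# Each worker claims one event at a time inside a transaction, so
# concurrency conflicts are handled by the database rather than by
# explicit message passing. Toy sketch: a real system would run many
# workers in parallel against a shared database.

db = sqlite3.connect("events.db")
db.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY,"
           " payload TEXT, state TEXT DEFAULT 'pending')")
db.executemany("INSERT INTO events (payload) VALUES (?)",
               [("event-%d" % i,) for i in range(5)])
db.commit()

def claim_next(conn):
    """Atomically claim one pending event; return its id or None."""
    with conn:  # wraps the statements in a transaction
        row = conn.execute("SELECT id FROM events WHERE state = 'pending'"
                           " LIMIT 1").fetchone()
        if row is None:
            return None
        conn.execute("UPDATE events SET state = 'claimed' WHERE id = ?", row)
        return row[0]

while (event_id := claim_next(db)) is not None:
    print("processing event", event_id)
```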

  14. Database Software Selection for the Egyptian National STI Network.

    Science.gov (United States)

    Slamecka, Vladimir

    The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical Information (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  15. Use of Software Tools in Teaching Relational Database Design.

    Science.gov (United States)

    McIntyre, D. R.; And Others

    1995-01-01

    Discusses the use of state-of-the-art software tools in teaching a graduate, advanced, relational database design course. Results indicated a positive student response to the prototype of expert systems software and a willingness to utilize this new technology both in their studies and in future work applications. (JKP)

  16. A role for relational databases in high energy physics software systems

    International Nuclear Information System (INIS)

    Lauer, R.; Slaughter, A.J.; Wolin, E.

    1987-01-01

    This paper presents the design and initial implementation of software which uses a relational database management system for storage and retrieval of real and Monte Carlo generated events from a charm and beauty spectrometer with a vertex detector. The purpose of the software is to graphically display and interactively manipulate the events, fit tracks and vertices and calculate physics quantities. The INGRES database forms the core of the system, while the DI3000 graphics package is used to plot the events. The paper introduces relational database concepts and their applicability to high energy physics data. It also evaluates the environment provided by INGRES, particularly its usefulness in code development and its Fortran interface. Specifics of the database design we have chosen are detailed as well. (orig.)

  17. Quality control in diagnostic radiology: software (Visual Basic 6) and database applications

    International Nuclear Information System (INIS)

    Md Saion Salikin; Muhammad Farid Abdul Khalid

    2002-01-01

    Quality Assurance programmes in diagnostic radiology are being implemented by the Ministry of Health (MoH) in Malaysia. Under this programme, the performance of an x-ray machine used for diagnostic purposes is tested by using an approved procedure, commonly known as quality control in diagnostic radiology. The quality control or performance tests are carried out by a class H licence holder issued under the Atomic Energy Licensing Act 1984. There are a few computer applications (software) available in the market which can be used for this purpose. A computer application (software) using Visual Basic 6 and Microsoft Access is being developed to expedite data handling, analysis and storage as well as report writing of the quality control tests. In this paper important features of the software for quality control tests are explained in brief. A simple database linked to the software is being established for this purpose. Problems encountered in the preparation of the database are discussed in this paper. A few examples of practical usage of the software and database applications are presented in brief. (Author)

  18. CSE database: extended annotations and new recommendations for ECG software testing.

    Science.gov (United States)

    Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie

    2017-08-01

    Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on the ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for estimating the quality of diagnostic software were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. The 4R consensus represents the correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the accuracy recommendations. The accuracy was determined in terms of sensitivity (79.20-86.81%), positive predictive value (79.10-87.11%), and the Jaccard coefficient (72.21-81.14%). Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. This quantification of correct-classification accuracy is unique. Diagnostic software developers can objectively evaluate the success of their algorithms and promote their further development. The annotations and recommendations proposed in this work will allow
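
    The agreement measures cited above can be computed per record from the set of diagnoses a reader assigns versus the 4R-consensus set; a short sketch with invented diagnosis labels:

```python
# Agreement metrics between one reader's diagnoses and the 4R-consensus
# diagnoses for a single ECG record, each represented as a set of
# diagnosis labels. The example labels below are invented.

def agreement(predicted: set, consensus: set):
    tp = len(predicted & consensus)          # diagnoses found by both
    fn = len(consensus - predicted)          # consensus diagnoses missed
    fp = len(predicted - consensus)          # extra diagnoses asserted
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)                     # positive predictive value
    jaccard = tp / len(predicted | consensus)
    return sensitivity, ppv, jaccard

se, ppv, j = agreement({"MI", "LBBB", "AFIB"}, {"MI", "LBBB", "LVH"})
print(f"sensitivity={se:.2%} PPV={ppv:.2%} Jaccard={j:.2%}")
```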

  19. Thermodynamic and volumetric databases and software for magnesium alloys

    Science.gov (United States)

    Kang, Youn-Bae; Aliravci, Celil; Spencer, Philip J.; Eriksson, Gunnar; Fuerst, Carlton D.; Chartrand, Patrice; Pelton, Arthur D.

    2009-05-01

    Extensive databases for the thermodynamic and volumetric properties of magnesium alloys have been prepared by critical evaluation, modeling, and optimization of available data. Software has been developed to access the databases to calculate equilibrium phase diagrams, heat effects, etc., and to follow the course of equilibrium or Scheil-Gulliver cooling, calculating not only the amounts of the individual phases, but also of the microstructural constituents.
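
    The software performs full multicomponent calculations from the databases; as a minimal illustration of Scheil-Gulliver cooling, the classic Scheil equation for a binary alloy with a constant partition coefficient can be evaluated directly (parameter values below are invented):

```python
import numpy as np

# Classic Scheil equation for a binary alloy with constant partition
# coefficient k: the solid forming at fraction solid fs has composition
#   Cs = k * C0 * (1 - fs)**(k - 1)
# Parameter values are illustrative only; the real software draws phase
# data from the thermodynamic databases.

k, C0 = 0.3, 2.0                 # partition coefficient, alloy composition (wt%)
fs = np.linspace(0.0, 0.99, 5)   # fraction solid
Cs = k * C0 * (1.0 - fs) ** (k - 1.0)

for f, c in zip(fs, Cs):
    print(f"fs={f:.2f}  Cs={c:.2f} wt%")
```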

  20. Data collection procedures for the Software Engineering Laboratory (SEL) database

    Science.gov (United States)

    Heller, Gerard; Valett, Jon; Wild, Mary

    1992-01-01

    This document is a guidebook to collecting software engineering data on software development and maintenance efforts, as practiced in the Software Engineering Laboratory (SEL). It supersedes the document entitled Data Collection Procedures for the Rehosted SEL Database, number SEL-87-008 in the SEL series, which was published in October 1987. It presents procedures to be followed on software development and maintenance projects in the Flight Dynamics Division (FDD) of Goddard Space Flight Center (GSFC) for collecting data in support of SEL software engineering research activities. These procedures include detailed instructions for the completion and submission of SEL data collection forms.

  1. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    Science.gov (United States)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  2. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  3. Concierge: Personal database software for managing digital research resources

    Directory of Open Access Journals (Sweden)

    Hiroyuki Sakai

    2007-11-01

    Full Text Available This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp).

  4. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    Science.gov (United States)

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

    This article describes the development of new Web-based failure database software for orthopaedic implants. The software is based on the B/S mode; ASP dynamic web technology is used as the main development language to achieve data interactivity, and Microsoft Access is used to create the database. These mature technologies make the software easy to extend and upgrade. In this article, the design and development ideas of the software, its working process and functions, as well as its main technical features, are presented. With this software, many different types of failure events of orthopaedic implants can be stored and the failure data can be statistically analyzed; at the macroscopic level, it can be used to evaluate the reliability of orthopaedic implants and operations, and it can ultimately guide doctors in improving the level of clinical treatment.

  5. Software configuration management plan for the Hanford site technical database

    International Nuclear Information System (INIS)

    GRAVES, N.J.

    1999-01-01

    The Hanford Site Technical Database (HSTD) is used as the repository/source for the technical requirements baseline and programmatic data input via the Hanford Site and major Hanford Project Systems Engineering (SE) activities. The Hanford Site SE effort has created an integrated technical baseline for the Hanford Site that supports SE processes at the Site and project levels which is captured in the HSTD. The HSTD has been implemented in Ascent Logic Corporation (ALC) Commercial Off-The-Shelf (COTS) package referred to as the Requirements Driven Design (RDD) software. This Software Configuration Management Plan (SCMP) provides a process and means to control and manage software upgrades to the HSTD system

  6. Composing simulations using persistent software components

    Energy Technology Data Exchange (ETDEWEB)

    Holland, J.V.; Michelsen, R.E.; Powell, D.R.; Upton, S.C.; Thompson, D.R.

    1999-03-01

    The traditional process for developing large-scale simulations is cumbersome, time consuming, costly, and in some cases, inadequate. The topics of software components and component-based software engineering are being explored by software professionals in academic and industrial settings. A component is a well-delineated, relatively independent, and replaceable part of a software system that performs a specific function. Many researchers have addressed the potential to derive a component-based approach to simulations in general, and a few have focused on military simulations in particular. In a component-based approach, functional or logical blocks of the simulation entities are represented as coherent collections of components satisfying explicitly defined interface requirements. A simulation is a top-level aggregate comprised of a collection of components that interact with each other in the context of a simulated environment. A component may represent a simulation artifact, an agent, or any entity that can generate events affecting itself, other simulated entities, or the state of the system. The component-based approach promotes code reuse, contributes to reducing time spent validating or verifying models, and promises to reduce the cost of development while still delivering tailored simulations specific to analysis questions. The Integrated Virtual Environment for Simulation (IVES) is a composition-centered framework to achieve this potential. IVES is a Java implementation of simulation composition concepts developed at Los Alamos National Laboratory for use in several application domains. In this paper, its use in the military domain is demonstrated via the simulation of dismounted infantry in an urban environment.

  7. Development of the software for the component reliability database system of Korean nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Kim, Seung Hwan; Choi, Sun Young [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    A study was performed to develop a system for a component reliability database, consisting of a database to store the reliability data and software to analyze the reliability data. This system is a part of KIND (Korea Information System for Nuclear Reliability Database). The MS-SQL database is used to store the component population data, component maintenance history, and the results of reliability analysis. Two software tools were developed for the component reliability system. One is KIND-InfoView for data storing, retrieving and searching. The other is KIND-CompRel for the statistical analysis of component reliability. 4 refs., 13 figs., 7 tabs. (Author)

  8. Models for composing software : an analysis of software composition and objects

    NARCIS (Netherlands)

    Bergmans, Lodewijk

    1999-01-01

    In this report, we investigate component-based software construction with a focus on composition. In particular we try to analyze the requirements and issues for components and software composition. As a means to understand this research area, we introduce a canonical model for representing

  9. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. These Grid services provide a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids

  10. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Full Text Available Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received). However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
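
    A sketch of the distributed-likelihood idea described above, with logistic regression standing in for the site-stratified Cox model for brevity: each site shares only the gradient of its local log-likelihood, never raw records. The data here are simulated:

```python
import numpy as np

# Distributed maximum-likelihood fitting in the spirit the article
# describes: each site computes the gradient of its local log-likelihood
# and only these summaries are aggregated, never raw data. Logistic
# regression stands in for the Cox model; the data are simulated.

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))
         for _ in range(3)]  # (features, outcome) held privately per site

def local_gradient(beta, X, y):
    """Gradient of the local logistic log-likelihood at beta."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (y - p)

beta = np.zeros(3)
n_total = sum(len(y) for _, y in sites)
for _ in range(200):  # the central coordinator iterates
    grad = sum(local_gradient(beta, X, y) for X, y in sites)
    beta += 0.01 * grad / n_total

print("pooled-equivalent estimate:", np.round(beta, 3))
```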

  11. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  12. A Survey of Bioinformatics Database and Software Usage through Mining the Literature.

    Directory of Open Access Journals (Sweden)

    Geraint Duck

    Full Text Available Computer-based resources are central to much, if not most, biological and medical research. However, while there is an ever expanding choice of bioinformatics resources to use, described within the biomedical literature, little work to date has provided an evaluation of the full range of availability or levels of usage of database and software resources. Here we use text mining to process the PubMed Central full-text corpus, identifying mentions of databases or software within the scientific literature. We provide an audit of the resources contained within the biomedical literature, and a comparison of their relative usage, both over time and between the sub-disciplines of bioinformatics, biology and medicine. We find that trends in resource usage differ between these domains. The bioinformatics literature emphasises novel resource development, while database and software usage within biology and medicine is more stable and conservative. Many resources are only mentioned in the bioinformatics literature, with a relatively small number making it out into general biology, and fewer still into the medical literature. In addition, many resources are seeing a steady decline in their usage (e.g., BLAST, SWISS-PROT), though some are instead seeing rapid growth (e.g., the GO, R). We find a striking imbalance in resource usage, with the top 5% of resource names (133 names) accounting for 47% of total usage, and over 70% of resources extracted being mentioned only once each. While these results highlight the dynamic and creative nature of bioinformatics research, they raise questions about software reuse, choice and the sharing of bioinformatics practice. Is it acceptable that so many resources are apparently never reused? Finally, our work is a step towards automated extraction of scientific method from text. We make the dataset generated by our study available under the CC0 license here: http://dx.doi.org/10.6084/m9.figshare.1281371.
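
    A toy version of the survey's counting step, tallying mentions of known resource names in article text, is sketched below; the real study text-mined the full PubMed Central corpus, and the articles here are invented:

```python
import re
from collections import Counter

# Scan article text for mentions of known resource names and tally
# usage. The resource list and example sentences are invented; the
# actual study extracted resource names from the full PMC corpus.

RESOURCES = ["BLAST", "SWISS-PROT", "GO", "R"]
articles = [
    "Sequences were compared with BLAST against SWISS-PROT.",
    "Enrichment used the GO; statistics were computed in R.",
    "We ran BLAST twice and annotated with the GO.",
]

counts = Counter()
for text in articles:
    for name in RESOURCES:
        # Word-boundary match so "R" does not hit the middle of words.
        counts[name] += len(re.findall(rf"\b{re.escape(name)}\b", text))

print(counts.most_common())
```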

  13. Component Composability Issues in Object-Oriented Programming

    NARCIS (Netherlands)

    Aksit, Mehmet; Tekinerdogan, B.

    1997-01-01

    Building software from reusable components is considered important in reducing development costs. Object-oriented languages such as C++, Smalltalk and Java, however, are not capable of expressing certain aspects of applications in a composable way. Software engineers may experience difficulties in

  14. Management Guidelines for Database Developers' Teams in Software Development Projects

    Science.gov (United States)

    Rusu, Lazar; Lin, Yifeng; Hodosi, Georg

    The worldwide job market for database developers (DBDs) has been continually increasing over the last several years. In some companies, DBDs are organized as a special team (the DBD team) to support other projects and roles. As a new role, the DBD team faces a major problem: there are no management guidelines for it. The team manager does not know which kinds of tasks should be assigned to this team and what practices should be used during DBDs' work. Therefore, in this paper we have developed a set of management guidelines, which includes 8 fundamental tasks and 17 practices from the software development process, by using two methodologies, the Capability Maturity Model (CMM) and agile software development (in particular Scrum), in order to improve the DBD team's work. Moreover, the management guidelines developed here have been complemented with practices from the authors' experience in this area and have been evaluated in the case of a software company. The management guidelines for DBD teams presented in this paper could be very useful for other companies that use a DBD team and could contribute towards an increase in the efficiency of these teams in their work on software development projects.

  15. Stories Matter: Conceptual Challenges in the Development of Oral History Database Building Software

    Directory of Open Access Journals (Sweden)

    Erin Jessee

    2010-11-01

    Full Text Available Stories Matter is new oral history database building software designed by an interdisciplinary team of oral historians and a software engineer affiliated with the Centre for Oral History and Digital Storytelling at Concordia University in Montreal, Quebec, Canada. It encourages a shift away from transcription, enabling oral historians to continue to interact with their interviews in an efficient manner without compromising the greater life history context of their interviewees. This article addresses some of the conceptual challenges that arose when developing this software. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs110119

  16. Composing Interactive Dance Pieces for the MotionComposer, a device for Persons with Disabilities

    OpenAIRE

    Bergsland, Andreas; Wechsler, Robert

    2015-01-01

    The authors have developed a new hardware/software device for persons with disabilities (the MotionComposer), and in the process created a number of interactive dance pieces for non- disabled professional dancers. The paper briefly describes the hardware and motion tracking software of the device before going into more detail concerning the mapping strategies and sound design applied to three interactive dance pieces. The paper concludes by discussing a particular philosophy championing trans...

  17. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    Science.gov (United States)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation measures, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane, but critical step in the application workflows. Translation steps can introduce errors, misrepresentations of data, slow execution time, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the Parflow integrated hydrologic model to demonstrate the impact translation tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community approved collection of data transformation components. The components should be self-contained composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement. Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of
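
    The composition pattern described above can be sketched in a few lines: each translation measure is a small self-contained unit, and a pipeline composes them so data stream through a single process with no intermediate files. The units and values below are illustrative, not the authors' actual WRF-to-ParFlow tools:

```python
from functools import reduce

# Each translation step is a small self-contained unit; a pipeline is
# built by composing units so data stream through one process. The
# record fields and example steps are invented for illustration.

def compose(*steps):
    """Compose unary translation steps, applied left to right."""
    return lambda x: reduce(lambda acc, f: f(acc), steps, x)

def subset_hours(records, start=0, end=24):
    """Subset in time: keep records whose hour lies in [start, end)."""
    return [r for r in records if start <= r["hour"] < end]

def kelvin_to_celsius(records):
    """Convert the 2-m temperature field from K to degrees C."""
    return [{**r, "t2m": r["t2m"] - 273.15} for r in records]

pipeline = compose(subset_hours, kelvin_to_celsius)
forcings = [{"hour": h, "t2m": 280.0 + h} for h in range(48)]
print(pipeline(forcings)[:2])
```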

  18. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    Science.gov (United States)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.

  19. A Relational Database Model for Managing Accelerator Control System Software at Jefferson Lab

    International Nuclear Information System (INIS)

    Sally Schaffner; Theodore Larrieu

    2001-01-01

    The operations software group at the Thomas Jefferson National Accelerator Facility faces a number of challenges common to facilities which manage a large body of software developed in-house. Developers include members of the software group, operators, hardware engineers and accelerator physicists.One management problem has been ensuring that all software has an identified owner who is still working at the lab. In some cases, locating source code for ''orphaned'' software has also proven to be difficult. Other challenges include ensuring that working versions of all operational software are available, testing changes to operational software without impacting operations, upgrading infrastructure software (OS, compilers, interpreters, commercial packages, share/freeware, etc), ensuring that appropriate documentation is available and up to date, underutilization of code reuse, input/output file management,and determining what other software will break if a software package is upgraded. This paper will describe a relational database model which has been developed to track this type of information and make it available to managers and developers.The model also provides a foundation for developing productivity-enhancing tools for automated building, versioning, and installation of software. This work was supported by the U.S. DOE contract No. DE-AC05-84ER40150

  20. Moving-Map Composer Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Develops, tests, and transitions software and algorithms to perform database design, data compression, change detection, data fusion, archival, retrieval,...

  1. Conversion and distribution of bibliographic information for further use on microcomputers with database software such as CDS/ISIS

    International Nuclear Information System (INIS)

    Nieuwenhuysen, P.; Besemer, H.

    1990-05-01

    This paper describes methods to work on microcomputers with data obtained from bibliographic and related databases distributed by online data banks, on CD-ROM or on tape. Also, we mention some user reactions to this technique. We list the different types of software needed to perform these services. Afterwards, we report about our development of software, to convert data so that they can be entered into UNESCO's program named CDS/ISIS (Version 2.3) for local database management on IBM microcomputers or compatibles; this software allows the preservation of the structure of the source data in records, fields, subfields and field occurrences. (author). 10 refs, 1 fig

  2. Software and Database Usage on Metabolomic Studies: Using XCMS on LC-MS Data Analysis

    Directory of Open Access Journals (Sweden)

    Mustafa Celebier

    2014-04-01

    Full Text Available The metabolome is the complete set of small-molecule metabolites found in a cell or a single organism. Metabolomics is the scientific study that determines and identifies the chemicals in the metabolome with advanced analytical techniques. Nowadays, elucidating the molecular mechanism of a disease by genome and proteome analysis alone is not sufficient; instead, a holistic assessment that includes metabolomic studies provides rational and accurate results. Metabolite levels in an organism are associated with cellular functions. Thus, determining metabolite amounts identifies the phenotype of a cell or tissue related to genetic and other variations. Even though the analysis of metabolites for medical diagnosis and therapy has been performed for a long time, studies to improve the analysis methods for metabolite profiling have increased recently. Applications of metabolomics include the identification of biomarkers, enzyme-substrate interactions, drug-activity studies, metabolic pathway analysis and other studies related to systems biology. Preprocessing and computing of the data obtained from LC-MS, GC-MS, CE-MS and NMR for metabolite profiling help avoid time-consuming manual data analysis and the random errors it can introduce. In addition, such preprocessing allows the identification of low-abundance metabolites that cannot be analyzed by manual processing. Therefore, the use of software and databases for this purpose cannot be ignored. In this study, the software and databases used in metabolomics are briefly presented and the capabilities of this software for metabolite profiling are evaluated. In particular, the performance of one of the most popular software packages, XCMS, in evaluating LC-MS results for metabolomics is reviewed. In the near future, metabolomics with software and database support is estimated to be a routine
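
    XCMS itself is an R package; as a language-neutral toy of its feature-grouping step, the sketch below groups peaks detected in different samples that agree within m/z and retention-time tolerances (peak values invented):

```python
# Toy illustration of the feature-grouping step that tools like XCMS
# perform: peaks from different samples are merged into one feature
# when they agree within an m/z and retention-time tolerance.
# Peak values are invented; this is not XCMS's actual algorithm.

def group_features(peaks, mz_tol=0.01, rt_tol=5.0):
    """peaks: list of (sample, mz, rt). Returns list of grouped features."""
    groups = []
    for sample, mz, rt in sorted(peaks, key=lambda p: p[1]):
        for g in groups:
            if abs(g["mz"] - mz) <= mz_tol and abs(g["rt"] - rt) <= rt_tol:
                g["members"].append(sample)
                break
        else:
            groups.append({"mz": mz, "rt": rt, "members": [sample]})
    return groups

peaks = [("s1", 180.063, 120.0), ("s2", 180.064, 122.5), ("s1", 255.232, 300.0)]
print(group_features(peaks))
```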

  3. Development and Demonstration of Material Properties Database and Software for the Simulation of Flow Properties in Cementitious Materials

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-03-30

    This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to develop a new Cementitious Barriers Project (CBP) software module designated FLOExcel. FLOExcel incorporates a uniform database to capture material characterization data and a GoldSim model to define flow properties for both intact and fractured cementitious materials and to estimate Darcy velocity based on a specified hydraulic head gradient and matric tension. The software module includes hydraulic parameters for intact cementitious and granular materials in the database and a standalone GoldSim framework to manipulate the data. The database will be updated with new data as they become available. The software module will later be integrated into the next release of the CBP Toolbox, Version 3.0. This report documents the development efforts for this software module. The FY14 activities described in this report focused on the following two items that form the FLOExcel package: 1) development of a uniform database to capture CBP data for cementitious materials, with particular emphasis on the inclusion and use of the hydraulic properties of the materials; and 2) development of algorithms and a GoldSim user interface to calculate hydraulic flow properties of degraded and fractured cementitious materials. Hydraulic properties are required in simulations of flow through cementitious materials such as Saltstone, waste tank fill grout, and concrete barriers. At SRNL these simulations have been performed using the PORFLOW code as part of Performance Assessments for salt waste disposal and waste tank closure.
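
    The Darcy-velocity estimate mentioned above is a direct application of Darcy's law, q = -K dh/dL; a one-line sketch with an illustrative conductivity value:

```python
# Darcy's law: q = -K * dh/dL. The conductivity and gradient values
# below are illustrative, not values from the CBP database.

K = 1.0e-9      # saturated hydraulic conductivity (m/s), illustrative
dh_dL = -0.05   # hydraulic head gradient (m/m)

q = -K * dh_dL  # Darcy velocity (m/s)
print(f"Darcy velocity: {q:.2e} m/s")
```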

  4. Metadata database and data analysis software for the ground-based upper atmospheric data developed by the IUGONET project

    Science.gov (United States)

    Hayashi, H.; Tanaka, Y.; Hori, T.; Koyama, Y.; Shinbori, A.; Abe, S.; Kagitani, M.; Kouno, T.; Yoshida, D.; Ueno, S.; Kaneda, N.; Yoneda, M.; Tadokoro, H.; Motoba, T.; Umemura, N.; Iugonet Project Team

    2011-12-01

    The Inter-university Upper atmosphere Global Observation NETwork (IUGONET) is a Japanese inter-university project by the National Institute of Polar Research (NIPR), Tohoku University, Nagoya University, Kyoto University, and Kyushu University to build a database of metadata for ground-based observations of the upper atmosphere. The IUGONET institutes/universities have been collecting various types of data with radars, magnetometers, photometers, radio telescopes, helioscopes, etc. at locations all over the world and at altitude layers ranging from the Earth's surface to the Sun. The metadata database will be of great help to researchers in efficiently finding and obtaining these observational data, which are spread across the institutes/universities. This should also facilitate synthetic analysis of multi-disciplinary data, which will lead to new types of research in the upper atmosphere. The project has also been developing software to help researchers download, visualize, and analyze the data provided by the IUGONET institutes/universities. The metadata database system is built on the DSpace platform, an open source software package for digital repositories. The data analysis software is written in the IDL language with the TDAS (THEMIS Data Analysis Software suite) library. These products have just been released for beta testing.
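
    DSpace repositories commonly expose their metadata through the OAI-PMH protocol; whether the IUGONET instance does is not stated in the abstract, so the following Python sketch should be read as a hedged illustration of metadata harvesting with a placeholder endpoint.

```python
# Hedged sketch: harvest Dublin Core titles from a DSpace repository's
# OAI-PMH interface. The endpoint URL is a placeholder, not the actual
# IUGONET address, and oai_dc support is assumed.
import requests
import xml.etree.ElementTree as ET

ENDPOINT = "http://repository.example.org/oai/request"  # placeholder URL
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()

root = ET.fromstring(response.content)
DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"
for title in root.iter(DC_TITLE):
    print(title.text)
```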

  5. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    Science.gov (United States)

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases in the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.

  6. Database Organisation in a Web-Enabled Free and Open-Source Software (foss) Environment for Spatio-Temporal Landslide Modelling

    Science.gov (United States)

    Das, I.; Oberai, K.; Sarathi Roy, P.

    2012-07-01

    Landslides manifest themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the WWW (World Wide Web) promotes the spread of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios from available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets, along with the landslide susceptibility map, were served through a WebGIS client interface. The open source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front-end and PostgreSQL with the PostGIS extension serves as the backend for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings the understanding of landslides and the resulting damage closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.
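
    A PostgreSQL/PostGIS backend of the kind described supports spatio-temporal queries directly in SQL. The sketch below is a hedged Python example; the table and column names are hypothetical, and the 500 m search radius assumes geometries stored in a projected, metre-based coordinate system.

```python
# Hedged sketch of a spatio-temporal PostGIS query: landslide events within
# 500 m of highway segments in the 1982-2009 window. Table and column names
# are hypothetical; psycopg2 is a standard PostgreSQL driver.
import psycopg2

conn = psycopg2.connect(dbname="landslides", user="gis",
                        password="***", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT l.event_id, l.event_date, ST_AsText(l.geom)
        FROM landslide_events AS l
        JOIN highway_segments AS h
          ON ST_DWithin(l.geom, h.geom, 500)  -- metres in a projected CRS
        WHERE l.event_date BETWEEN %s AND %s;
        """,
        ("1982-01-01", "2009-12-31"),
    )
    for event_id, event_date, wkt in cur.fetchall():
        print(event_id, event_date, wkt)
```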

  7. Discussions about acceptance of the free software for management and creation of referencial database for papers

    Directory of Open Access Journals (Sweden)

    Flavio Ribeiro Córdula

    2016-03-01

    Full Text Available Objective. This research aimed to determine, through the Technology Acceptance Model (TAM), the degree of acceptance of the developed software, which allows the construction and management of referential databases of scientific articles and is aimed at assisting the dissemination and retrieval of scientific production stored in digital media. Method. The research is characterized as quantitative, since the TAM, which guided this study, is essentially quantitative. A questionnaire developed according to TAM guidelines was used as the data collection instrument. Results. It was possible to verify that this software, despite needing the fixes and improvements inherent to this type of tool, obtained a relevant degree of acceptance from the sample studied. Considerations. It should also be noted that, although this research was directed at scholars in the field of information science, the idea that justified the creation of the software used in this study may contribute to the development of science in any field of knowledge, by optimizing the results that a search conducted in a specialized database can provide.

  8. Composable Framework Support for Software-FMEA Through Model Execution

    Science.gov (United States)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effects Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  9. A software-based technique enabling composable hierarchical preemptive scheduling for time-triggered applications

    NARCIS (Netherlands)

    Nejad, A.B.; Molnos, A.; Goossens, K.G.W.

    2013-01-01

    Many embedded real-time applications are typically time-triggered and preemptive schedulers are used to execute tasks of such applications. Orthogonally, composable partitioned embedded platforms use preemptive time-division multiplexing mechanism to isolate applications. Existing composable systems

  10. The Database and Data Analysis Software of Radiation Monitoring System

    International Nuclear Information System (INIS)

    Wang Weizhen; Li Jianmin; Wang Xiaobing; Hua Zhengdong; Xu Xunjiang

    2009-01-01

    Shanghai Synchrotron Radiation Facility (SSRF for short) is a third-generation light source being built in China, including a 150 MeV injector, a 3.5 GeV booster, a 3.5 GeV storage ring and a number of beamline stations. The data are fetched by the monitoring computer from collecting modules at the front end and saved in the MySQL database on the managing computer. The data analysis software is coded in Python, a script language, to query, summarize and plot the data of a given monitoring channel during a given period and export it to an external file. In addition, warning events can be queried separately. The website for historical and real-time data inquiry and plotting is coded in PHP. (authors)
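
    The query-summarize-plot-export cycle described can be illustrated with a short Python sketch. The database, table and column names below are hypothetical placeholders, not the actual SSRF schema.

```python
# Hedged sketch: fetch one monitoring channel over a time window from MySQL
# and plot it to an external file. Schema names are hypothetical.
import pymysql
import matplotlib.pyplot as plt

conn = pymysql.connect(host="localhost", user="monitor",
                       password="***", database="radiation")
cur = conn.cursor()
cur.execute(
    "SELECT sample_time, dose_rate FROM channel_readings "
    "WHERE channel_id = %s AND sample_time BETWEEN %s AND %s "
    "ORDER BY sample_time",
    (7, "2009-01-01", "2009-01-31"),
)
rows = cur.fetchall()
conn.close()

times, doses = zip(*rows)
plt.plot(times, doses)
plt.xlabel("time")
plt.ylabel("dose rate")
plt.savefig("channel_7_jan2009.png")  # export to an external file
```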

  11. New tools and methods for direct programmatic access to the dbSNP relational database.

    Science.gov (United States)

    Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
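
    Once a local MySQL copy of dbSNP is installed, task-oriented tables like those the authors describe can be queried programmatically. The sketch below is a hedged Python example; the table and column names are hypothetical stand-ins for the custom task tables, not real dbSNP table names.

```python
# Hedged sketch of programmatic access to a local MySQL copy of dbSNP.
# task_snp_position and its columns are hypothetical placeholders.
import pymysql

conn = pymysql.connect(host="localhost", user="dbsnp",
                       password="***", database="dbsnp_human")
cur = conn.cursor()
rs_numbers = [3, 334, 1800497]  # rs ids without the "rs" prefix
cur.execute(
    "SELECT snp_id, chrom, position FROM task_snp_position "
    "WHERE snp_id IN (%s, %s, %s)",
    rs_numbers,
)
for snp_id, chrom, position in cur.fetchall():
    print(f"rs{snp_id}: chr{chrom}:{position}")
conn.close()
```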

  12. Composing, Analyzing and Validating Software Models

    Science.gov (United States)

    Sheldon, Frederick T.

    1998-10-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system, but they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
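
    To make the SPN idea concrete, here is a minimal sketch of a two-place stochastic Petri net (machine up / machine down) with exponentially timed fail and repair transitions, simulated Gillespie-style to estimate availability. The rates are illustrative, not taken from the research described.

```python
# Minimal stochastic Petri net sketch in the performability spirit of the
# abstract: one token alternates between 'up' and 'down' places via two
# exponentially timed transitions. Rates are illustrative assumptions.
import math
import random

FAIL_RATE = 0.01    # fail transition rate, per hour (assumed)
REPAIR_RATE = 0.5   # repair transition rate, per hour (assumed)

def simulate(horizon: float, seed: int = 1) -> float:
    """Return the fraction of time the 'up' place is marked (availability)."""
    rng = random.Random(seed)
    up = True
    t = up_time = 0.0
    while t < horizon:
        rate = FAIL_RATE if up else REPAIR_RATE  # the one enabled transition
        dwell = -math.log(1.0 - rng.random()) / rate  # exponential holding
        dwell = min(dwell, horizon - t)
        if up:
            up_time += dwell
        t += dwell
        up = not up                               # fire the transition
    return up_time / horizon

print(f"simulated availability: {simulate(1_000_000):.4f}")
# analytic check: REPAIR_RATE / (FAIL_RATE + REPAIR_RATE) ~= 0.9804
```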

  13. Establishment of database system for management of KAERI wastes

    International Nuclear Information System (INIS)

    Shon, J. S.; Kim, K. J.; Ahn, S. J.

    2004-07-01

    Radioactive wastes generated by KAERI have various types, nuclides and characteristics. Managing and controlling these kinds of radioactive wastes requires systematic record keeping, efficient searching and quick statistics. Information about the radioactive waste generated and stored by KAERI is the basic ingredient for constructing a rapid information system for national cooperative management of radioactive waste. In this study, the Radioactive Waste Management Integration System (RAWMIS) was developed. It is aimed at managing records of radioactive wastes, improving the efficiency of management and supporting WACID (Waste Comprehensive Integration Database System), the national radioactive waste integrated safety management system of Korea. The major information of RAWMIS, based on user requirements, covers generation, gathering, transfer, treatment, and storage information for solid waste, liquid waste, gaseous waste and waste related to spent fuel. RAWMIS is composed of a database, software (the interface between user and database), and software for a manager, and it was designed with a client/server structure. RAWMIS will be a useful tool for analyzing radioactive waste management and radiation safety management. The system is also developed to share information with associated companies, and it can be expected to support research and development on radioactive waste treatment.

  14. PmagPy: Software Package for Paleomagnetic Data Analysis and Gateway to the Magnetics Information Consortium (MagIC) Database

    Science.gov (United States)

    Jonestrask, L.; Tauxe, L.; Shaar, R.; Jarboe, N.; Minnett, R.; Koppers, A. A. P.

    2014-12-01

    There are many data types and methods of analysis in rock and paleomagnetic investigations. The MagIC database (http://earthref.org/MAGIC) was designed to accommodate the vast majority of data used in such investigations. Yet getting data from the laboratory into the database, and visualizing and re-analyzing data downloaded from the database, makes special demands on data formatting. There are several recently published programming packages that deal with single types of data: demagnetization experiments (e.g., Lurcock et al., 2012), paleointensity experiments (e.g., Leonhardt et al., 2004), and FORC diagrams (e.g., Harrison et al., 2008). However, there is a need for a unified set of open source, cross-platform software that deals with the great variety of data types in a consistent way and facilitates importing data into the MagIC format, analyzing them and uploading them into the MagIC database. The PmagPy software package (http://earthref.org/PmagPy/cookbook/) comprises such a comprehensive set of tools. It facilitates conversion of many laboratory formats into the common MagIC format and allows interpretation of demagnetization and Thellier-type experimental data. With some 175 programs and over 250 functions, it can be used to create a wide variety of plots and allows manipulation of downloaded data sets as well as preparation of new contributions for uploading to the MagIC database.
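
    One of the most common calculations such packages automate is the Fisher (1953) mean direction. The standalone numpy sketch below illustrates the computation; the declination/inclination values are invented for the example and do not come from the MagIC database.

```python
# Standalone sketch of a Fisher mean direction with precision parameter k
# and alpha-95, the kind of statistic PmagPy automates. Input directions
# are illustrative declination/inclination pairs in degrees.
import numpy as np

def fisher_mean(decs, incs):
    d, i = np.radians(decs), np.radians(incs)
    # unit vectors: x north, y east, z down
    xyz = np.column_stack((np.cos(i) * np.cos(d),
                           np.cos(i) * np.sin(d),
                           np.sin(i)))
    r_vec = xyz.sum(axis=0)
    big_r = np.linalg.norm(r_vec)      # resultant vector length
    n = len(decs)
    mean_dec = np.degrees(np.arctan2(r_vec[1], r_vec[0])) % 360
    mean_inc = np.degrees(np.arcsin(r_vec[2] / big_r))
    k = (n - 1) / (n - big_r)          # Fisher precision parameter
    cos_a95 = 1 - ((n - big_r) / big_r) * (20 ** (1 / (n - 1)) - 1)
    a95 = np.degrees(np.arccos(cos_a95))
    return mean_dec, mean_inc, k, a95

decs = [350.0, 5.0, 10.0, 355.0, 2.0]
incs = [55.0, 60.0, 58.0, 62.0, 57.0]
print(fisher_mean(decs, incs))
```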

  15. Combined protein construct and synthetic gene engineering for heterologous protein expression and crystallization using Gene Composer

    Directory of Open Access Journals (Sweden)

    Walchli John

    2009-04-01

    Full Text Available Abstract Background With the goal of improving yield and success rates of heterologous protein production for structural studies we have developed the database and algorithm software package Gene Composer. This freely available electronic tool facilitates the information-rich design of protein constructs and their engineered synthetic gene sequences, as detailed in the accompanying manuscript. Results In this report, we compare heterologous protein expression levels from native sequences to that of codon engineered synthetic gene constructs designed by Gene Composer. A test set of proteins including a human kinase (P38α), viral polymerase (HCV NS5B), and bacterial structural protein (FtsZ) were expressed in both E. coli and a cell-free wheat germ translation system. We also compare the protein expression levels in E. coli for a set of 11 different proteins with greatly varied G:C content and codon bias. Conclusion The results consistently demonstrate that protein yields from codon engineered Gene Composer designs are as good as or better than those achieved from the synonymous native genes. Moreover, structure guided N- and C-terminal deletion constructs designed with the aid of Gene Composer can lead to greater success in gene to structure work as exemplified by the X-ray crystallographic structure determination of FtsZ from Bacillus subtilis. These results validate the Gene Composer algorithms, and suggest that using a combination of synthetic gene and protein construct engineering tools can improve the economics of gene to structure research.

  16. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ascii input files appropriate to the above mentioned accelerator design programs. In addition, it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally, we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser.

  17. Monitoring caustic injuries from emergency department databases using automatic keyword recognition software.

    Science.gov (United States)

    Vignally, P; Fondi, G; Taggi, F; Pitidis, A

    2011-03-31

    In Italy the European Union Injury Database reports the involvement of chemical products in 0.9% of home and leisure accidents. The Emergency Department registry on domestic accidents in Italy and the Poison Control Centres record that 90% of cases of exposure to toxic substances occur in the home. It is not rare for the effects of chemical agents to be observed in hospitals, with a high potential risk of damage - the rate of this cause of hospital admission is double the domestic injury average. The aim of this study was to monitor injuries caused by caustic agents in Italy using automatic free-text recognition in Emergency Department medical databases. We created a Stata software program to automatically identify caustic or corrosive injury cases using an agent-specific list of keywords. We focused attention on the procedure's sensitivity and specificity. Ten hospitals in six regions of Italy participated in the study. The program identified 112 cases of injury by caustic or corrosive agents. Checking the cases by quality controls (based on manual reading of ED reports), we assessed 99 cases as true positives, i.e. 88.4% of the patients were automatically recognized by the software as being affected by caustic substances (99% CI: 80.6%-96.2%), that is to say 0.59% (99% CI: 0.45%-0.76%) of the whole sample of home injuries, a value almost three times that expected (p < 0.0001) from European codified information. False positives were 11.6% of the recognized cases (99% CI: 5.1%-21.5%). Our automatic procedure for caustic agent identification proved to have excellent recognition capacity with an acceptable level of excess sensitivity. Contrary to our a priori hypothesis, the automatic recognition system identified significantly more agents possessing caustic effects than was predictable on the basis of the values from current codifications reported in the European Database.
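
    The original program was written in Stata; the Python sketch below re-implements the core idea of agent-specific keyword recognition over free-text ED reports. The keyword list and example records are illustrative, not the study's actual lexicon.

```python
# Sketch of automatic keyword recognition for caustic-injury cases in
# emergency department free text. Keywords and records are illustrative.
import re

KEYWORDS = ["caustic", "corrosive", "lye", "sodium hydroxide",
            "hydrochloric acid", "bleach", "drain cleaner"]
pattern = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

reports = [
    "Patient ingested drain cleaner, burns to oral mucosa.",
    "Fell from ladder while painting; wrist fracture.",
    "Skin exposure to sodium hydroxide solution at home.",
]

flagged = [r for r in reports if pattern.search(r)]
print(f"{len(flagged)} of {len(reports)} reports flagged:")
for r in flagged:
    print(" -", r)
```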

  18. An Approach for Composing Services Based on Environment Ontology

    Directory of Open Access Journals (Sweden)

    Guangjun Cai

    2013-01-01

    Full Text Available Service-oriented computing is revolutionizing modern computing paradigms with its aim to boost software reuse and enable business agility. Under this paradigm, new services are fabricated by composing available services. The problem is how to compose heterogeneous services effectively and efficiently, given the high complexity of service composition. Based on environment ontology, this paper introduces a requirement-driven service composition approach. We propose algorithms to decompose the requirement, rules to deduce the relations between services, and an algorithm for composing services. The empirical results and the comparison with other service composition methodologies show that this approach is feasible and efficient.

  19. Database Software for the 1990s.

    Science.gov (United States)

    Beiser, Karl

    1990-01-01

    Examines trends in the design of database management systems for microcomputers and predicts developments that may occur in the next decade. Possible developments are discussed in the areas of user interfaces, database programming, library systems, the use of MARC data, CD-ROM applications, artificial intelligence features, HyperCard, and…

  20. Software Classifications: Trends in Literacy Software Publication and Marketing.

    Science.gov (United States)

    Balajthy, Ernest

    First in a continuing series of reports on trends in marketing and publication of software for literacy education, a study explored the development of a database to track the trends and reported on trends seen in 1995. The final version of the 1995 database consisted of 1011 software titles, 165 of which had been published in 1995 and 846…

  1. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Contents: Data, Databases, and the Software Engineering Process; Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle (Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database); Data and Data Models; Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models (The Hierarchical Model; The Network Model; The Relational Model); The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional

  2. A DICOM based radiotherapy plan database for research collaboration and reporting

    International Nuclear Information System (INIS)

    Westberg, J; Krogh, S; Brink, C; Vogelius, I R

    2014-01-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.

  3. A DICOM based radiotherapy plan database for research collaboration and reporting

    Science.gov (United States)

    Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.

    2014-03-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
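
    The DVH statistics such a system computes can be sketched in a few lines of numpy. In the example below, the dose grid and structure mask are synthetic stand-ins for data that would be parsed from DICOM RT Dose and RT Structure Set files; this is not the .NET implementation described.

```python
# Hedged numpy sketch of a cumulative dose-volume histogram (DVH).
# Dose grid and structure mask are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
dose = rng.gamma(shape=2.0, scale=10.0, size=(50, 50, 50))  # Gy, synthetic
mask = np.zeros_like(dose, dtype=bool)
mask[20:30, 20:30, 20:30] = True                            # synthetic organ

organ_dose = dose[mask]
levels = np.linspace(0.0, organ_dose.max(), 200)
# Cumulative DVH: fraction of structure volume receiving >= each dose level.
dvh = np.array([(organ_dose >= level).mean() for level in levels])

print(f"Dmean = {organ_dose.mean():.1f} Gy")
print(f"V20   = {(organ_dose >= 20.0).mean() * 100:.1f} % of volume")
print(f"DVH starts at {dvh[0]:.2f} and falls to {dvh[-1]:.4f}")
```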

  4. Fuel element database: developer handbook

    International Nuclear Information System (INIS)

    Dragicevic, M.

    2004-09-01

    The fuel elements database which was developed for the Atomic Institute of the Austrian Universities is described. The software uses standards like HTML, PHP and SQL. For the standard installation, freely available software packages such as the MySQL database, the PHP interpreter and JavaScript were used. (nevyjel)

  5. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether production, sales and marketing or internal operations. Every day, databases are accessed to help in strategic decisions. Meeting such needs therefore requires high-quality security and availability. Those needs can be met using a DBMS (Database Management System), which is, in fact, software for a database. Technically speaking, it is software that uses a standard method of cataloguing, recovery, and running different data queries. A DBMS manages incoming data, organizes it, and provides ways for users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization and monitoring.

  6. A Mathematics Software Database Update.

    Science.gov (United States)

    Cunningham, R. S.; Smith, David A.

    1987-01-01

    Contains an update of an earlier listing of software for mathematics instruction at the college level. Topics are: advanced mathematics, algebra, calculus, differential equations, discrete mathematics, equation solving, general mathematics, geometry, linear and matrix algebra, logic, statistics and probability, and trigonometry. (PK)

  7. Software for mass spectrometer control

    International Nuclear Information System (INIS)

    Curuia, Marian; Culcer, Mihai; Anghel, Mihai; Iliescu, Mariana; Trancota, Dan; Kaucsar, Martin; Oprea, Cristiana

    2004-01-01

    The paper describes a software application for control of the MAT 250 mass spectrometer, which was refurbished. The spectrometer was brought up to date using a hardware structure on top of which the software application for mass spectrometer control was developed. The software application is composed of dedicated modules that perform given operations. The instructions that these modules have to perform are generated by a principal module, which makes possible the exchange of information between the modules that compose the software application. The use of a modular structure makes it easy to add new functions in the future. The application developed in our institute made possible the transformation of the MAT 250 mass spectrometer into a device endowed with new-generation tools. (authors)
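
    The principal-module pattern described can be illustrated with a short Python sketch: a dispatcher routes commands to dedicated modules. Module names and commands are invented for the example, and real instrument I/O is stubbed out with print statements.

```python
# Hedged sketch of the modular structure described: a principal module
# routes instrument commands to dedicated modules. Names are illustrative.
class IonSourceModule:
    def handle(self, command: str) -> None:
        print(f"[ion source] executing {command!r}")

class MagnetModule:
    def handle(self, command: str) -> None:
        print(f"[magnet] executing {command!r}")

class PrincipalModule:
    """Generates instructions and exchanges information between modules."""
    def __init__(self) -> None:
        self.modules = {"source": IonSourceModule(),
                        "magnet": MagnetModule()}

    def dispatch(self, target: str, command: str) -> None:
        self.modules[target].handle(command)

controller = PrincipalModule()
controller.dispatch("source", "set emission current 1.0 mA")
controller.dispatch("magnet", "scan mass range 40-250 u")
```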

  8. Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform

    Directory of Open Access Journals (Sweden)

    List Markus

    2017-06-01

    Full Text Available Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach by example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.

  9. Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.

    Science.gov (United States)

    List, Markus

    2017-06-10

    Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach by example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
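
    A Docker Compose setup of the described kind is driven by a single YAML file. The sketch below is a hypothetical illustration of the pattern, reduced to two web applications sharing one database; image names, ports and credentials are placeholders, not the authors' actual configuration.

```yaml
# Hypothetical docker-compose.yml: two web applications and a shared
# database, in the spirit of the platform described. All names are
# illustrative placeholders.
version: "3"
services:
  screening-web:
    image: example/screening-web:latest
    ports:
      - "8080:8080"
    depends_on:
      - db
  results-viewer:
    image: example/results-viewer:latest
    ports:
      - "8081:8080"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```

    With such a file in place, `docker-compose pull` followed by `docker-compose up -d` brings up the whole stack, which is presumably the kind of two-line deployment the abstract refers to.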

  10. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. Finally, the reserved documents database was developed to manage the documents and reports collected since project accomplishment.

  11. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. Finally, the reserved documents database was developed to manage the documents and reports collected since project accomplishment.

  12. A state-of-the-art report on construction of a bibliographic information database for the ETDE

    International Nuclear Information System (INIS)

    Oh, Jeong Hoon; Kim, Tae Whan; Yi, Ji Ho; Choi, Kwang; Chun, Young Chun; Yoo, Jae Bok; Yoo, An Na

    2001-11-01

    This report describes a state-of-the-art effort on construction of a bibliographic information database for the ETDE (Energy Technology Data Exchange). Energy technology materials were selected and the necessary information was extracted from them. The analyzed materials were input according to the input rules of the ETDE format and verified with the input software (WinFIBRE). With the help of the National Energy Information Consortium, composed of 8 organizations, domestic information has been gathered and submitted to ETDE/OA for the exchange of bibliographic information. The ETDE database is expected to help energy technology information users in their R and D and in improving energy efficiency through the ETIS system.

  13. Software configuration management

    CERN Document Server

    Keyes, Jessica

    2004-01-01

    Software Configuration Management discusses the framework from a standards viewpoint, using the original DoD MIL-STD-973 and EIA-649 standards to describe the elements of configuration management within a software engineering perspective. Divided into two parts, the first section is composed of 14 chapters that explain every facet of configuration management related to software engineering. The second section consists of 25 appendices that contain many valuable real world CM templates.

  14. PARTOS - Passive and Active Ray TOmography Software: description and preliminary analysis using TOMO-ETNA experiment’s dataset

    Directory of Open Access Journals (Sweden)

    Alejandro Díaz-Moreno

    2016-09-01

    Full Text Available In this manuscript we present PARTOS (Passive Active Ray TOmography Software), a new user-friendly seismic tomography package based on joint inversion of active and passive seismic sources. The code has been developed on the basis of two well-known and widely used tomographic algorithms (LOTOS and ATOM-3D), providing a robust set of algorithms. The dataset used to set up and test the program was provided by the TOMO-ETNA experiment. The TOMO-ETNA database is a large, high-quality dataset that includes active and passive seismic sources recorded during a period of 4 months in 2014. We performed a series of synthetic tests in order to estimate the resolution and robustness of the solutions. Real data inversion was carried out using 3 different subsets: i) active data; ii) passive data; and iii) the joint dataset. The active database is composed of a total of 16,950 air-gun shots recorded during 1 month, and the passive database includes 452 local and regional earthquakes recorded during 4 months. This large dataset provides a high ray density within the study region. The combination of active and passive seismic data, together with the high quality of the database, permits a new tomographic view of the region under study that was not previously possible. A user guide for the PARTOS software is also provided, in order to facilitate implementation for new users.

  15. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  16. Database for propagation models

    Science.gov (United States)

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and software towards the desired end. This situation could be eased considerably if an easily accessible propagation database were created containing all the accepted (standardized) propagation phenomena models approved by the propagation research community; the handling of data would also become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to its contents. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.

  17. Validation of SmartRank: A likelihood ratio software for searching national DNA databases with complex DNA profiles.

    Science.gov (United States)

    Benschop, Corina C G; van de Merwe, Linda; de Jong, Jeroen; Vanvooren, Vanessa; Kempenaers, Morgane; Kees van der Beek, C P; Barni, Filippo; Reyes, Eusebio López; Moulin, Léa; Pene, Laurent; Haned, Hinda; Sijen, Titia

    2017-07-01

    Searching a national DNA database with complex and incomplete profiles usually yields very large numbers of possible matches that can present many candidate suspects to be further investigated by the forensic scientist and/or police. Current practice in most forensic laboratories consists of ordering these 'hits' based on the number of matching alleles with the searched profile. Thus, candidate profiles that share the same number of matching alleles are not differentiated, and due to the lack of other ranking criteria for the candidate list it may be difficult to discern a true match from the false positives or to notice that all candidates are in fact false positives. SmartRank was developed to put forward only relevant candidates and rank them accordingly. The SmartRank software computes a likelihood ratio (LR) for the searched profile against each profile in the DNA database and ranks database entries above a defined LR threshold according to the calculated LR. In this study, we examined, for mixed DNA profiles of variable complexity, whether the true donors are retrieved, the number of false positives above an LR threshold, and the ranking position of the true donors. Using 343 mixed DNA profiles, over 750 SmartRank searches were performed. In addition, the performance of SmartRank and CODIS was compared for DNA database searches, and SmartRank was found complementary to CODIS. We also describe the applicable domain of SmartRank and provide guidelines. The SmartRank software is open-source and freely available. Using the best practice guidelines, SmartRank enables obtaining investigative leads in criminal cases lacking a suspect. Copyright © 2017 Elsevier B.V. All rights reserved.
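
    The ranking step can be sketched independently of the genetics: compute an LR for each database profile, keep entries above a threshold, and sort by LR. In the Python sketch below, `compute_lr` is a deliberately simplistic placeholder, not a forensic probabilistic genotyping model; profiles, loci and the threshold are invented for illustration.

```python
# Hedged sketch of SmartRank-style ranking. compute_lr is a toy stand-in;
# a real implementation would use a probabilistic genotyping model.
LR_THRESHOLD = 100.0  # illustrative threshold

def compute_lr(mixture, candidate) -> float:
    """Toy score: count loci where all candidate alleles appear in the
    mixture, then exponentiate. Not a forensic likelihood ratio."""
    shared = sum(1 for locus, alleles in candidate.items()
                 if set(alleles) <= set(mixture.get(locus, ())))
    return 10.0 ** shared

def rank_database(mixture, database):
    scored = ((name, compute_lr(mixture, profile))
              for name, profile in database.items())
    hits = [(name, lr) for name, lr in scored if lr >= LR_THRESHOLD]
    return sorted(hits, key=lambda item: item[1], reverse=True)

mixture = {"D3S1358": (14, 15, 16), "vWA": (17, 18)}
database = {
    "entry-001": {"D3S1358": (14, 15), "vWA": (17, 18)},
    "entry-002": {"D3S1358": (12, 13), "vWA": (14, 15)},
}
print(rank_database(mixture, database))  # only entry-001 is put forward
```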

  18. EMMA: a new paradigm in configurable software

    International Nuclear Information System (INIS)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-01-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  19. EMMA: a new paradigm in configurable software

    Science.gov (United States)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-10-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  20. GaMeTix – new software for management of MCQ databases

    Directory of Open Access Journals (Sweden)

    Dimitrolos Krajčí

    2015-12-01

    Full Text Available We have developed new software named GaMeTix for the management of large collections of examination questions written in a variety of MCQ (Multiple Choice Question) formats. The application provides a wide range of functions, such as collecting and editing sets of questions, generating electronic versions of examination tests, printing examination paper sheets and exporting sets of questions to a plain text document for hard copy archiving or transfer to specific electronic testing applications. The content of the database is searchable according to several criteria, using sets of filters that characterize each question. Collections of MCQ questions can be divided or merged together according to the results of the filtering function. Examination questions can be complemented with pictures or diagrams in .jpg format. GaMeTix is a portable, freeware application that runs on MS Windows operating systems.

  1. A service based component model for composing and exploring MPSoC platforms

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan

    2008-01-01

    This paper presents an abstract service based modelling method for use in performance estimation and design space exploration of Multi Processor System On Chip (MPSoC) based systems. The method provides the infrastructure for composing abstract hardware and software models of stream based systems which can be used to produce detailed quantitative information regarding runtime properties of a given system through simulations. The method is based on a service oriented model of computation which is a modified version of Hierarchical Coloured Petri Nets.

  2. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  3. eSoftwareList

    Data.gov (United States)

    US Agency for International Development — USAID Software Database reporting tool created in Oracle Application Express (APEX). This version provides read only access to a database view of the JIRA SAR...

  4. The CEBAF Element Database and Related Operational Software

    Energy Technology Data Exchange (ETDEWEB)

    Larrieu, Theodore [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Slominski, Christopher [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Keesee, Marie [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Turner, Dennison [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Joyce, Michele [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)

    2015-09-01

    The newly commissioned 12GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.

  5. The 7 C's for Creating Living Software: A Research Perspective for Quality-Oriented Software Engineering

    NARCIS (Netherlands)

    Aksit, Mehmet

    2004-01-01

    This article proposes the 7 C's for realizing quality-oriented software engineering practices. All the desired qualities of this approach are expressed in short by the term living software. The 7 C's are: Concern-oriented processes, Canonical models, Composable models, Certifiable models,

  6. Software - Naval Oceanography Portal

    Science.gov (United States)


  7. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automatic information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and the identification of significant differences between SQL implementations. Finally, the software developed and the administrator's and installation guides are described. (orig.)

  8. Study of relational nuclear databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Guo Zhiyu; Liu Wenlong; Ye Weiguo; Feng Yuqing; Song Xiangxiang; Huang Gang; Hong Yingjue; Liu Tinjin; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Liu Chi; Chen Jiaer; Huang Xiaolong

    2004-01-01

    A relational nuclear database management and web-based services software system has been developed. Its objective is to allow users to access numerical and graphical representations of nuclear data and to easily reconstruct nuclear data in the original standardized formats from the relational databases. It presents 9 relational nuclear libraries: 5 ENDF-format neutron reaction databases (BROND, CENDL, ENDF, JEF and JENDL), the ENSDF database, the EXFOR database, the IAEA Photonuclear Data Library and the charged particle reaction data from the FENDL database. The computer programs providing support for database management and data retrieval are based on the Linux implementation of PHP and the MySQL software, and are platform-independent. The first version of this software was officially released in September 2001.

  9. NIMS structural materials databases and cross search engine - MatNavi

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, M.; Xu, Y.; Murata, M.; Tanaka, H.; Kamihira, K.; Kimura, K. [National Institute for Materials Science, Tokyo (Japan)

    2007-06-15

    Materials Database Station (MDBS) of the National Institute for Materials Science (NIMS) owns the world's largest Internet materials database for academic and industrial purposes, which is composed of twelve databases: five concerning structural materials, five concerning basic physical properties, one for superconducting materials and one for polymers. All of these databases are open to Internet access at the website http://mits.nims.go.jp/en. Online tools for predicting properties of polymers and composite materials are also available. The NIMS structural materials databases are composed of the structural materials data sheet online version (creep, fatigue, corrosion and space-use materials strength), a microstructure database for crept materials, a pressure vessel materials database and CCT diagrams for welding. (orig.)

  10. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as to provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  11. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database for integrated management of Liquid Metal Reactor Design Technology Development, using Web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD Database, a Team Cooperation System, and Reserved Documents. The Results Database holds research results from phase II of Liquid Metal Reactor Design Technology Development under mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members about research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage collected data and documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER.

  12. COMPOSE-HPC: A Transformational Approach to Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E [ORNL; Allan, Benjamin A. [Sandia National Laboratories (SNL); Armstrong, Robert C. [Sandia National Laboratories (SNL); Chavarria-Miranda, Daniel [Pacific Northwest National Laboratory (PNNL); Dahlgren, Tamara L. [Lawrence Livermore National Laboratory (LLNL); Elwasif, Wael R [ORNL; Epperly, Tom [Lawrence Livermore National Laboratory (LLNL); Foley, Samantha S [ORNL; Hulette, Geoffrey C. [Sandia National Laboratories (SNL); Krishnamoorthy, Sriram [Pacific Northwest National Laboratory (PNNL); Prantl, Adrian [Lawrence Livermore National Laboratory (LLNL); Panyala, Ajay [Louisiana State University; Sottile, Matthew [Galois, Inc.

    2012-04-01

    The goal of the COMPOSE-HPC project is to 'democratize' tools for automatic transformation of program source code so that it becomes tractable for the developers of scientific applications to create and use their own transformations reliably and safely. This paper describes our approach to this challenge, the creation of the KNOT tool chain, which includes tools for the creation of annotation languages to control the transformations (PAUL), to perform the transformations (ROTE), and optimization and code generation (BRAID), which can be used individually and in combination. We also provide examples of current and future uses of the KNOT tools, which include transforming code to use different programming models and environments, providing tests that can be used to detect errors in software or its execution, as well as composition of software written in different programming languages, or with different threading patterns.

  13. Updates on resources, software tools, and databases for plant proteomics in 2016-2017.

    Science.gov (United States)

    Misra, Biswapriya B

    2018-02-08

    Proteomics data processing, annotation, and analysis can often present major hurdles in large-scale high-throughput bottom-up proteomics experiments. Given the recent rise in protein-based big datasets being generated, efforts in in silico tool development have increased at an unprecedented rate, so much so that it has become increasingly difficult to keep track of all the advances in a particular academic year. However, these tools benefit the plant proteomics community in circumventing critical issues in data analysis and visualization, as these continually developing open-source and community-developed tools hold potential for future research efforts. This review introduces and summarizes more than 50 software tools, databases, and resources developed and published during 2016-2017, under the following categories: tools for data pre-processing and analysis, statistical analysis tools, peptide identification tools, databases and spectral libraries, and data visualization and interpretation tools. Additionally, the author delineates the current and most commonly used proteomics tools in order to introduce novice readers to this -omics discovery platform. Finally, efforts in data archiving and validation datasets for the community are discussed as well. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. The Effect of Superstar Software on Hardware Sales in System Markets

    OpenAIRE

    Binken, Jeroen; Stremersch, Stefan

    2008-01-01

    Systems are composed of complementary products (e.g., video game systems are composed of the video game console and video games). Prior literature on indirect network effects argues that, in system markets, sales of the primary product (often referred to as "hardware") largely depend on the availability of complementary products (often referred to as "software"). Mathematical and empirical analyses have almost exclusively operationalized software availability as software quantity....

  15. High-Level software requirements specification for the TWRS controlled baseline database system

    International Nuclear Information System (INIS)

    Spencer, S.G.

    1998-01-01

    This Software Requirements Specification (SRS) is an as-built document that presents the Tank Waste Remediation System (TWRS) Controlled Baseline Database (TCBD) in its current state. The system was originally known as the Performance Measurement Control System (PMCS), and conversion to the new system name has not occurred within the current production system. Therefore, for simplicity, all references to TCBD are equivalent to PMCS references, and this SRS uses the PMCS designator from this point forward to capture the as-built SRS. This SRS is written at a high level and is intended to provide the design basis for the PMCS. The PMCS was first released as the electronic data repository for cost, schedule, and technical administrative baseline information for the TWRS Program. During its initial development, the PMCS was accepted by the customer, TWRS Business Management, with no formal documentation to capture the initial requirements.

  16. Software configuration management plan for the TWRS controlled baseline database system [TCBD

    International Nuclear Information System (INIS)

    Spencer, S.G.

    1998-01-01

    LHMC, TWRS Business Management Organization (BMO) is designated as system owner, operator, and maintenance authority. The TWRS BMO identified the need for the TCBD. The TWRS BMO users have established all requirements for the database and are responsible for maintaining database integrity and control (after the interface data has been received). Initial interface data control and integrity are maintained through functional and administrative processes and are the responsibility of the database owners who provide the data. The specific groups within the TWRS BMO affected by this plan are the Financial Management and TWRS Management Support Project, Master Planning, and Financial Control Integration and Reporting. The interfaces between these organizations follow the normal line management chain of command. The Master Planning Group is assigned responsibility for continued development and maintenance of the TCBD. This group maintains information that includes identification of requirements, and changes to those requirements, in a TCBD project file; it is responsible for the issuance, maintenance, and change authority of this SCMP. LHMC, TWRS TCBD users are designated as providing the project's requirement changes for implementation and as testing the TCBD during development. The Master Planning Group coordinates and monitors the users' requests for system requirements (new/existing) as well as beta and acceptance testing. Users are those individuals and organizations needing data or information from the TCBD and having both a need-to-know and the proper training and authority to access the database. Each user or user organization is required to comply with the established requirements and procedures governing the TCBD. Lockheed Martin Services, Inc. (LMSI) is designated the TCBD developer, maintainer, and custodian until acceptance and process testing of the system has been completed via the TWRS BMO. Once this occurs, the TCBD will be completed and

  17. Composable Mission Framework for Rapid End-to-End Mission Design and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation proposed here is the Composable Mission Framework (CMF), a model-based software framework that shall enable seamless continuity of mission design and...

  18. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    Science.gov (United States)

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of its complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which offers GFR estimation by the plasma sampling method as well as by SrCrM. We used Microsoft Windows as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access as the database tool to develop this software. Russell's formula is used for GFR calculation by the plasma sampling method; GFR calculations from serum creatinine use the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software to calculate GFR by various plasma sampling and blood-parameter methods, and a good system for storing raw and processed data for future analysis.
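
    The creatinine formulas named above are published equations; as an illustration of the arithmetic such software automates, here is a minimal Python sketch of the classic Cockcroft-Gault estimate (the function name, rounding, and example values are ours, not taken from the paper):

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Creatinine clearance (mL/min) by the classic Cockcroft-Gault equation."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl  # 0.85 correction for women

# Example: a 60-year-old woman, 70 kg, serum creatinine 1.0 mg/dL.
print(round(cockcroft_gault(60, 70, 1.0, female=True), 1))  # ~66.1 mL/min
```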

  19. A study on relational ENSDF databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Song Xiangxiang; Ye Weiguo; Liu Wenlong; Feng Yuqing; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Guo Zhiyu; Huang Xiaolong; Liu Tingjin; China Inst. of Atomic Energy, Beijing

    2007-01-01

    A relational ENSDF database and library software package has been designed and released. Using relational databases, object-oriented programming, and web-based technology, this software offers online data services from a centralized repository of data, including the international ENSDF files for nuclear structure and decay data. The software can easily reconstruct nuclear data in the original ENSDF format from the relational database. The computer programs providing database management and online data services via the Internet are based on the Linux implementation of PHP and the MySQL software, and are platform independent in a wider sense. (authors)
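
    The key capability described, reconstructing fixed-format ENSDF records from relational rows, can be sketched as follows. The table layout, column names, and output format here are hypothetical stand-ins (with SQLite standing in for the MySQL backend), not the authors' actual schema:

```python
import sqlite3  # SQLite stands in here for the MySQL backend described

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE level
                (nucid TEXT, energy_kev REAL, jpi TEXT, halflife TEXT)""")
conn.execute("INSERT INTO level VALUES ('60CO', 0.0, '5+', '5.2714 Y')")

# Re-emit card-style, fixed-width records from the relational rows.
for nucid, e, jpi, t in conn.execute(
        "SELECT nucid, energy_kev, jpi, halflife FROM level ORDER BY energy_kev"):
    print(f"{nucid:<5s}  L  {e:9.4f} KEV  {jpi:<6s} {t}")
```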

  20. Olap Cube Representation For Objectoriented Database

    OpenAIRE

    Vipin Saxena; Ajay Pratap

    2012-01-01

    In the current scenario, the size of databases in any organization is rapidly increasing, and with the evolution of the object-oriented approach, many software companies are converting older structured-approach software into object-oriented software. For such large amounts of data it is therefore necessary to study faster retrieval systems such as On-Line Analytical Processing (OLAP), a term introduced by E. F. Codd in 1993. The present paper is an attempt in t...

  1. Does filler database size influence identification accuracy?

    Science.gov (United States)

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor, filler database size, as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provides support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. My Career: Composer

    Science.gov (United States)

    Morganelli, Patrick

    2013-01-01

    In this article, the author talks about his career as a composer and offers some advice for aspiring composers. The author works as a composer in the movie industry, creating music that supports a film's story. Other composers work on television shows, and some do both television and film. The composer uses music to tell the audience what kind of…

  3. Software Application for Supporting the Education of Database Systems

    Science.gov (United States)

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  4. Design of database management system for 60Co container inspection system

    International Nuclear Information System (INIS)

    Liu Jinhui; Wu Zhifang

    2007-01-01

    The functions of the database management system were designed according to the features of the cobalt-60 container inspection system, and the corresponding software was constructed, covering database querying and searching. The database operation program is built on Microsoft SQL Server and Visual C++ under Windows 2000. The software implements database querying, image and graph display, statistics, report forms and their printing, interface design, etc. It is powerful and flexible in operation and information querying, and it has been successfully used in the real database management system of the cobalt-60 container inspection system. (authors)

  5. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This progress derives not only from the power of ribosome profiling itself but also from the extensive range of computational resources available for it. At present, however, a comprehensive review of these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Integrating query of relational and textual data in clinical databases: a case study.

    Science.gov (United States)

    Fisk, John M; Mutalik, Pradeep; Levin, Forrest W; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Results are relevance-ranked using either "total documents per patient" or "report type weighting." Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers.
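
    The IR features the authors list (Boolean search, stemming, proximity) map directly onto modern embedded full-text engines as well. As a self-contained illustration of those features, not the authors' actual IR+RDBMS stack, here is a sketch using SQLite's FTS5 extension (assumes an SQLite build with FTS5 enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table with Porter stemming, mimicking basic IR features.
conn.execute("CREATE VIRTUAL TABLE reports USING fts5(body, tokenize='porter')")
conn.executemany("INSERT INTO reports VALUES (?)", [
    ("chest radiograph shows clear lungs",),
    ("radiographic evidence of pneumonia in left lung",),
])

# Boolean AND plus stemming: 'radiograph' also matches 'radiographic'.
for (body,) in conn.execute(
        "SELECT body FROM reports WHERE reports MATCH 'radiograph AND lung'"):
    print(body)

# Proximity: NEAR(...) requires the terms within 5 tokens of each other.
for (body,) in conn.execute(
        "SELECT body FROM reports WHERE reports MATCH 'NEAR(pneumonia lung, 5)'"):
    print(body)
```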

  7. Fuel element database: developer handbook; Entwicklerhandbuch zur Brennelement-Datenbank

    Energy Technology Data Exchange (ETDEWEB)

    Dragicevic, M [Atominstitut der Oesterreichischen Universitaeten (Austria)

    2004-09-15

    The fuel element database developed for the Atomic Institute of the Austrian Universities is described. The software uses standards such as HTML, PHP and SQL. The standard installation is built from freely available software packages, such as the MySQL database, the PHP interpreter, the Apache web server from the Apache Software Foundation, and JavaScript. (nevyjel)

  8. Teaching Composing with an Identity as a Teacher-Composer

    Science.gov (United States)

    Francis, Jennie

    2012-01-01

    I enjoy composing and feel able to write songs that I like and which feel significant to me. This has not always been the case and the change had nothing to do with my school education or my degree. Composing at secondary school did not move beyond Bach and Handel pastiche. I did not take any composing courses during my degree. What did influence…

  9. Multibiodose radiation emergency triage categorization software.

    Science.gov (United States)

    Ainsbury, Elizabeth A; Barnard, Stephen; Barrios, Lleonard; Fattibene, Paola; de Gelder, Virginie; Gregoire, Eric; Lindholm, Carita; Lloyd, David; Nergaard, Inger; Rothkamm, Kai; Romm, Horst; Scherthan, Harry; Thierens, Hubert; Vandevoorde, Charlot; Woda, Clemens; Wojcik, Andrzej

    2014-07-01

    In this note, the authors describe the MULTIBIODOSE software, which has been created as part of the MULTIBIODOSE project. The software enables doses estimated by networks of laboratories, using up to five retrospective (biological and physical) assays, to be combined to give a single estimate of triage category for each individual potentially exposed to ionizing radiation in a large scale radiation accident or incident. The MULTIBIODOSE software has been created in Java. The usage of the software is based on the MULTIBIODOSE Guidance: the program creates a link to a single SQLite database for each incident, and the database is administered by the lead laboratory. The software has been tested with Java runtime environment 6 and 7 on a number of different Windows, Mac, and Linux systems, using data from a recent intercomparison exercise. The Java program MULTIBIODOSE_1.0.jar is freely available to download from http://www.multibiodose.eu/software or by contacting the software administrator: MULTIBIODOSE-software@gmx.com.

  10. Computerized nuclear material database management system for power reactors

    International Nuclear Information System (INIS)

    Cheng Binghao; Zhu Rongbao; Liu Daming; Cao Bin; Liu Ling; Tan Yajun; Jiang Jincai

    1994-01-01

    The software packages for nuclear material database management for power reactors are described. The database structure, data flow and model for management of the database are analysed. Also mentioned are the main functions and characteristics of the software packages, which have been successfully installed and used at both the Daya Bay Nuclear Power Plant and the Qinshan Nuclear Power Plant for the purpose of handling nuclear material databases automatically

  11. Composing and synchronizing real-time software through virtual platforms in vehicular systems

    NARCIS (Netherlands)

    Van Den Heuvel, M.M.H.P.

    2016-01-01

    This paper gives an overview of the challenges we faced when integrating software components on an electronic control unit (ECU) embedded in a car. The results show management of scarce ECU resources and a demonstration of temporal isolation between components in an industrial case study.

  12. CO2 line-mixing database and software update and its tests in the 2.1 μm and 4.3 μm regions

    International Nuclear Information System (INIS)

    Lamouroux, J.; Régalia, L.; Thomas, X.; Vander Auwera, J.; Gamache, R.R.; Hartmann, J.-M.

    2015-01-01

    An update of the former version of the database and software for the calculation of CO2-air absorption coefficients taking line-mixing into account [Lamouroux et al. J Quant Spectrosc Radiat Transf 2010;111:2321] is described. In this new edition, the data sets were constructed using parameters from the 2012 version of the HITRAN database and recent measurements of line-shape parameters. Among other improvements, speed-dependent profiles can now be used if line-mixing is treated within the first-order approximation. This new package is tested using laboratory spectra measured in the 2.1 μm and 4.3 μm spectral regions for various pressures, temperatures and CO2 concentration conditions. Despite improvements at 4.3 μm at room temperature, the conclusions on the quality of this update are more ambiguous at low temperature and in the 2.1 μm region. Further tests using laboratory and atmospheric spectra are thus required for the evaluation of the performance of this updated package. - Highlights: • High resolution infrared spectroscopy. • CO2 in air. • Updated tools. • Line mixing database and software
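
    For context, first-order line mixing is commonly written in the Rosenkranz form, in which each line j gains a mixing parameter Y_j that adds an asymmetric term to its Lorentzian shape. A standard statement in our own notation, consistent with the usual line-mixing literature but not reproduced from this paper:

```latex
% First-order (Rosenkranz) line mixing: S_j line intensity, \nu_j position,
% \gamma_j Lorentz half-width, Y_j first-order mixing parameter.
\alpha(\nu) = \frac{1}{\pi}\sum_j S_j\,
  \frac{\gamma_j + Y_j\,(\nu-\nu_j)}{(\nu-\nu_j)^2 + \gamma_j^2}
% Setting all Y_j = 0 recovers the ordinary sum of Lorentzian profiles.
```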

  13. Development of IAEA nuclear reaction databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Zerkin, V.; Trkov, A. [International Atomic Energy Agency, Dept. of Nuclear Sciences and Applications, Vienna (Austria)

    2008-07-01

    From mid-2004 onwards, the major nuclear reaction databases (EXFOR, CINDA and ENDF) and services (web and CD-ROM retrieval systems and specialized applications) have been functioning within a modern computing environment as multi-platform software, working under several operating systems with relational databases. Subsequent work at the IAEA has focused on three areas of development: revision and extension of the contents of the databases; extension and improvement of the functionality and integrity of the retrieval systems; and development of software for database maintenance and system deployment. (authors)

  14. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data-intensive science is offering new challenges and opportunities for information technology, and for traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to those of traditional filesystems while retaining the advantages of the Oracle database.
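
    As a concrete illustration of the SecureFiles feature described, the DDL below declares a LOB column stored as a SecureFile with compression and deduplication. The table, schema, and connection details are hypothetical, and the sketch assumes the python-oracledb driver and a reachable Oracle 11g-or-later instance:

```python
import oracledb  # assumption: python-oracledb driver is installed

# Hypothetical connection parameters, not from the paper.
conn = oracledb.connect(user="sci", password="...", dsn="dbhost/orclpdb")
cur = conn.cursor()

# Unstructured file content lives in a SecureFiles LOB; the scalar
# metadata sits beside it in ordinary relational columns.
cur.execute("""
    CREATE TABLE level0_files (
        file_id   NUMBER PRIMARY KEY,
        acquired  DATE,
        payload   BLOB
    ) LOB (payload) STORE AS SECUREFILE (COMPRESS DEDUPLICATE)
""")
conn.commit()
```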

  15. Medical Database for the Atomic-Bomb Survivors at Nagasaki University

    OpenAIRE

    Mori, Hiroyuki; Mine, Mariko; Kondo, Hisayoshi; Okumura, Yutaka

    1992-01-01

    The Scientific Data Center for Atomic-Bomb Disasters at Nagasaki University was established in 1974. The database of atomic-bomb survivors has been in operation since 1977. The database is composed of the following six physical databases: (1) fundamental information database, (2) Atomic-Bomb Hospital database, (3) pathological database, (4) household reconstruction database, (5) second generation database, and (6) address database. We review the current contents of the database for its further appli...

  16. Database in Artificial Intelligence.

    Science.gov (United States)

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  17. Content And Multimedia Database Management Systems

    NARCIS (Netherlands)

    de Vries, A.P.

    1999-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data

  18. In Search of the Philosopher's Stone: Simulation Composability Versus Component-Based Software Design

    National Research Council Canada - National Science Library

    Bartholet, Robert G; Brogan, David C; Reynolds, Jr., Paul F; Carnahan, Joseph C

    2004-01-01

    The simulation community and the software engineering community are actively conducting research on technology that will make it possible to easily build complex systems by combining existing components...

  19. Software for radiation protection

    International Nuclear Information System (INIS)

    Graffunder, H.

    2002-01-01

    The software products presented are universally usable programs for radiation protection. The systems were designed to establish a comprehensive database specific to radiation protection and, on this basis, to model radiation protection subjects in programs. Development initially focused on the creation of the database: each software product accesses the same nuclide-specific data, so that input errors and differences in spelling are excluded from the outset. This makes the products compatible with one another and able to exchange data with one another. The software products are modular in design. Functions recurring in radiation protection are always treated the same way in different programs, and are also represented the same way on the program surface. The recognition effect makes it easy for users to become familiar with the products quickly. All software products are written in German and are tailored to the administrative needs and codes and regulations in Germany and Switzerland. (orig.) [de

  20. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations, processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives, and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data from both data centres using a unique web portal. This website is composed of different modules: - Registration: forms to register, and to read and sign the data use charter when a user visits for the first time; - Data access interface: a friendly tool allowing the user to build a data extraction request by selecting various criteria like location, time, parameters... The request can
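
    Because the satellite products are stored as CF-convention NetCDF, any standard reader can discover coordinates and units from the file's attributes. A minimal sketch, with a hypothetical file name, variable names, and (time, lat, lon) dimensionality, assuming the netCDF4 package:

```python
from netCDF4 import Dataset  # assumption: netCDF4 package installed

# Hypothetical CF-compliant satellite product re-gridded by the database.
with Dataset("amma_product_demo.nc") as nc:
    lat = nc.variables["latitude"][:]
    lon = nc.variables["longitude"][:]
    rain = nc.variables["precipitation"]   # assumed (time, lat, lon)
    # CF attributes make the field self-describing.
    print(rain.getncattr("units"), rain.getncattr("standard_name"))
    print(rain[0, :, :].mean())            # first time step, area mean
```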

  1. Content independence in multimedia databases

    NARCIS (Netherlands)

    A.P. de Vries (Arjen)

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for

  2. Development methodology for the software life cycle process of the safety software

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. H.; Lee, S. S. [BNF Technology, Taejon (Korea, Republic of); Cha, K. H.; Lee, C. S.; Kwon, K. C.; Han, H. B. [KAERI, Taejon (Korea, Republic of)

    2002-05-01

    A methodology for developing software life cycle processes (SLCP) is proposed for the successful development of the digital safety-critical Engineered Safety Features - Component Control System (ESF-CCS). The software life cycle model selected is a hybrid model mixing waterfall, prototyping, and spiral models, and is composed of two stages: the development stage of the ESF-CCS prototype, and that of the ESF-CCS itself. To produce the software life cycle (SLC) for the development of the digital reactor safety system, the activities referenced in IEEE Std. 1074-1997 are mapped onto the hybrid model. The SLCP is established after the available Organizational Process Assets (OPAs) are applied to the SLC activities and the known constraints are reconciled. The established SLCP describes well the software life cycle activities that are provided to the Regulatory Authority.

  3. Development methodology for the software life cycle process of the safety software

    International Nuclear Information System (INIS)

    Kim, D. H.; Lee, S. S.; Cha, K. H.; Lee, C. S.; Kwon, K. C.; Han, H. B.

    2002-01-01

    A methodology for developing software life cycle processes (SLCP) is proposed for the successful development of the digital safety-critical Engineered Safety Features - Component Control System (ESF-CCS). The software life cycle model selected is a hybrid model mixing waterfall, prototyping, and spiral models, and is composed of two stages: the development stage of the ESF-CCS prototype, and that of the ESF-CCS itself. To produce the software life cycle (SLC) for the development of the digital reactor safety system, the activities referenced in IEEE Std. 1074-1997 are mapped onto the hybrid model. The SLCP is established after the available Organizational Process Assets (OPAs) are applied to the SLC activities and the known constraints are reconciled. The established SLCP describes well the software life cycle activities that are provided to the Regulatory Authority

  4. Thyroid uptake software

    International Nuclear Information System (INIS)

    Alonso, Dolores; Arista, Eduardo

    2003-01-01

    The DETEC-PC software was developed as a complement to a measurement system (hardware) able to perform iodine thyroid uptake studies. The software was designed according to the principles of object-oriented programming, using the C++ language. It automatically sets spectrometric measurement parameters and, besides patient measurement, also performs statistical analysis of a batch of samples. It possesses a PARADOX database with all information on measured patients, and a help system covering the system options and the medical concepts related to the thyroid uptake study

  5. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  6. Development of a Nevada Statewide Database for Safety Analyst Software

    Science.gov (United States)

    2017-02-02

    Safety Analyst is a software package developed by the Federal Highway Administration (FHWA) and twenty-seven participating state and local agencies including the Nevada Department of Transportation (NDOT). The software package implemented many of the...

  7. Composing and Arranging Careers

    Science.gov (United States)

    Schwartz, Elliott; And Others

    1977-01-01

    With the inspiration, the originality, the skill and craftsmanship, the business acumen, the patience, and the luck, it's possible to become a classical composer, pop/rock/country composer, jingle composer, or educational composer. Describes these careers. (Editor/RK)

  8. An online database for plant image analysis software tools

    OpenAIRE

    Lobet, Guillaume; Draye, Xavier; Périlleux, Claire

    2013-01-01

    Background: Recent years have seen an increase in methods for plant phenotyping using image analyses. These methods require new software solutions for data extraction and treatment. These solutions are instrumental in supporting various research pipelines, ranging from the localisation of cellular compounds to the quantification of tree canopies. However, due to the variety of existing tools and the lack of a central repository, it is challenging for researchers to identify the software that is...

  9. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

    The introduction of database design CASE technologies into the educational process requires significant costs for the institution to purchase the software. A possible solution could be the use of free software analogues. At the same time, this kind of substitution should be based on an even-handed comparison of the functional characteristics and operational features of these programs. The purpose of the article is a review of free and non-profit CASE tools for database design, together with their classification on the basis of an analysis of functionality. Materials from the official websites of the tool developers were used in writing this article. Evaluation of the functional characteristics of CASE tools for database design was made exclusively empirically, through direct work with the software products. The analysis of tool functionality allows two categories of CASE tools for database design to be distinguished. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: management of connections to database servers; visual tools to create and modify database objects (tables, views, triggers, procedures); the ability to enter and edit data in table mode; user and privilege management tools; an SQL code editor; and means of data export/import. CASE systems in the first category can be used to design and develop simple databases and manage data, as well as to administer database servers. A distinctive feature of the second category of CASE tools for database design (full-featured systems) is the presence of a visual designer that allows the database model to be constructed and the database to be created automatically on the server from that model. CASE systems in this category can be used for the design and development of databases of any structural complexity, as well as for database server administration. The article concluded that the

  10. SIAPEM - Brazilian Software Database for Multiple Sclerosis ...

    African Journals Online (AJOL)

    Results: It runs on the ACCESS 2000 program, and is a software database that provides immediate results from the entered data, regardless of the number of patients, with a simple and practical design. Conclusions: It is a process that saves time and makes it possible to maintain a ...

  11. Portable database driven control system for SPEAR

    International Nuclear Information System (INIS)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database, and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; and the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig
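
    The abstract does not reproduce the design-file format, so the sketch below invents a toy line-oriented format purely to illustrate the pattern it describes: one human-readable file generating both the database structure and its geometric contents (all names and fields are hypothetical):

```python
import sqlite3

# Toy design file: element name, type, position (m), length (m).
DESIGN = """\
B1   BEND  1.20  2.50
Q1   QUAD  4.00  0.60
"""

conn = sqlite3.connect(":memory:")
# The program derives the database structure from the design file...
conn.execute("""CREATE TABLE element
                (name TEXT PRIMARY KEY, kind TEXT, pos_m REAL, len_m REAL)""")
# ...and loads the geometric information the same file documents for people.
for line in DESIGN.strip().splitlines():
    name, kind, pos, length = line.split()
    conn.execute("INSERT INTO element VALUES (?, ?, ?, ?)",
                 (name, kind, float(pos), float(length)))

# Application processes then refer only to the database, in generic terms.
print(conn.execute("SELECT name, pos_m FROM element ORDER BY pos_m").fetchall())
```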

  12. Portable database driven control system for SPEAR

    Energy Technology Data Exchange (ETDEWEB)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database, and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; and the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig.

  13. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
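
    Because the map interface is an OGC-compliant WMS, any client can request rendered glacier layers with a standard GetMap URL. A sketch of building one follows; the host name and layer name are placeholders, not the real GLIMS endpoint:

```python
from urllib.parse import urlencode

# Standard OGC WMS 1.1.1 GetMap parameters; host and layer are placeholders.
params = {
    "service": "WMS", "version": "1.1.1", "request": "GetMap",
    "layers": "glacier_outlines",          # hypothetical layer name
    "srs": "EPSG:4326",
    "bbox": "69.0,27.5,71.5,28.5",         # lon_min,lat_min,lon_max,lat_max
    "width": 800, "height": 400,
    "format": "image/png",
}
print("http://glims.example.org/wms?" + urlencode(params))
```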

  14. Application of Software Safety Analysis Methods

    International Nuclear Information System (INIS)

    Park, G. Y.; Hur, S.; Cheon, S. W.; Kim, D. H.; Lee, D. Y.; Kwon, K. C.; Lee, S. J.; Koo, Y. H.

    2009-01-01

    A fully digitalized reactor protection system, which is called the IDiPS-RPS, was developed through the KNICS project. The IDiPS-RPS has four redundant and separated channels. Each channel is mainly composed of a group of bistable processors which redundantly compare process variables with their corresponding setpoints and a group of coincidence processors that generate a final trip signal when a trip condition is satisfied. Each channel also contains a test processor called the ATIP and a display and command processor called the COM. All the functions were implemented in software. During the development of the safety software, various software safety analysis methods were applied, in parallel to the verification and validation (V and V) activities, along the software development life cycle. The software safety analysis methods employed were the software hazard and operability (Software HAZOP) study, the software fault tree analysis (Software FTA), and the software failure modes and effects analysis (Software FMEA)

  15. Software Engineering Improvement Plan

    Science.gov (United States)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  16. Interface-based software integration

    Directory of Open Access Journals (Sweden)

    Aziz Ahmad Rais

    2016-07-01

    Enterprise architecture frameworks define the goals of enterprise architecture in order to make business processes and IT operations more effective, and to reduce the risk of future investments. These frameworks offer different architecture development methods that help in building enterprise architecture. In practice, the larger organizations become, the larger their enterprise architecture and IT become, which leads to increasing complexity in developing and maintaining the enterprise architecture. Application software architecture is one type of architecture that, along with business architecture, data architecture and technology architecture, composes enterprise architecture. From the perspective of integration, enterprise architecture can be considered a system of interaction between multiple examples of application software. Effective software integration is therefore a very important basis for the future success of the enterprise architecture in question. This article presents interface-based integration practice in order to help simplify the process of building such a software integration system. The main goal of interface-based software integration is to solve problems that may arise with software integration requirements and with developing software integration architecture.
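
    The practice the article advocates reduces to a small pattern: consumers depend on a declared contract, and each concrete system is wrapped by an adapter behind it. A minimal sketch, with all names illustrative rather than drawn from the article:

```python
from abc import ABC, abstractmethod

class CustomerDirectory(ABC):
    """Integration contract: consumers depend only on this interface."""
    @abstractmethod
    def lookup(self, customer_id: str) -> dict: ...

class CrmAdapter(CustomerDirectory):
    """Adapter hiding one concrete back-end behind the shared interface."""
    def lookup(self, customer_id: str) -> dict:
        # Real code would call the CRM system's API here.
        return {"id": customer_id, "source": "CRM"}

def billing_report(directory: CustomerDirectory, customer_id: str) -> str:
    # The consumer never references the concrete system, so back-ends
    # can be swapped without touching this code.
    return f"bill for {directory.lookup(customer_id)['id']}"

print(billing_report(CrmAdapter(), "C-42"))
```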

  17. KVANE - a Kvanefjeld drill core database

    International Nuclear Information System (INIS)

    Lund Clausen, F.

    1980-01-01

    A database, KVANE, containing all drill core information from the drilling programmes carried out in 1958, 1962, 1969 and 1977 at the uranium deposit in Kvanefjeld, Southwest Greenland, has been made. The application software Statistical Analysis System (SAS) was used as the programming tool. It is shown how this software, usually used for other purposes, satisfies a demand for easy storage of large amounts of data. The paper describes how KVANE was made and organized and how data can be picked out of the database. A short introduction to the SAS system is also given. The database has been implemented at the Northern European University Computing Center (NEUCC) at the Technical University of Denmark. (author)

  18. Hospital Management Software Development

    OpenAIRE

    sobogunGod, olawale

    2012-01-01

    The purpose of this thesis was to implement hospital management software suitable for small private hospitals in Nigeria, especially those that use a file-based system for storing information rather than storing it in a more efficient and safer environment such as databases or spreadsheet software like Excel. The software developed within this thesis project was specifically designed for the Rainbow specialist hospital, which is based in Lagos, the commercial neurological cente...

  19. Database for waste glass composition and properties

    International Nuclear Information System (INIS)

    Peters, R.D.; Chapman, C.C.; Mendel, J.E.; Williams, C.G.

    1993-09-01

    A database of waste glass composition and properties, called the PNL Waste Glass Database, has been developed. The sources of data are published literature and files from projects funded by the US Department of Energy. The glass data have been organized into categories with corresponding data files: glass chemical composition, thermal properties, leaching data, waste composition, glass radionuclide composition, and crystallinity data. The data files are compatible with commercial database software. Glass compositions are linked to properties across the various files using a unique glass code. Programs have been written in the database software language to permit searches and retrievals of data. The database provides easy access to the vast quantities of glass compositions and properties that have been studied. It will be a tool for researchers and others investigating vitrification and glass waste forms
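
    The linking scheme described, a unique glass code keying every category file, is in effect a relational join. A minimal sketch with invented table and column names, not the database's actual layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE composition (glass_code TEXT, oxide TEXT, wt_pct REAL);
    CREATE TABLE leaching    (glass_code TEXT, element TEXT, rate_g_m2_d REAL);
    INSERT INTO composition VALUES ('G-001', 'SiO2', 45.0);
    INSERT INTO leaching    VALUES ('G-001', 'B', 0.35);
""")

# The unique glass code links a composition record to measured properties.
query = """
    SELECT c.glass_code, c.oxide, c.wt_pct, l.element, l.rate_g_m2_d
    FROM composition c JOIN leaching l USING (glass_code)
"""
print(conn.execute(query).fetchall())
```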

  20. NNDC database migration project

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, Thomas W; Dunford, Charles L [U.S. Department of Energy, Brookhaven Science Associates (United States)

    2004-03-01

    NNDC database migration was necessary to replace obsolete hardware and software, to be compatible with the industry standard in relational databases (mature software, a large base of supporting software for administration and dissemination, and replication and synchronization tools), and to improve user access in terms of interface and speed. The relational database management system (RDBMS) consists of a Sybase Adaptive Server Enterprise (ASE), the Structured Query Language (SQL), and administrative tools written in Java; the content is relatively easy to move between different RDB systems (e.g., MySQL, MS SQL-Server, or MS Access), and Linux or UNIX platforms can be used. The existing ENSDF datasets are often very large and will need to be reworked, and both the CRP (adopted) and CRP (Budapest) datasets give elemental cross sections (not relative Iγ) in the RI field, so it is not immediately obvious which of the old values has been changed. But primary and secondary intensities are now available on the same scale, and the intensity normalization has been done for us. We will gain access to a large volume of data from Budapest, and some of those gamma-ray intensity and energy data will be superior to what we already have.

  1. Using Geocoded Databases in Teaching Urban Historical Geography.

    Science.gov (United States)

    Miller, Roger P.

    1986-01-01

    Provides information regarding hardware and software requirements for using geocoded databases in urban historical geography. Reviews 11 IBM and Apple Macintosh database programs and describes the pen plotter and digitizing table interface used with the databases. (JDH)

  2. Development of a voice database to aid children with hearing impairments

    International Nuclear Information System (INIS)

    Kuzman, M G; Agüero, P D; Tulli, J C; Gonzalez, E L; Cervellini, M P; Uriz, A J

    2011-01-01

    In the development of software for voice analysis or training for people with hearing impairments, a database of sounds of properly pronounced words is of paramount importance. This paper shows the advantage of building one's own voice database, rather than using databases from other countries, even those in the same language, when developing speech training software aimed at people with hearing impairments. This database will be used by software developers at the School of Engineering of Mar del Plata National University.

  3. Auditing the Functional Part of the CAS Software

    Directory of Open Access Journals (Sweden)

    Adamyk Oksana V.

    2017-11-01

    The article is aimed at determining the order and methodology of auditing the functional component of the software of a computer accounting system (CAS). It has been found that software auditing should be performed separately for each of its components. The components of the functional part of the CAS software are the database management system (DBMS) and the application software supporting accounting automation. The first component is audited using techniques such as general evaluation and subject checks of the embedded information-processing algorithms. Auditing the client software algorithms is carried out by means of the control data method, which reduces to procedures such as creating a separate database of test data with imaginary objects and processing it with the client program, as well as introducing imaginary objects (employees, creditors, material values) into a copy of the real database and generating reports from it. Not only the current methods of calculation or evaluation of accounting objects, but all of the software, are subject to mandatory verification. This will avoid errors if the enterprise accounting policy changes.
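
    A minimal sketch of the control data method described above: seed a test database with imaginary objects and check the program's result against an independently computed expectation. The payroll rule and all names are invented for illustration, not taken from the article:

```python
import sqlite3

def payroll_total(conn: sqlite3.Connection) -> float:
    """Stand-in for the client program's embedded algorithm under audit."""
    return conn.execute("SELECT SUM(salary) FROM employees").fetchone()[0]

# Control data: a test database populated with imaginary employees.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Imaginary A", 1000.0), ("Imaginary B", 2500.0)])

expected = 3500.0  # computed by the auditor, independently of the program
assert payroll_total(conn) == expected, "embedded algorithm disagrees"
print("control data check passed")
```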

  4. Toward automating the database design process

    International Nuclear Information System (INIS)

    Asprey, P.L.

    1979-01-01

    One organization's approach to designing complex, interrelated databases is described. The problems encountered and the techniques developed are discussed. A set of software tools to aid the designer and to produce an initial database design directly is presented. 5 figures

  5. Integrated conception of hardware/software mixed systems used in nuclear instrumentation

    International Nuclear Information System (INIS)

    Dias, Ailton F.; Sorel, Yves; Akil, Mohamed

    2002-01-01

    Hardware/software codesign carries out the design of systems composed of a hardware portion, with specific components, and a software portion, with a microprocessor-based architecture. This paper describes the Algorithm Architecture Adequation (AAA) design methodology - originally oriented to programmable multicomponent architectures - its extension to reconfigurable circuits, and its application to the design and development of nuclear instrumentation systems composed of programmable and configurable circuits. The AAA methodology uses a unified model, based on graph theory, to describe the algorithm, the architecture, and the implementation. The great advantage of the AAA methodology is the use of the same model from specification through to implementation of hardware/software systems, reducing complexity and design time. (author)

  6. MetaboSearch: tool for mass-based metabolite identification using multiple databases.

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    Searching metabolites against databases according to their masses is often the first step in metabolite identification for a mass spectrometry-based untargeted metabolomics study. Major metabolite databases include the Human Metabolome Database (HMDB), the Madison Metabolomics Consortium Database (MMCD), Metlin, and LIPID MAPS. Since each of these databases covers only a fraction of the metabolome, integration of the search results from these databases is expected to yield more comprehensive coverage. However, manually combining multiple search results is generally difficult when identification of hundreds of metabolites is desired. We have implemented a web-based software tool that enables simultaneous mass-based search against the four major databases and integration of the results. In addition, more complete chemical identifier information for the metabolites is retrieved by cross-referencing multiple databases. The search results are merged based on IUPAC International Chemical Identifier (InChI) keys. Besides a simple list of m/z values, the software can accept ion annotation information as input for enhanced metabolite identification. The performance of the software is demonstrated on mass spectrometry data acquired in both positive and negative ionization modes. Compared with search results from individual databases, MetaboSearch provides better coverage of the metabolome and more complete chemical identifier information. The software tool is available at http://omics.georgetown.edu/MetaboSearch.html.
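
    Merging hits from multiple databases on a shared InChIKey, as the tool does, amounts to an outer join. A sketch with made-up records, assuming pandas; the column names are invented:

```python
import pandas as pd

# Made-up hits from two databases, keyed by InChIKey.
hmdb = pd.DataFrame({
    "inchikey": ["WQZGKKKJIJFFOK-GASJEMHNSA-N"],
    "hmdb_name": ["D-Glucose"],
})
metlin = pd.DataFrame({
    "inchikey": ["WQZGKKKJIJFFOK-GASJEMHNSA-N", "XLYOFNOQVPJJNP-UHFFFAOYSA-N"],
    "metlin_id": [181, 9],
})

# An outer join keeps hits found in only one source; shared keys line up.
merged = hmdb.merge(metlin, on="inchikey", how="outer")
print(merged)
```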

  7. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a

  8. 73 Industrial symbiosis software: software and method to facilitate industrial symbiosis

    Directory of Open Access Journals (Sweden)

    Immanuel Geesing

    2017-12-01

    Companies today handle their materials in a mostly linear way. Raw materials come in, are processed into products, and the resulting waste is carried off. Within a single company this looks logical, but in a system of several companies it becomes clear that this can be done more efficiently. If the waste of one company were used as a raw material by another company, industrial symbiosis would arise. As a result, fewer raw materials are consumed and usable residual materials are not wasted. To help companies find partners for such exchanges, the software InduSym has been developed. With the software, companies can enter their raw materials and residual materials into a database. An algorithm searches this database and presents, in a report, the opportunities for a symbiotic exchange of residual streams.
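
    The matching step described, pairing one company's residual stream with another company's input need, can be sketched as a simple search over the database records; the companies and materials below are invented:

```python
# Invented example records: (company, material) offers and needs.
waste_offers = [("BrewCo", "spent grain"), ("SawMill", "wood chips")]
input_needs = [("FarmCo", "spent grain"), ("PanelWorks", "wood chips")]

# Naive matching: a symbiosis candidate is any offer whose material
# equals some other company's required input.
matches = [(seller, buyer, material)
           for seller, material in waste_offers
           for buyer, needed in input_needs
           if material == needed and seller != buyer]

for seller, buyer, material in matches:
    print(f"{seller} -> {buyer}: {material}")
```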

  9. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    Science.gov (United States)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development
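
    As an illustration of the data-driven style described, where sequences and transitions live in configuration data rather than compiled code, here is a toy table-driven sequencer. The activities, parameters, and table format are invented for illustration, not Orion's actual configuration:

```python
# Invented configuration data: activity -> (next activity, guard parameters).
SEQUENCE_TABLE = {
    "coast":        ("deorbit_burn", {"min_altitude_km": 400.0}),
    "deorbit_burn": ("entry",        {"target_delta_v_mps": 100.0}),
    "entry":        (None,           {}),
}

def run_sequence(start: str) -> None:
    """Step through the configured activities; editing SEQUENCE_TABLE
    changes behavior with no recompilation of this code."""
    activity = start
    while activity is not None:
        nxt, params = SEQUENCE_TABLE[activity]
        print(f"activity={activity} params={params}")
        activity = nxt

run_sequence("coast")
```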

  10. Software: our quest for excellence. Honoring 50 years of software history, progress, and process

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-06-01

    The Software Quality Forum was established by the Software Quality Assurance (SQA) Subcommittee, which serves as a technical advisory group on software engineering and quality initiatives and issues for DOE's quality managers. The forum serves as an opportunity for all those involved in implementing SQA programs to meet and share ideas and concerns. Participation from managers, quality engineers, and software professionals provides an ideal environment for identifying and discussing issues and concerns. The interaction provided by the forum contributes to the realization of a shared goal: high-quality software products. Topics include: testing, software measurement, software surety, software reliability, SQA practices, assessments, software process improvement, certification and licensing of software professionals, CASE tools, software project management, inspections, and management's role in ensuring SQA. The bulk of this document consists of vugraphs. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  11. A portable database-driven control system for SPEAR

    International Nuclear Information System (INIS)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-01-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database, and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system.
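
    The design-file idea, one text file read by both people and programs, can be illustrated with a small sketch. The file format below is invented for illustration and is not the actual SPEAR design-file syntax.

        # Sketch of a "design file read by both people and computer":
        # a human-readable lattice description is parsed into the database
        # structure that all other software then refers to.
        DESIGN_FILE = """\
        # name   type        s_pos[m]  length[m]
        QF1      quadrupole  1.20      0.35
        BND1     dipole      2.10      1.10
        QD1      quadrupole  3.60      0.35
        """

        def build_database(text):
            db = {}
            for line in text.splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue          # comment lines document the ring for people
                name, kind, s_pos, length = line.split()
                db[name] = {"type": kind, "s": float(s_pos),
                            "length": float(length)}
            return db

        db = build_database(DESIGN_FILE)
        print(db["BND1"])   # {'type': 'dipole', 's': 2.1, 'length': 1.1}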

  12. Proceedings of the Twenty-Fourth Annual Software Engineering Workshop

    Science.gov (United States)

    2000-01-01

    On December 1 and 2, the Software Engineering Laboratory (SEL), a consortium composed of NASA/Goddard, the University of Maryland, and CSC, held the 24th Software Engineering Workshop (SEW), the last of the millennium. Approximately 240 people attended the 2-day workshop. Day 1 was composed of four sessions: International Influence of the Software Engineering Laboratory; Object Oriented Testing and Reading; Software Process Improvement; and Space Software. For the first session, three internationally known software process experts discussed the influence of the SEL with respect to software engineering research. In the Space Software session, prominent representatives from three different NASA sites (GSFC's Marti Szczur, the Jet Propulsion Laboratory's Rick Doyle, and the Ames Research Center IV&V Facility's Lou Blazy) discussed the future of space software in their respective centers. At the end of the first day, the SEW sponsored a reception at the GSFC Visitors' Center. Day 2 also provided four sessions: Using the Experience Factory; a panel discussion entitled "Software Past, Present, and Future: Views from Government, Industry, and Academia"; Inspections; and COTS. The day started with an excellent talk by CSC's Frank McGarry on "Attaining Level 5 in CMM Process Maturity." Session 2, the panel discussion on software, featured NASA Chief Information Officer Lee Holcomb (Government), our own Jerry Page (Industry), and Mike Evangelist of the National Science Foundation (Academia). Each presented his perspective on the most important developments in software in the past 10 years, in the present, and in the future.

  13. Applying AN Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at Cebaf.

    Science.gov (United States)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  14. SIGKit: a New Data-based Software for Learning Introductory Geophysics

    Science.gov (United States)

    Zhang, Y.; Kruse, S.; George, O.; Esmaeili, S.; Papadimitrios, K. S.; Bank, C. G.; Cadmus, A.; Kenneally, N.; Patton, K.; Brusher, J.

    2016-12-01

    Students of diverse academic backgrounds take introductory geophysics courses to learn the theory of a variety of measurement and analysis methods, with the expectation of being able to apply their basic knowledge to real data. Ideally, such data are collected in field courses and also used in lecture-based courses, because they provide a critical context for better learning and understanding of geophysical methods. Each method requires a separate software package for the data processing steps, and the complexity and variety of professional software make the path from data processing to data interpretation a strenuous learning process for students and a challenging teaching task for instructors. SIGKit (Student Investigation of Geophysics Toolkit), being developed as a collaboration between the University of South Florida, the University of Toronto, and MathWorks, intends to address these shortcomings by showing the most essential processing steps and allowing students to visualize the underlying physics of the various methods. It is based on MATLAB software, offered as an easy-to-use graphical user interface, and packaged so it can run as an executable in the classroom and the field, even on computers without MATLAB licenses. An evaluation of the software based on student feedback from focus-group interviews and think-aloud observations helps drive its development and refinement. The toolkit provides a logical gateway into the more sophisticated and costly software students will encounter later in their training and careers by combining essential visualization, modeling, processing, and analysis steps for seismic, GPR, magnetics, gravity, resistivity, and electromagnetic data.

  15. Requirements for the next generation of nuclear databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Pronyaev, Vladimir; Zerkin, Viktor; Muir, Douglas [International Atomic Energy Agency, Nuclear Data Section, Vienna (Austria); Winchell, David; Arcilla, Ramon [Brookhaven National Laboratory, National Nuclear Data Center, Upton, NY (United States)

    2002-08-01

    The use of relational database technology and general requirements for the next generation of nuclear databases and services are discussed. These requirements take into account an increased number of co-operating data centres working on diverse hardware and software platforms and users with different data-access capabilities. It is argued that the introduction of programming standards will allow the development of nuclear databases and data retrieval tools in a heterogeneous hardware and software environment. The functionality of this approach was tested with full-scale nuclear databases installed on different platforms having different operating and database management systems. User access through local network, internet, or CD-ROM has been investigated. (author)

  16. Composing Models of Geographic Physical Processes

    Science.gov (United States)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central for geographic information science; yet geographic information systems (GIS) lack capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is shown by means of an example.
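
    As the abstract notes, a physical process expressed as a partial differential equation has an equivalent partial difference equation that is directly computable. A generic illustration (not the paper's process description language itself) is one-dimensional diffusion, where du/dt = D * d2u/dx2 becomes u[i,t+1] = u[i,t] + r*(u[i+1,t] - 2*u[i,t] + u[i-1,t]):

        # One-dimensional diffusion as a partial difference equation;
        # a generic illustration of the PDE/difference-equation equivalence.
        import numpy as np

        def diffuse(u, r=0.2, steps=50):
            """Explicit update of the interior cells (r < 0.5 for stability)."""
            u = u.astype(float)
            for _ in range(steps):
                u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
            return u

        profile = np.zeros(21)
        profile[10] = 1.0            # point source in the middle
        print(diffuse(profile).round(3))   # the source spreads out over time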

  17. Software quality: Process or people

    Science.gov (United States)

    Palmer, Regina; Labaugh, Modenna

    1993-01-01

    This paper will present data related to software development processes and personnel involvement from the perspective of software quality assurance. We examine eight years of data collected from six projects. Data collected varied by project but usually included defect and fault density, with limited use of code metrics, schedule adherence, and budget growth information. The data are a blend of AFSCP 800-14 and productivity measures suggested in Software Metrics: A Practitioner's Guide to Improved Product Development. A software quality assurance database tool, SQUID, was used to store and tabulate the data.

  18. CEBAF beam viewer imaging software

    International Nuclear Information System (INIS)

    Bowling, B.A.; McDowell, C.

    1993-01-01

    This paper discusses the various software used in the analysis of beam viewer images at CEBAF. This software, developed at CEBAF, includes a three-dimensional viewscreen calibration code which takes into account such factors as multiple camera/viewscreen rotations and perspective imaging, and maintains a calibration database for each unit. Additional software allows single-button beam spot detection, with determination of beam location, width, and quality, in less than three seconds. Software has also been implemented to assist in the determination of proper chopper RF control parameters from digitized chopper circles, providing excellent results.

  19. Structure and software tools of AIDA.

    Science.gov (United States)

    Duisterhout, J S; Franken, B; Witte, F

    1987-01-01

    AIDA consists of a set of software tools to allow for fast development and easy-to-maintain Medical Information Systems. AIDA supports all aspects of such a system both during development and operation. It contains tools to build and maintain forms for interactive data entry and on-line input validation, a database management system including a data dictionary and a set of run-time routines for database access, and routines for querying the database and output formatting. Unlike an application generator, the user of AIDA may select parts of the tools to fulfill his needs and program other subsystems not developed with AIDA. The AIDA software uses as host language the ANSI-standard programming language MUMPS, an interpreted language embedded in an integrated database and programming environment. This greatly facilitates the portability of AIDA applications. The database facilities supported by AIDA are based on a relational data model. This data model is built on top of the MUMPS database, the so-called global structure. This relational model overcomes the restrictions of the global structure regarding string length. The global structure is especially powerful for sorting purposes. Using MUMPS as a host language allows the user an easy interface between user-defined data validation checks or other user-defined code and the AIDA tools. AIDA has been designed primarily for prototyping and for the construction of Medical Information Systems in a research environment which requires a flexible approach. The prototyping facility of AIDA operates independently of terminal type and is even, to a great extent, multilingual. Most of these features are table-driven; this allows on-line changes in the use of terminal type and language, but also causes overhead. AIDA has a set of optimizing tools by which it is possible to build faster, but (of course) less flexible, code from these table definitions. By separating the AIDA software in a source and a run-time version, one is able to write

  20. PostgreSQL database performance optimization

    OpenAIRE

    Wang, Qiang

    2011-01-01

    The thesis was requested by Marlevo Software Oy for a general description of the PostgreSQL database and its performance optimization techniques. Its purpose was to help new PostgreSQL users quickly understand the system and to assist DBAs in improving database performance. The thesis was divided into two parts. The first part described PostgreSQL database optimization techniques in theory. In addition, popular tools were also introduced. This part was based on PostgreSQL documentation, r...

  1. Software for creating quality control database in diagnostic radiology

    International Nuclear Information System (INIS)

    Stoeva, M.; Spassov, G.; Tabakov, S.

    2000-01-01

    The paper describes a PC-based program with a database for quality control (QC). It keeps information about all surveyed equipment and measured parameters. The first function of the program is to extract information from old (existing) MS Excel spreadsheets with QC surveys. The second function is used for input of measurements, which are automatically organized in MS Excel spreadsheets and built into the database. The spreadsheets are based on the protocols described in the EMERALD Training Scheme. In addition, the program can produce statistics of all measured parameters, both in absolute terms and over time.
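
    A rough sketch of the first function, harvesting measurements from existing spreadsheets into a database and computing statistics, might look as follows. It assumes pandas and SQLite, and the column names ("parameter", "value") are placeholders rather than the actual EMERALD protocol layout.

        # Sketch of pulling QC measurements out of existing spreadsheets into
        # a database for statistics (assumed sheet layout, not the EMERALD one).
        import sqlite3
        import pandas as pd

        def import_survey(xls_path, db_path="qc.sqlite"):
            frames = pd.read_excel(xls_path, sheet_name=None)  # all sheets
            con = sqlite3.connect(db_path)
            for sheet, df in frames.items():
                df["survey_sheet"] = sheet
                df.to_sql("measurements", con, if_exists="append", index=False)
            # simple per-parameter statistics over the stored history
            stats = pd.read_sql(
                "SELECT parameter, COUNT(*) AS n, AVG(value) AS mean "
                "FROM measurements GROUP BY parameter", con)
            con.close()
            return stats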

  2. PostgreSQL in the database landscape

    CERN Multimedia

    CERN. Geneva; Riggs, Simon

    2013-01-01

    This presentation covers PostgreSQL and its main highlights in two parts: PostgreSQL today, by Harald Armin Massa. This will explore the functionalities and capabilities of PostgreSQL; point out differences to other available databases; and give information about how the PostgreSQL project ensures the quality of this software. PostgreSQL and Extremely Large Databases, by Simon Riggs, presenting an outlook on what is happening with PostgreSQL and Extremely Large Databases. About the speakers: Simon Riggs is founder and CTO of 2ndQuadrant. He is working in the AXLE project. He works as an Architect and Developer of new features for PostgreSQL, setting technical directions for 2ndQuadrant and as a Database Systems Architect for 2ndQuadrant customers. Simon is the author of PostgreSQL 9 Admin Cookbook and a committer to the PostgreSQL project. Harald Armin Massa studied computers and economics; he has been self-employed since 1999, doing software development in Python and ...

  3. Assessment of access to bibliographic databases and telemetry databases in Astronomy: A groundswell for development.

    Science.gov (United States)

    Diaz-Merced, Wanda Liz; Casado, Johanna; Garcia, Beatriz; Aarnio, Alicia; Knierman, Karen; Monkiewicz, Jacqueline

    2018-01-01

    Big Data" is a subject that has taken special relevance today, particularly in Astrophysics, where continuous advances in technology are leading to ever larger data sets. A multimodal approach in perception of astronomical data data (achieved through sonification used for the processing of data) increases the detection of signals in very low signal-to-noise ratio limits and is of special importance to achieve greater inclusion in the field of Astronomy. In the last ten years, different software tools have been developed that perform the sonification of astronomical data from tables or databases, among them the best known and in multiplatform development are Sonification Sandbox, MathTrack, and xSonify.In order to determine the accessibility of software we propose to start carrying out a conformity analysis of ISO (International Standard Organization) 9241-171171: 2008. This standard establishes the general guidelines that must be taken into account for accessibility in software design, and it is applied to software used in work, public places, and at home. To analyze the accessibility of web databases, we take into account the "Web Content Content Accessibility Guidelines (WCAG) 2.0", accepted and published by ISO in the ISO / IEC 40500: 2012 standard.In this poster, we present a User Centered Design (UCD), Human Computer Interaction (HCI), and User Experience (UX) framework to address a non-segregational provision of access to bibliographic databases and telemetry databases in Astronomy. Our framework is based on an ISO evaluation on a selection of data bases such as ADS, Simbad and SDSS. The WCAG 2.0 and ISO 9241-171171: 2008 should not be taken as absolute accessibility standards: these guidelines are very general, are not absolute, and do not address particularities. They are not to be taken as a substitute for UCD, HCI, UX design and evaluation. Based on our results, this research presents the framework for a focus group and qualitative data analysis aimed to

  4. SEER*Stat Software

    Science.gov (United States)

    If you have access to SEER Research Data, use SEER*Stat to analyze SEER and other cancer-related databases. View individual records and produce statistics including incidence, mortality, survival, prevalence, and multiple primary. Tutorials and related analytic software tools are available.

  5. Harmonic calculation software for industrial applications with ASDs

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Asiminoaei, Lucian; Hansen, Steffan

    2007-01-01

    This article describes the evaluation of new harmonic calculation software. By using a combination of a prestored database and new interpolation techniques, the software can provide harmonic data for real applications at very high speed. The harmonic results obtained with this software have a
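
    The combination of a prestored database and interpolation can be sketched as a simple table lookup; the grid and harmonic values below are invented for illustration and are not from the evaluated software.

        # Sketch of the "prestored database + interpolation" idea: harmonic
        # levels precomputed on a grid of load points are interpolated for
        # the actual operating point. Numbers are invented.
        import numpy as np

        load_grid = np.array([0.25, 0.50, 0.75, 1.00])   # p.u. drive load
        i5_grid   = np.array([42.0, 38.5, 35.0, 32.0])   # 5th harmonic, % of I1

        def harmonic_at(load, grid=load_grid, values=i5_grid):
            # fast table lookup instead of a full circuit simulation
            return float(np.interp(load, grid, values))

        print(harmonic_at(0.6))   # interpolated 5th-harmonic level, ~37.1 %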

  6. Software database creation for investment property measurement according to international standards

    Science.gov (United States)

    Ponomareva, S. V.; Merzliakova, N. A.

    2018-05-01

    The article deals with investment property measurement and accounting problems at the international, national and enterprise levels. The need to create the software for investment property measurement according to International Accounting Standards was substantiated. The necessary software functions and the processes were described.

  7. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    Science.gov (United States)

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…

  8. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized, with one version for IBM and one for Mac. Each database is accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files, one for each of the five types of tested configurations. The contents of each provide significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  9. Updated database plus software for line-mixing in CO2 infrared spectra and their test using laboratory spectra in the 1.5-2.3 μm region

    International Nuclear Information System (INIS)

    Lamouroux, J.; Tran, H.; Laraia, A.L.; Gamache, R.R.; Rothman, L.S.; Gordon, I.E.; Hartmann, J.-M.

    2010-01-01

    In a previous series of papers, a model for the calculation of CO2-air absorption coefficients taking line-mixing into account and the corresponding database/software package were described and widely tested. In this study, we present an update of this package, based on the 2008 version of HITRAN, the latest currently available. The spectroscopic data for the seven most-abundant isotopologues are taken from HITRAN. When the HITRAN data are not complete up to J''=70, the data files are augmented with spectroscopic parameters from the CDSD-296 database and the high-temperature CDSD-1000 if necessary. Previously missing spectroscopic parameters, the air-induced pressure shifts and CO2 line broadening coefficients with H2O, have been added. The quality of this new database is demonstrated by comparisons of calculated absorptions and measurements using CO2 high-pressure laboratory spectra in the 1.5-2.3 μm region. The influence of the imperfections and inaccuracies of the spectroscopic parameters from the 2000 version of HITRAN is clearly shown, as a big improvement of the residuals is observed by using the new database. The very good agreement between calculated and measured absorption coefficients confirms the necessity of the update presented here and further demonstrates the importance of line-mixing effects, especially for the high pressures investigated here. The application of the updated database/software package to atmospheric spectra should result in an increased accuracy in the retrieval of CO2 atmospheric amounts. This opens improved perspectives for the space-borne detection of carbon dioxide sources and sinks.

  10. Using Biblio-Link...For Those Other Databases.

    Science.gov (United States)

    Joy, Albert

    1989-01-01

    Sidebar describes the use of the Biblio-Link software packages to download citations from online databases and convert them into a form that can be automatically uploaded into a Pro-Cite database. An example of this procedure using DIALOG2 is given. (CLB)

  11. Nuclear power plant control room crew task analysis database: SEEK system. Users manual

    International Nuclear Information System (INIS)

    Burgy, D.; Schroeder, L.

    1984-05-01

    The Crew Task Analysis SEEK Users Manual was prepared for the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission. It is designed for use with the existing computerized Control Room Crew Task Analysis Database. The SEEK system consists of a PR1ME computer with its associated peripherals and software augmented by General Physics Corporation SEEK database management software. The SEEK software programs provide the Crew Task Database user with rapid access to any number of records desired. The software uses English-like sentences to allow the user to construct logical sorts and outputs of the task data. Given the multiple-associative nature of the database, users can directly access the data at the plant, operating sequence, task or element level - or any combination of these levels. A complete description of the crew task data contained in the database is presented in NUREG/CR-3371, Task Analysis of Nuclear Power Plant Control Room Crews (Volumes 1 and 2)

  12. Computer Application Of Object Oriented Database Management ...

    African Journals Online (AJOL)

    Object Oriented Systems (OOS) have been widely adopted in software engineering because of their superiority with respect to data extensibility. The present trend in the software engineering process (SEP) towards concurrent computing raises novel concerns for the facilities and technology available in database ...

  13. The CMS ECAL database services for detector control and monitoring

    International Nuclear Information System (INIS)

    Arcidiacono, Roberta; Marone, Matteo; Badgett, William

    2010-01-01

    In this paper we give a description of the database services for the control and monitoring of the electromagnetic calorimeter of the CMS experiment at the LHC. After a general description of the software infrastructure, we present the organization of the tables in the database, which has been designed to simplify the development of software interfaces. This feature is achieved by including in the database a description of each relevant table. We also give some estimates of the final size and performance of the system.

  14. ASEAN Mineral Database and Information System (AMDIS)

    Science.gov (United States)

    Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.

    2014-12-01

    AMDIS has been officially launched since the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of the local databases and the centralized GIS. The local databases created and updated using the centralized GIS are accessible from the portal site. The system introduces distinct advantages over traditional GIS: a global reach, a large number of users, better cross-platform capability, no charge for users, no charge for the provider, ease of use, and unified updates. By raising the transparency of mineral information to mining companies and to the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, there are many data gaps. We understand that such problems occur because of insufficient governance of mineral resources. The mineral governance we refer to is a concept that enforces and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of information infrastructure facilities, b) technological and legal capacities of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management, such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.

  15. An Oracle(c) database for the AMS experiment

    International Nuclear Information System (INIS)

    Boschini, M.; Gervasi, M.; Grandi, D.; Rancoita, P.G.; Trombetta, L.; Usoskin, I.G.

    1999-01-01

    We present hardware and software technologies implemented for the AMS Milano Data Center. The goal of the AMS Milano Data Center is to provide users with the data collected during the STS-91 Space Shuttle flight, and to provide a User Interface to manage the data properly. Data are stored in a database that provides high-level query and retrieval features, the storage medium being a magneto-optical jukebox. We describe the use of proprietary software (Oracle(c)) as well as custom-written software to enhance access performance. In particular we underscore the use of the Oracle Call Interfaces as a powerful tool to interface the database and the operating system in a natural way.

  16. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    National Research Council Canada - National Science Library

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  17. Developing an Inhouse Database from Online Sources.

    Science.gov (United States)

    Smith-Cohen, Deborah

    1993-01-01

    Describes the development of an in-house bibliographic database by the U.S. Army Corps of Engineers Cold Regions Research and Engineering Laboratory on arctic wetlands research. Topics discussed include planning; identifying relevant search terms and commercial online databases; downloading citations; criteria for software selection; management…

  18. A database for TMT interface control documents

    Science.gov (United States)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
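
    A miniature of such a JSON-style component description and its ingest into a database might look as follows; the field names and the TCS/MountAssembly example are hypothetical, not the actual TMT Interface Database System schema.

        # Miniature of a JSON-style component interface description and its
        # ingest into a database (hypothetical fields, not the TMT schema).
        import json
        import sqlite3

        COMPONENT_DESC = json.loads("""
        {
          "subsystem": "TCS",
          "component": "MountAssembly",
          "receives":  [{"command": "setPosition", "args": ["az", "el"]}],
          "publishes": [{"event": "positionUpdate", "rate_hz": 20}]
        }
        """)

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE interfaces (subsystem, component, kind, name)")
        for cmd in COMPONENT_DESC["receives"]:
            con.execute("INSERT INTO interfaces VALUES (?,?,?,?)",
                        (COMPONENT_DESC["subsystem"],
                         COMPONENT_DESC["component"], "command", cmd["command"]))
        for ev in COMPONENT_DESC["publishes"]:
            con.execute("INSERT INTO interfaces VALUES (?,?,?,?)",
                        (COMPONENT_DESC["subsystem"],
                         COMPONENT_DESC["component"], "event", ev["event"]))
        print(con.execute("SELECT * FROM interfaces").fetchall())

    An interface control document for two subsystems could then be generated by intersecting rows published by one subsystem with rows subscribed to by the other.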

  19. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  20. Software for pipeline integrity administration

    Energy Technology Data Exchange (ETDEWEB)

    Soula, Gerardo; Perona, Lucas Fernandez [Gie SA., Buenos Aires (Argentina); Martinich, Carlos [Refinaria do Norte S. A. (REFINOR), Tartagal, Provincia de Salta (Argentina)

    2009-07-01

    Software for 'pipeline integrity management' was developed. It allows Geographical Information and a PODS database (Pipeline Open Database Standard) to be handled simultaneously, in a simple and reliable way. The premises for the design were the following: didactic, geo-referenced, multiple reference systems. Program capabilities: 1. PODS+GIS: the PODS database on which the software is based is completely integrated with the GIS module. 2. Management of different kinds of information: it allows the management of information on facilities, repairs, interventions, physical inspections, geographical characteristics, compliance with regulations, training, offline events, operation measures, O and M information treatment, and the massive import of specific data and studies. It also assures the integrity of the loaded information. 3. Right-of-way survey: it allows verification of class location and ROW occupation, identification of sensitive areas, and management of landowners. 4. Risk analysis: it is done in a qualitative way, depending on the entered data, allowing the user to identify the riskiest stretches of the system. Results from the risk analysis, as well as data and queries made against the database, can be exported to standard formats. (author)

  1. Architecture of the software for LAMOST fiber positioning subsystem

    Science.gov (United States)

    Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin

    2004-09-01

    The architecture of the software which controls the LAMOST fiber positioning sub-system is described. The software is composed of two parts: a main control program on a computer, and a unit controller program in the ROM of an MCS51 single-chip microcomputer. The functions of the software include: Client/Server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which different parts of the software communicate. Software techniques for multithreading, socket programming, Microsoft Windows message response, and serial communications are also discussed.

  2. From document to database: modernizing requirements management

    International Nuclear Information System (INIS)

    Giajnorio, J.; Hamilton, S.

    2007-01-01

    The creation, communication, and management of design requirements are central to the successful completion of any large engineering project, both technically and commercially. Design requirements in the Canadian nuclear industry are typically numbered lists in multiple documents created using word processing software. As an alternative, GE Nuclear Products implemented a central requirements management database for a major project at Bruce Power. The database configured the off-the-shelf software product, Telelogic Doors, to GE's requirements structure. This paper describes the advantages realized by this scheme. Examples include traceability from customer requirements through to test procedures, concurrent engineering, and automated change history. (author)

  3. National Software Reference Library (NSRL)

    Science.gov (United States)

    National Software Reference Library (NSRL) (PC database for purchase)   A collaboration of the National Institute of Standards and Technology (NIST), the National Institute of Justice (NIJ), the Federal Bureau of Investigation (FBI), the Defense Computer Forensics Laboratory (DCFL), the U.S. Customs Service, software vendors, and state and local law enforcement organizations, the NSRL is a tool to assist in fighting crime involving computers.

  4. Bicriterial Optimization of Software

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Full Text Available Two optimum criteria are defined for software analysis. For each criterion, solutions are defined in order to reach a minimum level. The effects of pursuing one objective over the other are analyzed. An aggregate function is developed, for which the composed level of the two criteria is determined. Based on this value, the optimum solution is selected.
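
    A minimal sketch of such a bicriterial aggregate, with invented weights and criterion levels, shows how the composed level selects the optimum solution:

        # Minimal sketch of a bicriterial aggregate: two normalized criterion
        # levels (e.g. memory use and running time, lower is better) combined
        # into one score to rank candidate solutions. Values are invented.
        def aggregate(c1, c2, w1=0.6, w2=0.4):
            return w1 * c1 + w2 * c2

        candidates = {"A": (0.30, 0.80), "B": (0.50, 0.40), "C": (0.45, 0.55)}
        best = min(candidates, key=lambda k: aggregate(*candidates[k]))
        print(best)   # 'B' under these weights; a different weighting can
                      # flip the choice, which is the trade-off the paper
                      # analyzes when pursuing one objective over the other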

  5. International Inventory of Software Packages in the Information Field.

    Science.gov (United States)

    Keren, Carl, Ed.; Sered, Irina, Ed.

    Designed to provide guidance in selecting appropriate software for library automation, information storage and retrieval, or management of bibliographic databases, this inventory describes 188 computer software packages. The information was obtained through a questionnaire survey of 600 software suppliers and developers who were asked to describe…

  6. Popstjerne af lys, lyd og software

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2016-01-01

    Hatsune Miku is a 3D animated hologram, her voice is a vocaloid. In other words she is a software application. Nevertheless she is a worldstar with stadion concerts and an astronomical number of fans. She is a crowdsourced Internet phenomena: her fans composes her hits and choreographs her...

  7. DataSpread: Unifying Databases and Spreadsheets.

    Science.gov (United States)

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.
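
    The core representational question, how spreadsheet cell addresses map onto relational storage, can be sketched with a simple positional encoding; this is only an illustration, not DataSpread's actual representation.

        # Sketch of mapping spreadsheet addresses onto relational storage:
        # a "positional" table keyed by (row, col) is one simple encoding.
        import re
        import sqlite3

        def a1_to_rc(addr):
            """Convert an A1-style address to 1-based (row, col)."""
            col_s, row_s = re.match(r"([A-Z]+)(\d+)", addr).groups()
            col = 0
            for ch in col_s:
                col = col * 26 + (ord(ch) - ord("A") + 1)
            return int(row_s), col

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE cells (row INT, col INT, value TEXT)")
        con.execute("INSERT INTO cells VALUES (?,?,?)",
                    (*a1_to_rc("B2"), "42"))
        print(con.execute(
            "SELECT value FROM cells WHERE row=2 AND col=2").fetchone())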

  8. Computer system for International Reactor Pressure Vessel Materials Database support

    International Nuclear Information System (INIS)

    Arutyunjan, R.; Kabalevsky, S.; Kiselev, V.; Serov, A.

    1997-01-01

    This report presents a description of the computer tools for support of the International Reactor Pressure Vessel Materials Database developed at the IAEA. Work focused on search, retrieval, analysis, presentation, and export possibilities for raw, qualified, and processed materials data. The developed software has the following main functions: it provides tools for querying and searching any type of data in the database; it provides the capability to update existing information in the database; it provides the capability to present and print selected data; it provides the possibility to export, on a yearly basis, the run-time IRPVMDB with raw, qualified and processed materials data to Database members; and it provides the capability to export any selected sets of raw, qualified, or processed materials data.

  9. Programme RAE: software that automates data acquisition in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Bellido, Luis F.

    1995-07-01

    Software for the automatic acquisition and storage of nuclear spectra was developed. The program is to be used in a system composed of a radiation detector, a Spectrum-ACE or ADCAM, and the Maestro II emulation software. In this paper the operating mode is fully described and several examples are given. (author). 2 refs

  10. Extracting software static defect models using data mining

    Directory of Open Access Journals (Sweden)

    Ahmed H. Yousef

    2015-03-01

    Full Text Available Large software projects are subject to quality risks of having defective modules that will cause failures during software execution. Several software repositories contain the source code of large projects that are composed of many modules. These software repositories include data for the software metrics of these modules and the defective state of each module. In this paper, a data mining approach is used to show the attributes that predict the defective state of software modules. A software solution architecture is proposed to convert the extracted knowledge into data mining models that can be integrated with the current software project metrics and bug data in order to enhance the prediction. The results show better prediction capabilities when all the algorithms are combined using weighted votes. When only one individual algorithm is used, the Naïve Bayes algorithm has the best results, followed by the Neural Network and the Decision Tree algorithms.
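
    The weighted-vote combination reported above can be reproduced in outline with scikit-learn; the data here is synthetic, whereas the paper uses module metrics (e.g. size and complexity) with defect labels from software repositories.

        # Sketch of the weighted-vote combination: Naive Bayes, a neural
        # network, and a decision tree voting on module defectiveness.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier

        # synthetic stand-in for module metrics + defect labels
        X, y = make_classification(n_samples=400, n_features=6, random_state=0)

        vote = VotingClassifier(
            estimators=[("nb", GaussianNB()),
                        ("nn", MLPClassifier(max_iter=2000, random_state=0)),
                        ("dt", DecisionTreeClassifier(random_state=0))],
            voting="soft", weights=[3, 2, 1])  # heavier weight on the best learner
        vote.fit(X[:300], y[:300])
        print("held-out accuracy:", vote.score(X[300:], y[300:]))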

  11. HAZARD ANALYSIS SOFTWARE

    International Nuclear Information System (INIS)

    Sommer, S; Tinh Tran, T.

    2008-01-01

    Washington Safety Management Solutions, LLC developed web-based software to improve the efficiency and consistency of hazard identification and analysis, control selection and classification, and to standardize analysis reporting at Savannah River Site. In the new nuclear age, information technology provides methods to improve the efficiency of the documented safety analysis development process which includes hazard analysis activities. This software provides a web interface that interacts with a relational database to support analysis, record data, and to ensure reporting consistency. A team of subject matter experts participated in a series of meetings to review the associated processes and procedures for requirements and standard practices. Through these meetings, a set of software requirements were developed and compiled into a requirements traceability matrix from which software could be developed. The software was tested to ensure compliance with the requirements. Training was provided to the hazard analysis leads. Hazard analysis teams using the software have verified its operability. The software has been classified as NQA-1, Level D, as it supports the analysis team but does not perform the analysis. The software can be transported to other sites with alternate risk schemes. The software is being used to support the development of 14 hazard analyses. User responses have been positive with a number of suggestions for improvement which are being incorporated as time permits. The software has enforced a uniform implementation of the site procedures. The software has significantly improved the efficiency and standardization of the hazard analysis process

  12. Property Modelling and Databases in Product-Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Sansonetti, Sascha

    of the PC-SAFT is used. The developed database and property prediction models have been combined into a properties software package that supports different product-process design applications. The presentation will also briefly highlight applications of the software for virtual product-process design...

  13. Software for simulation of nuclear installations

    International Nuclear Information System (INIS)

    Castaneda, J.O.; Ramos, L.M.; Arjona, O.; Rodriguez, L.

    1993-01-01

    The software is an instrument to build conceptual-type simulators of low, medium and full scale for use in nuclear installations. The system is composed of two basic modules: one for editing and the other for simulation. The first allows the preparation of the information to be simulated: the mathematical model, the technological design (fundamentally, the operation board or mnemonic diagram), the parameters to be displayed, and the failures to be simulated.

  14. A59 Drum Activity database (DRUMAC): system documentation

    International Nuclear Information System (INIS)

    Keel, Alan.

    1993-01-01

    This paper sets out the requirements, database design, software module designs and test plans for DRUMAC (the Active handling Building Drum Activity Database) - a computer-based system to record the radiological inventory for LLW/ILW drums dispatched from the Active Handling Building. (author)

  15. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of from 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of operating parameters required to produce an ion beam. However, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters
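
    The range-search capability described, matching beams on ranges of any subset of the categorizing attributes, can be sketched as follows; the three attributes shown are illustrative stand-ins for the eleven in the ORIC database.

        # Sketch of range-searching a beam database on any subset of
        # categorizing attributes (illustrative attributes, not ORIC's eleven).
        BEAMS = [
            {"ion": "O-16",  "energy_mev": 100.0, "charge": 5},
            {"ion": "O-16",  "energy_mev": 140.0, "charge": 6},
            {"ion": "Ni-58", "energy_mev": 350.0, "charge": 11},
        ]

        def search(beams, **ranges):
            """ranges maps attribute -> (low, high); omitted attributes
            are unconstrained, so any subset can be searched."""
            return [b for b in beams
                    if all(lo <= b[k] <= hi
                           for k, (lo, hi) in ranges.items())]

        print(search(BEAMS, energy_mev=(120, 400)))   # two matching beam files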

  16. On a Fuzzy Algebra for Querying Graph Databases

    OpenAIRE

    Pivert , Olivier; Thion , Virginie; Jaudoin , Hélène; Smits , Grégory

    2014-01-01

    This paper proposes a notion of fuzzy graph database and describes a fuzzy query algebra that makes it possible to handle such databases, which may be fuzzy or not, in a flexible way. The algebra, based on fuzzy set theory and the concept of a fuzzy graph, is composed of a set of operators that can be used to express preference queries on fuzzy graph databases. The preferences concern i) the content of the vertices of the graph and ii) the structure of the graph. In a s...

  17. Diffusivity database (DDB) for major rocks. Database for the second progress report

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Haruo

    1999-10-01

    A diffusivity database for the setting of effective diffusion coefficients in rock matrices in the second progress report was developed. In this database, 3 kinds of diffusion coefficients were treated: the effective diffusion coefficient (De), the apparent diffusion coefficient (Da) and the free water diffusion coefficient (Do). The database, based on literature published between 1980 and 1998, was developed considering the following points. (1) Since the Japanese geological environment is the focus of the second progress report, data for diffusion were collected with a focus on Japanese major rocks. (2) Although 22 elements are considered to be important in performance assessment for geological disposal, all elements and aquatic tracers are treated in this database development for general purposes. (3) Since limestone, which belongs to sedimentary rock, can become a natural resource and is inappropriate as a host rock, it is omitted in this database development. Rock was categorized into 4 kinds: acid crystalline rock, alkaline crystalline rock, sedimentary rock (argillaceous/tuffaceous rock) and sedimentary rock (psammitic rock/sandy stone), from the viewpoint of geology and mass transport. In addition, rocks around neutrality among crystalline rock were categorized as alkaline crystalline rock in this database. The database is composed of sub-databases for the 4 kinds of rocks. Furthermore, the sub-databases for the 4 kinds of rocks are composed of databases for individual elements, into which a total of 24 items are entered, such as species, rock name, diffusion coefficients (De, Da, Do), measurement conditions (method, porewater, pH, Eh, temperature, atmosphere, etc.), etc. As a result of the literature survey, for De values for acid crystalline rock, a total of 207 data for 18 elements and one tracer (hydrocarbon) have been reported, and all data were for granitic rocks such as granite, granodiorite and biotitic granite. For alkaline crystalline rock, a total of 32

  18. TWRS technical baseline database manager definition document

    International Nuclear Information System (INIS)

    Acree, C.D.

    1997-01-01

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager

  19. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.

  20. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    A common approach is to run the simulation, export the results to a data carrier file, and then process the stored results using data processing software. In this work, we propose to save the simulation results directly from a simulation tool to a computer database. We implemented a link between the discrete-event simulation tool and the database and performed a performance evaluation of 3 different open-source database systems. We show that, with the right choice of database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space, when compared to using simulation software built...
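
    A minimal sketch of the proposed approach, writing results straight into a relational database from the simulation loop, with SQLite standing in for the (unnamed here) open-source database systems that were evaluated:

        # Sketch of logging simulation results directly into a relational
        # database instead of a data carrier file.
        import sqlite3

        con = sqlite3.connect("simresults.sqlite")
        con.execute("""CREATE TABLE IF NOT EXISTS results
                       (run_id INT, sim_time REAL, metric TEXT, value REAL)""")

        def record(run_id, sim_time, metric, value):
            # called from the simulation event loop; no post-run export step
            con.execute("INSERT INTO results VALUES (?,?,?,?)",
                        (run_id, sim_time, metric, value))

        for t in range(5):
            record(run_id=1, sim_time=t * 0.1, metric="queue_len",
                   value=float(t))
        con.commit()
        print(con.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 5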

  1. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieving system in Indus is based on a client/server model. A general-purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On-line and off-line applications distributed across several systems can store and retrieve the data from the database over the network. This paper describes the structure of the databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data schema and implementation issues are discussed. (author)

  2. European Vegetation Archive (EVA): an integrated database of European vegetation plots

    DEFF Research Database (Denmark)

    Chytrý, M; Hennekens, S M; Jiménez-Alfaro, B

    2015-01-01

    The European Vegetation Archive (EVA) is a centralized database of European vegetation plots developed by the IAVS Working Group European Vegetation Survey. It has been in development since 2012 and was first made available for use in research projects in 2014. It stores copies of national and regional vegetation-plot databases on a single software platform. Data storage in EVA does not affect the on-going independent development of the contributing databases, which remain the property of the data contributors. EVA uses a prototype of the database management software TURBOVEG 3 developed for joint management. EVA provides a data source for large-scale analyses of European vegetation diversity, both for fundamental research and nature conservation applications. Updated information on EVA is available online at http://euroveg.org/eva-database.

  3. Software test plan/description/report (STP/STD/STR) for the enhanced logistics intratheater support tool (ELIST) global data segment. Version 8.1.0.0, Database Instance Segment Version 8.1.0.0, ...[elided] and Reference Data Segment Version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.; Absil-Mills, M.; Jacobs, K.

    2002-01-01

    This document is the Software Test Plan/Description/Report (STP/STD/STR) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It combines in one document the information normally presented separately in a Software Test Plan, a Software Test Description, and a Software Test Report; it also presents this information in one place for all the segments of the ELIST mission application. The primary purpose of this document is to show that ELIST has been tested by the developer and found, by that testing, to install, deinstall, and work properly. The information presented here is detailed enough to allow the reader to repeat the testing independently. The remainder of this document is organized as follows. Section 1.1 identifies the ELIST mission application. Section 2 is the list of all documents referenced in this document. Section 3, the Software Test Plan, outlines the testing methodology and scope, the latter by way of a concise summary of the tests performed. Section 4 presents detailed descriptions of the tests, along with the expected and observed results; that section therefore combines the information normally found in a Software Test Description and a Software Test Report. The remaining small sections present supplementary information. Throughout this document, the phrase ELIST IP refers to the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment

  4. Adaptation of Black-Box Software Components

    Directory of Open Access Journals (Sweden)

    Rolf Andreas Rasenack

    2008-01-01

    Full Text Available The globalization of the software market creates serious problems for software companies. Competition between them increases, forcing companies to develop ever newer software products in ever shorter time intervals. The time to market for software systems therefore shrinks, and with it, obviously, the product life cycle, so software companies shorten the interval for research and development. Because of this competition, software products must also be developed at low cost, which leads to a smaller return on investment. A big challenge for software companies is an effective research and development process that keeps these problems under control. One way to control them is to reuse existing software components, adapting them to new functionality or accommodating mismatched interfaces. Complete redevelopment of software products is more expensive and time consuming than development from software components. The approach introduced here presents a novel technique together with a supportive environment that enables developers to cope with the adaptability of black-box software components. A supportive environment is designed that checks the compatibility of black-box software components with the assistance of their specifications. Generated adapter software components can take over the adaptation and extend the functionality, as sketched below. Besides, a pool of software components can be used to compose an application that satisfies customer needs; this pool consists of black-box software components and adapter software components which can be connected on demand.
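    The generated adapters the abstract describes follow a classic interface-adaptation pattern. A minimal sketch of that idea, with all component and method names invented for illustration (the paper derives such adapters from component specifications):

```python
class LegacyTemperatureSensor:
    """Black-box component exposing Fahrenheit readings."""
    def read_fahrenheit(self) -> float:
        return 98.6

class CelsiusDisplay:
    """Black-box component expecting a Celsius-returning provider."""
    def show(self, provider) -> None:
        print(f"{provider.read_celsius():.1f} C")

class SensorAdapter:
    """Adapter component bridging the mismatched interfaces."""
    def __init__(self, sensor: LegacyTemperatureSensor):
        self._sensor = sensor

    def read_celsius(self) -> float:
        return (self._sensor.read_fahrenheit() - 32) * 5 / 9

CelsiusDisplay().show(SensorAdapter(LegacyTemperatureSensor()))  # prints 37.0 C
```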

  5. Composability in quantum cryptography

    International Nuclear Information System (INIS)

    Mueller-Quade, Joern; Renner, Renato

    2009-01-01

    If we combine two secure cryptographic systems, is the resulting system still secure? Answering this question is highly nontrivial and has recently sparked a considerable research effort, in particular in the area of classical cryptography. A central insight was that the answer to the question is yes, but only within a well-specified composability framework and for carefully chosen security definitions. In this article, we review several aspects of composability in the context of quantum cryptography. The first part is devoted to key distribution. We discuss the security criteria that a quantum key distribution (QKD) protocol must fulfill to allow its safe use within a larger security application (e.g. for secure message transmission), and we demonstrate, by an explicit example, what can go wrong if conventional (non-composable) security definitions are used. Finally, to illustrate the practical use of composability, we show how to generate a continuous key stream by sequentially composing rounds of a QKD protocol. In the second part, we take a more general point of view, which is necessary for the study of cryptographic situations involving, for example, mutually distrustful parties. We explain the universal composability (UC) framework and state the composition theorem that guarantees that secure protocols can securely be composed to larger applications. We focus on the secure composition of quantum protocols into unconditionally secure classical protocols. However, the resulting security definition is so strict that some tasks become impossible without additional security assumptions. Quantum bit commitment is impossible in the UC framework even with mere computational security. Similar problems arise in the quantum bounded storage model, and we observe a trade-off between universal composability and the use of the weakest possible security assumptions.

  6. A Study on Graph Storage Database of NOSQL

    OpenAIRE

    Smita Agrawal; Atul Patel

    2016-01-01

    Big Data refers to huge volumes of both structured and unstructured data that are so large they are hard to process using current/traditional database tools and software technologies. The goal of Big Data storage management is to ensure a high level of data quality and availability for business-intelligence and big-data-analytics applications. The graph database is not yet the most popular NoSQL database compared with relational databases, but it is a most powerful NoSQL database which can handle...
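    As a toy illustration of why graph stores suit highly connected data (this is the general idea, not the paper's code): relationships are first-class, so a traversal replaces the chain of joins a relational schema would need.

```python
from collections import defaultdict, deque

# Tiny social graph stored as adjacency lists.
edges = [("alice", "follows", "bob"), ("bob", "follows", "carol"),
         ("carol", "follows", "dave")]
adjacency = defaultdict(list)
for src, _rel, dst in edges:
    adjacency[src].append(dst)

def reachable(start):
    """Breadth-first traversal: transitive 'follows' without join tables."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adjacency[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable("alice"))  # {'bob', 'carol', 'dave'}
```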

  7. Selection of nuclear power information database management system

    International Nuclear Information System (INIS)

    Zhang Shuxin; Wu Jianlei

    1996-01-01

    Given the state of present database technology, an important task in building the Chinese nuclear power information database (NPIDB) for the nuclear industry efficiently and from a high starting point is to select a proper database management system (DBMS), which is the hinge on which successful construction of the database turns. This article therefore explains how to build a practical nuclear power information database, the functions of different database management systems, the reasons for selecting a relational database management system (RDBMS), the principles of selecting an RDBMS, the recommendation of the ORACLE management system as the software with which to build the database, and so on

  8. Beyond Bundles - Reproducible Software Environments with GNU Guix

    CERN Multimedia

    CERN. Geneva; Wurmus, Ricardo

    2018-01-01

    Building reproducible data analysis pipelines and numerical experiments is a key challenge for reproducible science, in which tools to reproduce software environments play a critical role. The advent of “container-based” deployment tools such as Docker and Singularity has made it easier to replicate software environments. These tools are very much about bundling the bits of software binaries in a convenient way, not so much about describing how software is composed. Science is not just about replicating, though—it demands the ability to inspect and to experiment. In this talk we will present GNU Guix, a software management toolkit. Guix departs from container-based solutions in that it enables declarative composition of software environments. It is comparable to “package managers” like apt or yum, but with a significant difference: Guix provides accurate provenance tracking of build artifacts, and bit-reproducible software. We will illustrate the many ways in which Guix can improve how software en...

  9. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs): fixed-length identifiers used to identify information items. (2) CUI structure: composed of nine semantic fields that aid the user in recognizing its purpose.

  10. An interactive end-user software application for a deep-sea photographic database

    Digital Repository Service at National Institute of Oceanography (India)

    Jaisankar, S.; Sharma, R.

    The software is the first of its kind in deep-sea applications, and it also attempts to educate the user about deep-sea photography. The application software is developed by modifying established routines and by creating new routines to save the retrieved...

  11. Nuclear database management systems

    International Nuclear Information System (INIS)

    Stone, C.; Sutton, R.

    1996-01-01

    The authors are developing software tools for accessing and visualizing nuclear data. MacNuclide was the first software application produced by their group. This application incorporates novel database management and visualization tools into an intuitive interface. The nuclide chart is used to access properties and to display results of searches. Selecting a nuclide in the chart displays a level scheme with tables of basic, radioactive decay, and other properties. All level schemes are interactive, allowing the user to modify the display, move between nuclides, and display entire daughter decay chains

  12. Survey and analyses of computer software usage in Calabar ...

    African Journals Online (AJOL)

    This work sets out to find the most used software and the types of jobs most often done. A descriptive analysis using simple percentages revealed that word processing software is the most used, followed by graphics, database and accounting software in decreasing order. A comparative examination of the use of the ...

  13. What to Ask Women Composers: Feminist Fieldwork in Electronic Dance Music

    Directory of Open Access Journals (Sweden)

    Magdalena Olszanowski

    2012-11-01

    Full Text Available This article reflects upon the research methods employed for microfemininewarfare (2012), an interactive database documentary that investigates female electronic dance music (EDM) artists. The purpose of the documentary is to feature the contributions of women as composers, to show how they came to be composers and to reveal the tactics used to approach significant issues of gender in the male-dominated world of EDM. I highlight the theoretical and methodological processes that went into the making of this documentary, subtitled “exploring women’s space in electronic music”. By constructing “electronic music by women” as a category, two objectives are addressed: first, the visibility of women’s contribution to the musical tradition is heightened; and, second, it allows an exploration of the broadening of discourses about female subjectivity. This article showcases feminist research-creation and friendship-as-method as effective research methods to glean meaningful content when applied to EDM fieldwork.

  14. Essential Features for a Scholarly Journal Content Management and Peer Review Software

    OpenAIRE

    Fatima Sheikh Shoaie; Mehdi Husseini

    2010-01-01

    The present study investigates the software used by scientific journals for content management and peer review, in order to identify the essential features. These software packages are analyzed and presented in tabular format. A questionnaire was prepared and submitted to a panel composed of 15 referees, editors-in-chief, software designers and researchers. The essential features for software managing the review process were divided into three groups with populations of 10-15, 5-10 and 0-5 respect...

  15. Maintenance simulation: Software issues

    Energy Technology Data Exchange (ETDEWEB)

    Luk, C.H.; Jette, M.A.

    1995-07-01

    The maintenance of a distributed software system in a production environment involves: (1) maintaining software integrity, (2) maintaining database integrity, (3) adding new features, and (4) adding new systems. These issues are discussed in general: what they are and how they are handled. This paper presents our experience with a distributed resource management system that accounts, in real time, for resources consumed on a network of heterogeneous computers. The simulated environments used to maintain this system are presented as they relate to the four maintenance areas.

  16. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
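    To convey the intuition behind edit-based coding (the toy below is not coil's actual edit-tree algorithm): a sequence similar to a reference can be stored as a short edit script rather than as raw text.

```python
from difflib import SequenceMatcher

reference = "ACGTACGTACGTTTGACCA"       # made-up EST-like strings
sequence  = "ACGTACGAACGTTTGACCAGG"

# Keep only the differing opcodes: (tag, ref_start, ref_end, new_text).
ops = SequenceMatcher(a=reference, b=sequence).get_opcodes()
edits = [(tag, i1, i2, sequence[j1:j2])
         for tag, i1, i2, j1, j2 in ops if tag != "equal"]
print(edits)  # only the differences need storing

def apply_edits(ref, edits):
    """Reconstruct the sequence from the reference plus its edit script."""
    out, pos = [], 0
    for _tag, i1, i2, repl in edits:
        out.append(ref[pos:i1])  # unchanged reference span
        out.append(repl)         # inserted/replacement text
        pos = i2                 # skip deleted/replaced reference span
    out.append(ref[pos:])
    return "".join(out)

assert apply_edits(reference, edits) == sequence
```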

  17. Teaching Software Componentization: A Bar Chart Java Bean

    Science.gov (United States)

    Mitri, Michel

    2010-01-01

    In the current object-oriented paradigm, software construction increasingly involves creating and utilizing "software components". These components can serve a variety of functions, from common algorithmic processes to database connectivity to graphical interfaces. The advantage of component architectures is that programmers can use pre-existing…

  18. Harmonic Calculation Software for Industrial Applications with Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Asiminoaei, Lucian; Hansen, S.; Blaabjerg, Frede

    2005-01-01

    This paper describes the evaluation of a new harmonic calculation software package. By using a combination of a pre-stored database and new interpolation techniques, the software can provide harmonic data on real applications very quickly. The harmonic results obtained with this software have acceptable precision even...
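    The combination of a pre-stored database with interpolation presumably amounts to table lookup between pre-computed operating points. A guessed miniature of that idea (all numbers and names below are invented):

```python
import numpy as np

# Pre-stored database: 5th-harmonic current pre-computed at a few drive loads.
load_points_kW = np.array([10.0, 50.0, 100.0, 250.0])
I5_percent     = np.array([38.0, 34.5, 30.2, 24.8])   # % of fundamental

def harmonic_at(load_kW: float) -> float:
    """Interpolate into the pre-stored table -- fast, no circuit simulation."""
    return float(np.interp(load_kW, load_points_kW, I5_percent))

print(harmonic_at(75.0))  # between the 50 kW and 100 kW database entries
```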

  19. The Danish Fetal Medicine Database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte K; Petersen, Olav B; Jørgensen, Finn S

    2015-01-01

    OBJECTIVE: To describe the establishment and organization of the Danish Fetal Medicine Database and to report national results of first-trimester combined screening for trisomy 21 in the 5-year period 2008-2012. DESIGN: National register study using prospectively collected first-trimester screening...... data from the Danish Fetal Medicine Database. POPULATION: Pregnant women in Denmark undergoing first-trimester screening for trisomy 21. METHODS: Data on maternal characteristics, biochemical and ultrasonic markers are continuously sent electronically from local fetal medicine databases (Astraia Gmbh...... software) to a central national database. Data are linked to outcome data from the National Birth Register, the National Patient Register and the National Cytogenetic Register via the mother's unique personal registration number. First-trimester screening data from 2008 to 2012 were retrieved. MAIN OUTCOME...
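    The register linkage described, screening records joined to outcome registers via the mother's unique CPR number, is in essence a key-based merge. A sketch with synthetic data (real CPR numbers and the actual database layout are of course not reproduced here):

```python
import pandas as pd

screening = pd.DataFrame({"cpr": ["010180-0001", "020281-0002"],
                          "nt_mm": [1.8, 3.4],           # nuchal translucency
                          "risk_t21": [1 / 9000, 1 / 150]})
birth_reg = pd.DataFrame({"cpr": ["010180-0001", "020281-0002"],
                          "outcome": ["liveborn, unaffected", "liveborn, T21"]})

# Link first-trimester screening to outcome via the personal identifier.
linked = screening.merge(birth_reg, on="cpr", how="left")
print(linked)
```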

  20. Application Program Interface for the Orion Aerodynamics Database

    Science.gov (United States)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide the developers of software an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have only included data tables and a document of the algorithm and equations to combine them for the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities for built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to and verification of the table lookup routines. This may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. The
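    Since the API is ANSI C, a host tool written in Python could bind it with ctypes; the sketch below is purely illustrative, as the abstract does not give the actual symbol names or signatures of the CEV aero API.

```python
import ctypes

aero = ctypes.CDLL("./libcevaero.so")        # hypothetical shared build of the API

# Hypothetical entry points: initialize from the aero table files, then
# evaluate total forces and moments at a flight condition.
aero.aero_init.argtypes = [ctypes.c_char_p]
aero.aero_init.restype = ctypes.c_int
aero.aero_coeffs.argtypes = [ctypes.c_double, ctypes.c_double,
                             ctypes.POINTER(ctypes.c_double)]
aero.aero_coeffs.restype = ctypes.c_int

if aero.aero_init(b"cev_tables.dat") == 0:
    out = (ctypes.c_double * 6)()            # six force/moment coefficients
    aero.aero_coeffs(2.5, 20.0, out)         # Mach, angle of attack (deg)
    print(list(out))
```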

  1. Users' satisfaction with the use of electronic database in university ...

    African Journals Online (AJOL)

    Users' satisfaction with the use of electronic database in university libraries in north ... file of digitized information (bibliographic records, abstracts, full-text documents, ... managed with the aid of database management system (DBMS) software.

  2. Design and implement of BESIII online histogramming software

    International Nuclear Information System (INIS)

    Li Fei; Wang Liang; Liu Yingjie; Chinese Academy of Sciences, Beijing; Zhu Kejun; Zhao Jingwei

    2007-01-01

    The online histogramming software is an important part of the BESIII DAQ (data acquisition) system. This article introduces the main requirements and design of the online histogramming software and presents how histograms are produced, transmitted and gathered in the distributed environment in the current software implementation. The article also illustrates a smart, simple and easily extended way of setting the system up with an XML configuration database, sketched below. (authors)
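    The abstract gives no details of the XML configuration database, so the element and attribute names below are invented; the sketch only shows the general pattern of declaring histograms in XML and reading them at startup.

```python
import xml.etree.ElementTree as ET

config_xml = """
<histograms>
  <hist name="adc_sum"  bins="100" low="0"   high="4096" source="sub_event"/>
  <hist name="hit_time" bins="200" low="-50" high="50"   source="full_event"/>
</histograms>
"""

# At startup, each producer would book the histograms its node is
# responsible for from this shared configuration.
for h in ET.fromstring(config_xml).findall("hist"):
    print(h.get("name"), int(h.get("bins")),
          float(h.get("low")), float(h.get("high")), h.get("source"))
```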

  3. Software EpiData - Applications for Needs of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Ljakova K.

    2007-12-01

    Full Text Available EpiData (free software for entering and documenting data) is presented. Some aspects of this software are shown for the needs of database systems (DB) and information systems (IS) that can be used in bioprocess systems.

  4. Analyser Framework to Verify Software Components

    Directory of Open Access Journals (Sweden)

    Rolf Andreas Rasenack

    2009-01-01

    Full Text Available Today it is important for software companies to build software systems in a short time interval, to reduce costs and to maintain a good market position. Well organized and systematic development approaches are therefore required. Reusing well-tested software components can be a good way to develop software applications effectively: the reuse of software components is less expensive and less time consuming than development from scratch. But it is dangerous to assume that software components can be matched together without any problems. The components themselves are well tested, of course, but when they are composed, problems occur, most of them based on interaction and communication. To avoid such errors, a framework has to be developed for analysing software components; that framework determines the compatibility of the corresponding components. The promising approach discussed here presents a novel technique for analysing software components by applying an Abstract Syntax Language Tree (ASLT). A supportive environment is designed that checks the compatibility of black-box software components. This article is concerned with the question of how coupled software components can be verified by using an analyser framework, and determines the usage of the ASLT. Black-box software components and the Abstract Syntax Language Tree are the basis for the proposed framework and are discussed here to provide the background knowledge. The practical implementation of this framework is discussed, and results are shown using a test environment.
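    The paper's ASLT is its own formalism; as a loose analogy only, Python's built-in ast module can illustrate checking a caller against a component's declared interface at the syntax-tree level (all names below are invented):

```python
import ast

provider_src = "def get_reading(sensor_id: int, unit: str) -> float: ..."
func = ast.parse(provider_src).body[0]
assert isinstance(func, ast.FunctionDef)

signature = [(a.arg, getattr(a.annotation, "id", None)) for a in func.args.args]
print(func.name, signature)  # get_reading [('sensor_id', 'int'), ('unit', 'str')]

# A client call with the wrong arity is detected without running anything.
call = ast.parse("get_reading(7)").body[0].value
compatible = (isinstance(call, ast.Call)
              and call.func.id == func.name
              and len(call.args) == len(func.args.args))
print("compatible:", compatible)  # False -- arity mismatch found statically
```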

  5. Planform: an application and database of graph-encoded planarian regenerative experiments.

    Science.gov (United States)

    Lobo, Daniel; Malone, Taylor J; Levin, Michael

    2013-04-15

    Understanding the mechanisms governing the regeneration capabilities of many organisms is a fundamental interest in biology and medicine. An ever-increasing number of manipulation and molecular experiments are attempting to discover a comprehensive model for regeneration, with the planarian flatworm being one of the most important model species. Despite much effort, no comprehensive, constructive, mechanistic models exist yet, and it is now clear that computational tools are needed to mine this huge dataset. However, until now, there is no database of regenerative experiments, and the current genotype-phenotype ontologies and databases are based on textual descriptions, which are not understandable by computers. To overcome these difficulties, we present here Planform (Planarian formalization), a manually curated database and software tool for planarian regenerative experiments, based on a mathematical graph formalism. The database contains more than a thousand experiments from the main publications in the planarian literature. The software tool provides the user with a graphical interface to easily interact with and mine the database. The presented system is a valuable resource for the regeneration community and, more importantly, will pave the way for the application of novel artificial intelligence tools to extract knowledge from this dataset. The database and software tool are freely available at http://planform.daniel-lobo.com.

  6. The ALADDIN atomic physics database system

    International Nuclear Information System (INIS)

    Hulse, R.A.

    1990-01-01

    ALADDIN is an atomic physics database system which has been developed in order to provide a broadly based standard medium for the exchange and management of atomic data. ALADDIN consists of a data format definition together with supporting software, both for interactive searches and for access to the data by plasma modeling and other codes. The ALADDIN system is designed to offer maximum flexibility in the choice of data representations and labeling schemes, so as to support a wide range of atomic physics data types and allow natural evolution and modification of the database as needs change. Associated dictionary files are included in the ALADDIN system for data documentation. The importance of supporting the widest possible user community was also central to the ALADDIN design, leading to the use of straightforward text files with concatenated data entries for the file structure, and the adoption of strict FORTRAN 77 code for the supporting software. This allows ready access to the ALADDIN system on the widest range of scientific computers, and easy interfacing with FORTRAN modeling codes, user-developed atomic physics codes and databases, etc. The supporting software consists of the ALADDIN interactive searching and data display code, together with the ALPACK subroutine package, which provides ALADDIN datafile searching and data retrieval capabilities to users' codes
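    The exact entry syntax lives in the ALADDIN format definition, which this abstract does not reproduce; the parser below therefore runs on a guessed stand-in layout and is meant only to illustrate the idea of a flat text file holding concatenated, labeled data entries.

```python
SAMPLE = """\
$entry He0 + e -> He1 + 2e ionization
T_eV  1.0e1   2.0e1   5.0e1
sigma 1.2e-18 3.4e-17 6.8e-17
$end
"""

def parse_entries(text):
    """Split a concatenated text file into labeled entries of numeric arrays."""
    entries, current = [], None
    for line in text.splitlines():
        if line.startswith("$entry"):
            current = {"label": line[len("$entry"):].strip(), "data": {}}
        elif line.startswith("$end"):
            entries.append(current)
            current = None
        elif current is not None and line.strip():
            name, *values = line.split()
            current["data"][name] = [float(v) for v in values]
    return entries

for e in parse_entries(SAMPLE):
    print(e["label"], e["data"])
```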

  7. Database setup insuring radiopharmaceuticals traceability

    International Nuclear Information System (INIS)

    Robert, N.; Salmon, F.; Clermont-Gallerande, H. de; Celerier, C.

    2002-01-01

    Organizing a radiopharmacy and ensuring proper traceability of radiopharmaceutical medicines raises numerous problems, especially for departments that are not assisted by global management network systems. Our work has been to find a solution that enables off-the-shelf software to cover those needs. We have set up a PC database run by the Microsoft software ACCESS 97. Its use consists in saving data related to the reception and disposal of generators, isotopes and kits, as well as the results of quality control, and in transferring data collected from the software connected to the activimeter (elution and preparation registers, prescription book). By relating all the saved data, ACCESS makes it possible to combine all this information in queries. At this stage, it is possible to edit all the regular registers (prescription book, generator and radionuclide follow-up, blood-derived medicine traceability) and to quickly retrieve the patients who have received a particular radiopharmaceutical, or the radiopharmaceutical that has been given to a particular patient. This user-friendly database provides considerable support to nuclear medicine departments that do not have any network management for their radiopharmaceutical activity. (author)

  8. Composing the Curriculum: Teacher Identity

    Science.gov (United States)

    Lewis, Rebecca

    2012-01-01

    What is composing and how is it valued? What does a good education in composing look like; what constraints hinder it and is it possible to overcome such constraints? Can composing be a personal, creative and valuable activity for the school student? What role does the teacher play in all of this? These are questions that I discuss in this…

  9. Software Design Level Security Vulnerabilities

    OpenAIRE

    S. Rehman; K. Mustafa

    2011-01-01

    Several thousand software design vulnerabilities have been reported through established databases, but they need to be structured and classified to be optimally usable in the pursuit of minimal and effective mitigation mechanisms. To this end we developed a criterion set for a communicative description of the same, to serve as a taxonomic description of security vulnerabilities arising in the design phase of the software development lifecycle. This description is a part of an effort to id...

  10. A relational database for physical data from TJ-II discharges

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.B.; Vega, J.

    2002-01-01

    A relational database (RDB) has been developed for classifying TJ-II experimental data according to physical criteria. Two objectives have been achieved: the design and implementation of the database, and software tools for data access that depend on a single software driver. TJ-II data were arranged in several tables with a flexible design, speedy performance, efficient search capacity and adaptability to meet present and future requirements. The software has been developed to allow access to the TJ-II RDB from a variety of computer platforms (ALPHA AXP/True64 UNIX, CRAY/UNICOS, Intel Linux, Sparc/Solaris and Intel/Windows 95/98/NT) and programming languages (FORTRAN and C/C++). The database resides in a Windows NT Server computer and is managed by Microsoft SQL Server. The access software is based on Open Network Computing Remote Procedure Call (ONC RPC) and follows a client/server model. A server program running in the Windows NT computer controls data access. Operations on the database (through a local ODBC connection) are performed according to predefined permission protocols. A client library providing a set of basic functions for data integration and retrieval has been built in both static and dynamic link versions. The dynamic version is essential in accessing RDB data from 4GL environments (IDL and PV-WAVE among others)

  11. Lecture 9: Oracle Databases at CERN

    CERN Multimedia

    CERN. Geneva; Limper, Maaike

    2013-01-01

    She participated in the analysis of the first LHC data in a variety of ways: she worked on the construction of the ATLAS silicon tracker, wrote new data reconstruction software and developed some of the databases that store information on the ATLAS data-taking conditions. As of January 2012, Maaike joined the CERN IT Databases group as a CERN openlab Fellow funded by Oracle to help investigate the possib...

  12. Analysis of Cloud-Based Database Systems

    Science.gov (United States)

    2015-06-01

    deploying the VM, we installed SQL Server 2014 relational database management software (RDBMS) and restored a copy of the PYTHON database onto the server ... management views within SQL Server, we retrieved lists of the most commonly executed queries, the percentage of reads versus writes, as well as ... Monitor. This gave us data regarding resource utilization and queueing. The second tool we used was the SQL Server Profiler provided by Microsoft
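    The dynamic management views mentioned can be queried directly; sys.dm_exec_query_stats is the standard SQL Server source for execution counts and logical read/write volumes. A sketch of such a pull from Python (connection parameters are placeholders):

```python
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=myserver;DATABASE=master;Trusted_Connection=yes")

# Most frequently executed statements with their read/write volumes.
sql = """
SELECT TOP 10 qs.execution_count,
       qs.total_logical_reads,
       qs.total_logical_writes,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;
"""
for count, reads, writes, text in conn.execute(sql):
    print(count, reads, writes, text[:60])
```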

  13. Methods and Software for Building Bibliographic Data Bases.

    Science.gov (United States)

    Daehn, Ralph M.

    1985-01-01

    This in-depth look at database management systems (DBMS) for microcomputers covers data entry, information retrieval, security, DBMS software and design, and downloading of literature search results. The advantages of in-house systems versus online search vendors are discussed, and specifications of three software packages and 14 sources are…

  14. Simulation software support (S3) system a software testing and debugging tool

    International Nuclear Information System (INIS)

    Burgess, D.C.; Mahjouri, F.S.

    1990-01-01

    The largest percentage of technical effort in the software development process is accounted for by debugging and testing. It is not unusual for a software development organization to spend over 50% of the total project effort on testing. In the extreme, testing of human-rated software (e.g., nuclear reactor monitoring, training simulators) can cost three to five times as much as all other software engineering steps combined. The Simulation Software Support (S3) System, developed by the Link-Miles Simulation Corporation, is ideally suited for real-time simulation applications which involve a large database with models programmed in FORTRAN. This paper focuses on the testing elements of the S3 system. System support software utilities are provided which enable the loading and execution of modules in the development environment. These elements include the Linking/Loader (LLD) for dynamically linking program modules and loading them into memory, and the interactive executive (IEXEC) for controlling the execution of the modules. Features of the Interactive Symbolic Debugger (SD) and the Real Time Executive (RTEXEC) that support unit and integrated testing will be explored

  15. Database implementation to fluidized cracking catalytic-FCC process

    International Nuclear Information System (INIS)

    Santana, Antonio Otavio de; Dantas, Carlos Costa; Santos, Valdemir A. dos

    2009-01-01

    A process of Fluidized Cracking Catalytic (FCC) was developed by our research group. A cold-model FCC unit, at laboratory scale, was used to obtain data on the following parameters: air flow, system pressure, riser inlet pressure, riser outlet pressure, pressure drop in the riser, motor speed of catalyst injection, and density. The density is measured by gamma-ray transmission. Because the FCC process had no database until now, the present work remedies this deficiency by implementing a database in connection with the Matlab software. The data from the FCC unit (laboratory model) are obtained as MS-Excel spreadsheets. These spreadsheets were treated before being imported as database tables. Applying database normalization and analysing the treated spreadsheets with MS-Access revealed that a single relation (table) suffices to represent the database. MS-Access was chosen as the database management system (DBMS) because it satisfies our data flow. The next step was the creation of the database: building the data table, the action and selection queries, and the macro for importing data from the FCC unit under study. An interface between the 'Database Toolbox' application (Matlab 2008a) and the database was also created, through ODBC (Open Database Connectivity) drivers. This interface allows users operating in Matlab to manipulate the database. (author)

  16. Database implementation to fluidized cracking catalytic-FCC process

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Antonio Otavio de; Dantas, Carlos Costa, E-mail: aos@ufpe.b [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear; Santos, Valdemir A. dos, E-mail: valdemir.alexandre@pq.cnpq.b [Universidade Catolica de Pernambuco, Recife, PE (Brazil). Centro de Ciencia e Tecnologia

    2009-07-01

    A process of Fluidized Cracking Catalytic (FCC) was developed by our research group. A cold-model FCC unit, at laboratory scale, was used to obtain data on the following parameters: air flow, system pressure, riser inlet pressure, riser outlet pressure, pressure drop in the riser, motor speed of catalyst injection, and density. The density is measured by gamma-ray transmission. Because the FCC process had no database until now, the present work remedies this deficiency by implementing a database in connection with the Matlab software. The data from the FCC unit (laboratory model) are obtained as MS-Excel spreadsheets. These spreadsheets were treated before being imported as database tables. Applying database normalization and analysing the treated spreadsheets with MS-Access revealed that a single relation (table) suffices to represent the database. MS-Access was chosen as the database management system (DBMS) because it satisfies our data flow. The next step was the creation of the database: building the data table, the action and selection queries, and the macro for importing data from the FCC unit under study. An interface between the 'Database Toolbox' application (Matlab 2008a) and the database was also created, through ODBC (Open Database Connectivity) drivers. This interface allows users operating in Matlab to manipulate the database. (author)
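    The same ODBC bridge the authors set up for Matlab's Database Toolbox can be reproduced from Python; in the sketch below the driver string is the stock Windows Access driver, while the file path, table and column names are placeholders for the FCC data relation.

```python
import pyodbc

conn = pyodbc.connect(r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
                      r"DBQ=C:\data\fcc_unit.accdb")

# Pull runs above a density threshold from the single normalized relation.
rows = conn.execute("SELECT air_flow, riser_dp, density FROM fcc_runs "
                    "WHERE density > ?", 350.0)
for air_flow, riser_dp, density in rows:
    print(air_flow, riser_dp, density)
```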

  17. Construction of Database for Pulsating Variable Stars

    Science.gov (United States)

    Chen, B. Q.; Yang, M.; Jiang, B. W.

    2011-07-01

    A database of pulsating variable stars has been constructed so that Chinese astronomers can study variable stars conveniently. The database currently includes about 230,000 variable stars in the Galactic bulge, LMC and SMC observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided to search the photometric data and light curves in the database through the right ascension and declination of the object. More data will be incorporated into the database.
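    A search by right ascension and declination, as offered by the web page, is at its simplest a box query over coordinate columns. A minimal sketch (SQLite stands in for MySQL; table and column names are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE stars (star_id TEXT, ra REAL, dec REAL)")
db.execute("INSERT INTO stars VALUES ('OGLE-LMC-CEP-0001', 74.80, -69.33)")

ra0, dec0, box = 74.8, -69.3, 0.1   # search centre and half-width, degrees
hits = db.execute("SELECT star_id, ra, dec FROM stars "
                  "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
                  (ra0 - box, ra0 + box, dec0 - box, dec0 + box)).fetchall()
print(hits)
```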

  18. Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.

    Science.gov (United States)

    Gutmann, Myron P.; And Others

    1989-01-01

    Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)

  19. 77 FR 71089 - Pilot Loading of Aeronautical Database Updates

    Science.gov (United States)

    2012-11-29

    ...) card, rather than in resident memory. The database update was accomplished by removing the SD card with... frequency distance measuring equipment (DME), and any updates that affect system operating software--that... developed with attention to data integrity. Current technology uses databases which are developed in...

  20. Microcomputer Database Management Systems that Interface with Online Public Access Catalogs.

    Science.gov (United States)

    Rice, James

    1988-01-01

    Describes a study that assessed the availability and use of microcomputer database management interfaces to online public access catalogs. The software capabilities needed to effect such an interface are identified, and available software packages are evaluated by these criteria. A directory of software vendors is provided. (4 notes with…

  1. NUCDAS: a database of nuclear criticality data around the world

    International Nuclear Information System (INIS)

    Komuro, Yuichi; Sakai, Tomohiro

    1999-01-01

    The NUCDAS database, which contains a large number of nuclear criticality data and subcritical limit data described in the criticality safety handbooks of Japan and foreign countries, has been developed at JAERI. The database was designed to perform quick searches on criticality data and subcritical limits and to draw their curves for comparison, so criticality data from different handbooks can be shown on the screen and/or printed on paper. The database runs on the Apple Macintosh computer and is written in 4th Dimension, a relational database software for the Macintosh. This tool provides powerful search and sort capabilities. An appropriate graphics package (e.g. KaleidaGraph) is used to draw a graph of selected criticality data. NUCDAS will be demonstrated in the poster presentation. NUCEF'98 participants who are interested in NUCDAS will be able to operate the Macintosh with the database and are encouraged to give us comments on it for modifications. Though all messages on the screen are written in Japanese, don't worry. (author)

  2. NoSQL technologies for the CMS Conditions Database

    Science.gov (United States)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; the need for consistent and highly available access to the Conditions therefore makes a strong case for revisiting different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, a single condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to give an accurate baseline, a testing-framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes of a document store, a column-oriented store and a plain key-value store were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, deployment and automation of the databases are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
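    One plausible reading of the "special treatment (splitting)" of large BLOBs: a payload above the backend's value-size limit is stored as numbered chunks under derived keys plus a small metadata record. A dict stands in for the key-value store here, and the key names are invented.

```python
CHUNK = 4  # bytes per chunk -- absurdly small, for demonstration only
store = {}

def put_blob(key: str, payload: bytes) -> None:
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    store[key] = {"n_chunks": len(chunks)}      # metadata record
    for i, c in enumerate(chunks):
        store[f"{key}#{i}"] = c                 # chunk records

def get_blob(key: str) -> bytes:
    n = store[key]["n_chunks"]
    return b"".join(store[f"{key}#{i}"] for i in range(n))

put_blob("EcalPedestals/run273158", b"binary-conditions-payload")
assert get_blob("EcalPedestals/run273158") == b"binary-conditions-payload"
```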

  3. Federated Database Services for Wind Tunnel Experiment Workflows

    Directory of Open Access Journals (Sweden)

    A. Paventhan

    2006-01-01

    Full Text Available Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support effective data management, near-realtime data movement and custom data processing. Many existing solutions exploit the database as a passive metadata catalog. In this paper, we present an approach that makes use of federation of databases to host data-centric wind tunnel application workflows. The user is able to compose customized application workflows based on database services. We provide a reference implementation that leverages typical business tools and technologies: Microsoft SQL Server for database services and Windows Workflow Foundation for workflow services. The application data and user's code are both hosted in federated databases. With the growing interest in XML Web Services in scientific Grids, and with databases beginning to support native XML types and XML Web services, we can expect the role of databases in scientific computation to grow in importance.

  4. Data-base tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    The authors use a commercial data-base software package to create several data-base products that enhance the ability of experimental physicists to analyze data from the TMX-U experiment. This software resides on a Dec-20 computer in M-Division's user service center (USC), where data can be analyzed separately from the main acquisition computers. When these data-base tools are combined with interactive data analysis programs, physicists can perform automated (batch-style) processing or interactive data analysis on the computers in the USC or on the supercomputers of the NMFECC, in addition to the normal processing done on the acquisition system. One data-base tool provides highly reduced data for searching and correlation analysis of several diagnostic signals for a single shot or many shots. A second data-base tool provides retrieval and storage of unreduced data for detailed analysis of one or more diagnostic signals. The authors report how these data-base tools form the core of an evolving off-line data-analysis environment on the USC computers

  5. Geologic Field Database

    Directory of Open Access Journals (Sweden)

    Katarina Hribernik

    2002-12-01

    Full Text Available The purpose of the paper is to present the field-data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia at the scale 1:100.000. The database was created using MS Access software. The MS Access environment ensures stability and effective operation when changing, searching, and updating the data. It also enables faster, easier and user-friendly access to the field data. Last but not least, in the long term, with the data transferred into the GIS environment, it will provide the basis for a sound geologic information system that will satisfy a broad spectrum of geologists’ needs.

  6. The GOLM-database standard- a framework for time-series data management based on free software

    Science.gov (United States)

    Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.

    2009-04-01

    Monitoring and modelling projects usually involve time series data originating from different sources. Often, file formats, temporal resolution and meta-data documentation rarely adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or during the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data becomes available. The resulting duplication of data in various formats consumes additional resources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM (General Observation and Location Management) framework allows for the import and storage of time series data of different types while assisting in meta-data documentation, plausibility checking and harmonization. The imported data can be visually inspected and their coverage among locations and variables may be visualized. Supplementing scripts provide options for data export for selected stations and variables and for resampling of the data to the desired temporal resolution, as sketched below. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. All involved software, such as MySQL, R, PHP and OpenOffice, as well as the scripts for building and using the database, including documentation, are free for download. GOLM was developed out of the practical requirements of the OPAQUE project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
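    GOLM itself builds on MySQL, R and PHP; purely to illustrate the resampling step its supplementing scripts provide, here is the same operation in pandas with made-up rainfall readings:

```python
import pandas as pd

ts = pd.Series([0.2, 0.0, 1.4, 0.6],
               index=pd.to_datetime(["2008-06-01 00:05", "2008-06-01 00:20",
                                     "2008-06-01 00:35", "2008-06-01 00:50"]),
               name="precip_mm")

# Aggregate irregular readings to the desired temporal resolution.
hourly = ts.resample("1h").sum()
print(hourly)
```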

  7. Quality assurance software inspections at NASA Ames: Metrics for feedback and modification

    Science.gov (United States)

    Wenneson, G.

    1985-01-01

    Software inspections, a set of formal technical review procedures held at selected key points during software development in order to find defects in software documents, are described in terms of history, participants, tools, procedures, statistics, and database analysis.

  8. OLIO+: an osteopathic medicine database.

    Science.gov (United States)

    Woods, S E

    1991-01-01

    OLIO+ is a bibliographic database designed to meet the information needs of the osteopathic medical community. Produced by the American Osteopathic Association (AOA), OLIO+ is devoted exclusively to the osteopathic literature. The database is available only by subscription through AOA and may be accessed from any data terminal with modem or IBM-compatible personal computer with telecommunications software that can emulate VT100 or VT220. Apple access is also available, but some assistance from OLIO+ support staff may be necessary to modify the Apple keyboard.

  9. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement

  10. A Relational Algebra Query Language for Programming Relational Databases

    Science.gov (United States)

    McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole

    2011-01-01

    In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
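    RAQL's actual syntax is not shown in this abstract; the following toy conveys the flavour of teaching relational algebra by programming, with operators as plain functions over relations modelled as lists of dicts:

```python
def select(relation, predicate):            # sigma
    return [t for t in relation if predicate(t)]

def project(relation, attrs):               # pi
    return [{a: t[a] for a in attrs} for t in relation]

def join(r, s, attr):                       # natural join on one attribute
    return [{**t, **u} for t in r for u in s if t[attr] == u[attr]]

students = [{"sid": 1, "name": "Ada"}, {"sid": 2, "name": "Alan"}]
enrolled = [{"sid": 1, "course": "DB101"}, {"sid": 2, "course": "CS50"}]

print(project(select(join(students, enrolled, "sid"),
                     lambda t: t["course"] == "DB101"),
              ["name"]))  # [{'name': 'Ada'}]
```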

  11. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1 : ASC software quality engineering practices version 1.0.

    Energy Technology Data Exchange (ETDEWEB)

    Minana, Molly A.; Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2005-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.

  12. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish "Pathology Registry", the "National Patient Registry", and the "Cause of Death Registry" using the unique...... Danish personal identification number (CPR number). DESCRIPTIVE DATA: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation...

  13. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1983-01-01

    Database management techniques are applied to automating the setup of operating parameters of a heavy-ion accelerator used in nuclear physics experiments. Data files consist of ion-beam attributes, the interconnection assignments of the numerous power supplies and magnetic elements that steer the ions' path through the system, the data values that represent the electrical currents supplied by the power supplies, as well as the positions of motors and status of mechanical actuators. The database is relational and permits searching on ranges of any subset of the ion-beam attributes. A file selected from the database is used by the control software to replicate the ion beam conditions by adjusting the physical elements in a continuous manner

  14. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

    Well-designed visual interfaces encourage non-expert users to use relational database systems. In systems such as office automation systems or engineering database systems, non-expert users interactively access the database from visual terminals. Depending on the situation, some users may want exclusive use of the database while other users may share it. Because those jobs need a lot of time to complete, concurrency control must be well designed to enhance concurrency. The extended concurrency-control method of FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network and the database machine FREND. This paper also stresses that those workstations and FREND must cooperate to complete concurrency control for interactive applications

  15. Completion of autobuilt protein models using a database of protein fragments

    International Nuclear Information System (INIS)

    Cowtan, Kevin

    2012-01-01

    Two developments in the process of automated protein model building in the Buccaneer software are described: the use of a database of protein fragments in improving the model completeness and the assembly of disconnected chain fragments into complete molecules. Two developments in the process of automated protein model building in the Buccaneer software are presented. A general-purpose library for protein fragments of arbitrary size is described, with a highly optimized search method allowing the use of a larger database than in previous work. The problem of assembling an autobuilt model into complete chains is discussed. This involves the assembly of disconnected chain fragments into complete molecules and the use of the database of protein fragments in improving the model completeness. Assembly of fragments into molecules is a standard step in existing model-building software, but the methods have not received detailed discussion in the literature

  16. Computer-aided diagnosis system for bone scintigrams from Japanese patients: importance of training database

    DEFF Research Database (Denmark)

    Horikoshi, Hiroyuki; Kikuchi, Akihiro; Onoguchi, Masahisa

    2012-01-01

    higher performance than the corresponding CAD software trained with a European database for the analysis of bone scans from Japanese patients. These results could at least partly be caused by the physical differences between Japanese and European patients resulting in less influence of attenuation......Computer-aided diagnosis (CAD) software for bone scintigrams has recently been introduced as a clinical quality assurance tool. The purpose of this study was to compare the diagnostic accuracy of two CAD systems, one based on a European and one on a Japanese training database, in a group of bone...... scans from Japanese patients. The two CAD systems are trained to interpret bone scans using training databases consisting of bone scans with the desired interpretation, metastatic disease or not. One system was trained using 795 bone scans from European patients and the other with 904 bone scans from

  17. A unique database for gathering data from a mobile app and medical prescription software: a useful data source to collect and analyse patient-reported outcomes of depression and anxiety symptoms.

    Science.gov (United States)

    Watanabe, Yoshinori; Hirano, Yoko; Asami, Yuko; Okada, Maki; Fujita, Kazuya

    2017-11-01

    A unique database named 'AN-SAPO' was developed by Iwato Corp. and Japan Brain Corp. in collaboration with the psychiatric clinics run by Himorogi Group in Japan. The AN-SAPO database includes patients' depression/anxiety score data from a mobile app named AN-SAPO and medical records from medical prescription software named 'ORCA'. On the mobile app, depression/anxiety severity can be evaluated by answering 20 brief questions and the scores are transferred to the AN-SAPO database together with the patients' medical records on ORCA. Currently, this database is used at the Himorogi Group's psychiatric clinics and has over 2000 patients' records accumulated since November 2013. Since the database covers patients' demographic data, prescribed drugs, and the efficacy and safety information, it could be a useful supporting tool for decision-making in clinical practice. We expect it to be utilised in wider areas of medical fields and for future pharmacovigilance and pharmacoepidemiological studies.

  18. Simplified validation of borderline hits of database searches

    OpenAIRE

    Thomas, Henrik; Shevchenko, Andrej

    2008-01-01

    Along with unequivocal hits produced by matching multiple MS/MS spectra to database sequences, LC-MS/MS analysis often yields a large number of hits of borderline statistical confidence. To simplify their validation, we propose to use rapid de novo interpretation of all acquired MS/MS spectra and, with the help of a simple software tool, display the candidate sequences together with each database search hit. We demonstrate that comparing hit database sequences and independent de novo interpre...

  19. Query Processing and Interlinking of Fuzzy Object-Oriented Database

    OpenAIRE

    Shweta Dwivedi; Santosh Kumar

    2017-01-01

    Due to the many limitations and poor data handling of existing relational databases, software professionals and researchers have moved towards object-oriented databases, which are much better at handling real and complex real-world data, i.e. clear and crisp data, and are also capable of performing large and complex queries effectively. On the other hand, a new approach in databases has been introduced, named the Fuzzy Object-Oriented Database (FOOD); it has all the ...

  20. Functional modelling for integration of human-software-hardware in complex physical systems

    International Nuclear Information System (INIS)

    Modarres, M.

    1996-01-01

    A framework describing the properties of complex physical systems composed of human-software-hardware interactions in terms of their functions is described. It is argued that such a framework is domain-general, so that functional primitives present a language that is more general than most other modeling methods such as mathematical simulation. The characteristics and types of functional models are described. Examples of uses of the framework in modeling physical systems composed of human-software-hardware (hereafter referred to simply as physical systems) are presented. It is concluded that a function-centered model of a physical system provides a capability for generating a high-level simulation of the system for intelligent diagnostic, control or other similar applications.

  1. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as the development toolkit of the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created with tools such as VDCT or a text editor on the host, then loaded into the front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there is no uniform tool providing transparent management. This paper first presents the current status of EPICS database management in many labs. Second, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)
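
    As an illustration of the kind of ORACLE-to-EPICS interface discussed here, the sketch below renders rows of a relational signal table as EPICS record definitions (.db syntax). It is a minimal sketch, not the authors' tool: the table name, column layout and record fields are assumptions, and sqlite3 stands in for ORACLE so the example is self-contained.

      import sqlite3

      # Stand-in for the ORACLE signal table; names and columns are assumptions.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE signals (name TEXT, rec_type TEXT, descr TEXT, scan TEXT)")
      conn.execute("INSERT INTO signals VALUES ('BPM01:X', 'ai', 'Beam position X', '1 second')")

      def emit_epics_db(conn):
          """Render each signal row as an EPICS record definition (.db syntax)."""
          out = []
          for name, rec_type, descr, scan in conn.execute("SELECT * FROM signals"):
              out.append(f'record({rec_type}, "{name}")\n'
                         f'{{\n    field(DESC, "{descr}")\n    field(SCAN, "{scan}")\n}}')
          return "\n".join(out)

      print(emit_epics_db(conn))  # the output is what dbLoadRecords() would load into an IOC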

  2. Reconfiguring film studies through software cinema and procedural spectatorship

    Directory of Open Access Journals (Sweden)

    Marina Hassapopoulou

    2014-12-01

    Full Text Available The increasing use of software and database aesthetics in film and video production has created hybrid modes of spectatorship by altering the dynamic between media production and reception. Software-generated narratives (preprogrammed databases that create films through random selection and combination of discrete audio, visual, and/or textual tracks) remove the viewer from the actual algorithmic process, drawing his/her attention instead to interactions between hardware and software. Here, the element of unpredictability that is part of cinematic pleasure lies in the recombination of discrete elements (audio, visuals, subtitles, and so on) and the unexpected ways in which the software stitches those elements together. The subsequent reduction in the degree and compass of authorial control invites us to reconsider existing frameworks of spectatorship and narration within new contexts of mobility, performance, and databases. In this article I consider Soft Cinema films (Lev Manovich, Andreas Kratky, et al., 2003) as prototypical software-driven examples of this shift in viewing conditions and reception contexts. I argue that, despite its emerging and changing techniques and aesthetics, software-generated cinema retains one of the primitive socio-pedagogical functions of the cinema: training audiences to receive and buffer contemporary medial sensations. Just as early cinema prepared audiences and worked as a buffer for the shocks of technological and industrial modernity, software cinema trains the viewer in new modes of film spectatorship and new modes of narrative and affective subjectivity that correspond to the hypertextual ways in which we interact with digital technologies. These viewing modes create a new form of procedural spectatorship that has been evident since the first pioneering experiments in generative cinema and a form that is, nonetheless, not entirely detached from existing theoretical paradigms of cinematic spectatorship and the ...

  3. ESPSD, Nuclear Power Plant Siting Database

    International Nuclear Information System (INIS)

    Slezak, S.

    2001-01-01

    1 - Description of program or function: This database is a repository of comprehensive licensing and technical reviews of siting regulatory processes and acceptance criteria for advanced light water reactor (ALWR) nuclear power plants. The program is designed to be used by applicants for an early site permit or a combined construction permit/operating license (10 CFR Part 52, Subparts A and C) as input for the development of the application. The database is a complete, menu-driven, self-contained package that can search and sort the supplied data by topic, keyword, or other input. The software is designed for operation on IBM-compatible computers with DOS. 2 - Method of solution: The database is an R:BASE Runtime program with all the necessary database files included.

  4. Development of a fatigue analysis software system

    International Nuclear Information System (INIS)

    Choi, B. I.; Lee, H. J.; Han, S. W.; Kim, J. Y.; Hwang, K. H.; Kang, J. Y.

    2001-01-01

    A general-purpose fatigue analysis software package to predict the fatigue lives of mechanical components and structures was developed. This software has some characteristic features, including functions for searching weak regions on the free surface in order to reduce computing time significantly, a database of fatigue properties for various materials, and an expert system which can assist any user in obtaining more appropriate results. This software can be used in an environment consisting of commercial finite element packages. Using the software, fatigue analyses for a SAE keyhole specimen and an automobile knuckle were carried out. It was observed that the results agreed well with those from commercial packages.

  5. Analysis, Design and Implementation of a Web Database With Oracle 8I

    National Research Council Canada - National Science Library

    Demiryurek, Ugur

    2001-01-01

    ....O served as the OS environment. From the technical aspect, Database Management Systems, Web-Database Architectures, Server Extension Programs, Oracle8i as well as several other software and hardware...

  6. An analysis software of tritium distribution in food and environmental water in China

    International Nuclear Information System (INIS)

    Li Wenhong; Xu Cuihua; Ren Tianshan; Deng Guilong

    2006-01-01

    Objective: The purpose of developing this analysis software for tritium distribution in food and environmental water is to collect tritium monitoring data; to analyze the data automatically, statistically and graphically; and to study and share the data. Methods: Based on the data obtained before, the analysis software was written using VC++.NET as the development tool. The software first transfers data from EXCEL into a database. It also has a data-append function, so operators can easily add new monitoring data. Results: Once the monitoring data saved as EXCEL files by the original researchers are turned into a database, people can easily access them. The software provides a tool for distribution analysis of tritium. Conclusion: This software is a first attempt at data analysis of tritium levels in food and environmental water in China. Data acquisition, searching and analysis become easy and direct with the software. (authors)
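
    A minimal sketch of the EXCEL-to-database transfer and append step described above, assuming a spreadsheet with 'location', 'date' and 'tritium_bq_per_l' columns (the file and column names are hypothetical); pandas and sqlite3 stand in for the tools actually used.

      import sqlite3
      import pandas as pd  # pip install pandas openpyxl

      def import_monitoring_sheet(xlsx_path: str, db_path: str = "tritium.db") -> int:
          """Append one EXCEL sheet of tritium measurements to the database."""
          frame = pd.read_excel(xlsx_path)  # columns: location, date, tritium_bq_per_l
          with sqlite3.connect(db_path) as conn:
              # if_exists="append" gives the data-append behaviour described above
              frame.to_sql("measurements", conn, if_exists="append", index=False)
          return len(frame)

      # rows_added = import_monitoring_sheet("survey_2005.xlsx")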

  7. "On-screen" writing and composing: two years experience with Manuscript Manager, Apple II and IBM-PC versions.

    Science.gov (United States)

    Offerhaus, L

    1989-06-01

    The problems of the direct composition of a biomedical manuscript on a personal computer are discussed. Most word processing software is unsuitable because literature references, once stored, cannot be rearranged if major changes are necessary. These obstacles have been overcome in Manuscript Manager, a combination of word processing and database software. As it follows Council of Biology Editors and Vancouver rules, the printouts should be technically acceptable to most leading biomedical journals.

  8. Implementation of dragon-I database system based on B/S model

    International Nuclear Information System (INIS)

    Jiang Wei; Lai Qinggui; Chen Nan; Gao Feng

    2010-01-01

    B/S architecture is utilized in the database system of 'Dragon-I'. The dynamic web software is designed with ASP.NET technology, and the web software is divided into three main tiers: a user interface tier, a business logic tier and a data access tier. The data on accelerator status and the data generated in experiment processes are managed with the SQL Server DBMS, and the database is accessed using ADO.NET. The facility status, control parameters and test waveforms are queried by experiment number and experiment time. The requirements for storage, management, browsing, querying and offline analysis are implemented entirely in this database system based on B/S architecture. (authors)

  9. Automated testing of arrhythmia monitors using annotated databases.

    Science.gov (United States)

    Elghazzawi, Z; Murray, W; Porter, M; Ezekiel, E; Goodall, M; Staats, S; Geheb, F

    1992-01-01

    Arrhythmia-algorithm performance is typically tested using the AHA and MIT/BIH databases. The tools for this test are simulation software programs. While these simulations provide rapid results, they neglect hardware and software effects in the monitor. To provide a more accurate measure of performance in the actual monitor, a system has been developed for automated arrhythmia testing. The testing system incorporates an IBM-compatible personal computer, a digital-to-analog converter, an RS232 board, a patient-simulator interface to the monitor, and a multi-tasking software package for data conversion and communication with the monitor. This system "plays" patient data files into the monitor and saves beat classifications in detection files. Tests were performed using the MIT/BIH and AHA databases. Statistics were generated by comparing the detection files with the annotation files. These statistics were marginally different from those that resulted from the simulation. Differences were then examined. As expected, the differences were related to monitor hardware effects.
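
    The statistics step, comparing detection files against the reference annotation files, can be sketched as below; matching a detected beat to an annotated beat within a small time tolerance is a common convention, though the tolerance value and file layout here are assumptions, not details from the study.

      def beat_detection_stats(annotated_ms, detected_ms, tol_ms=150):
          """Compare detected beat times against annotated beat times.

          Returns (sensitivity, positive_predictivity); a detection matches an
          annotation if it falls within tol_ms of it, each used at most once.
          """
          unmatched = sorted(detected_ms)
          true_pos = 0
          for t in sorted(annotated_ms):
              hit = next((d for d in unmatched if abs(d - t) <= tol_ms), None)
              if hit is not None:
                  true_pos += 1
                  unmatched.remove(hit)
          sensitivity = true_pos / len(annotated_ms) if annotated_ms else 0.0
          ppv = true_pos / len(detected_ms) if detected_ms else 0.0
          return sensitivity, ppv

      # beat_detection_stats([100, 900, 1700], [110, 1690, 2500]) -> (0.667, 0.667)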

  10. Common software for controlling and monitoring the upgraded CMS Level-1 trigger

    CERN Document Server

    Codispoti, Giuseppe

    2017-01-01

    The Large Hadron Collider restarted in 2015 with a higher centre-of-mass energy of 13 TeV. The instantaneous luminosity is expected to increase significantly in the coming years. An upgraded Level-1 trigger system was deployed in the CMS experiment in order to maintain the same efficiencies for searches and precision measurements as those achieved in 2012. This system must be controlled and monitored coherently through software, with high operational efficiency. The legacy system was composed of a large number of custom data processor boards; correspondingly, only a small fraction of the software was common between the different subsystems. The upgraded system is composed of a set of general purpose boards, that follow the MicroTCA specification, and transmit data over optical links, resulting in a more homogeneous system. The associated software is based on generic components corresponding to the firmware blocks that are shared across different cards, regardless of the role that the card plays in the system. ...

  11. Database tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed off line from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving off-line data analysis environment on the USC computers

  12. Database tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed offline from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving offline data analysis environment on the USC computers

  13. Database design for Physical Access Control System for nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Sathishkumar, T., E-mail: satishkumart@igcar.gov.in; Rao, G. Prabhakara, E-mail: prg@igcar.gov.in; Arumugam, P., E-mail: aarmu@igcar.gov.in

    2016-08-15

    Highlights: • The database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated in the access control system developed in-house. - Abstract: An RFID (Radio Frequency IDentification) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [access controller] and software components [the server application, the database and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). The design also defines the mapping between Employee Groups (EG) and AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.
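
    A minimal sketch of the grouping idea described above: instead of mapping every employee to every door, employees map to Employee Groups and doors to Access Zones, and a single EG-to-AZ table carries the access policy. Table and column names are illustrative, not taken from the paper.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE employee       (emp_id INTEGER PRIMARY KEY, name TEXT, eg_id INTEGER);
      CREATE TABLE employee_group (eg_id INTEGER PRIMARY KEY, label TEXT);
      CREATE TABLE door           (door_id INTEGER PRIMARY KEY, label TEXT, az_id INTEGER);
      CREATE TABLE access_zone    (az_id INTEGER PRIMARY KEY, label TEXT);
      -- one compact mapping replaces thousands of employee-to-door records
      CREATE TABLE eg_az          (eg_id INTEGER, az_id INTEGER, PRIMARY KEY (eg_id, az_id));
      """)

      def is_access_allowed(emp_id: int, door_id: int) -> bool:
          """Resolve employee -> group -> zone -> door through the eg_az mapping."""
          row = conn.execute("""
              SELECT 1 FROM employee e
              JOIN eg_az m ON m.eg_id = e.eg_id
              JOIN door d  ON d.az_id = m.az_id
              WHERE e.emp_id = ? AND d.door_id = ?""", (emp_id, door_id)).fetchone()
          return row is not None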

  14. Database design for Physical Access Control System for nuclear facilities

    International Nuclear Information System (INIS)

    Sathishkumar, T.; Rao, G. Prabhakara; Arumugam, P.

    2016-01-01

    Highlights: • The database design needs to be optimized and highly efficient for real-time operation. • It requires a many-to-many mapping between the Employee table and the Doors table. • This mapping typically contains thousands of records and redundant data. • The proposed novel database design reduces the redundancy and provides abstraction. • This design is incorporated in the access control system developed in-house. - Abstract: An RFID (Radio Frequency IDentification) cum biometric based two-level Access Control System (ACS) was designed and developed for providing access to vital areas of nuclear facilities. The system has both hardware [access controller] and software components [the server application, the database and the web client software]. The proposed database design enables grouping of the employees based on the hierarchy of the organization and grouping of the doors based on Access Zones (AZ). The design also defines the mapping between Employee Groups (EG) and AZ. By following this approach in database design, a higher-level view can be presented to the system administrator, abstracting the inner details of the individual entities and doors. This paper describes the novel approach carried out in designing the database of the ACS.

  15. Software design for the EBT-P data acquisition and control system R and D

    International Nuclear Information System (INIS)

    Boyd, R.A.

    1983-01-01

    The instrumentation and control system for the EBT-P device is composed of a hierarchy of programmable logic controllers, microprocessor-based data acquisition computers, and a large minicomputer-based facility computer system. The software being developed to support this data acquisition and control system is necessarily quite complex due to several requirements imposed upon the EBT-P overall design criteria. These requirements, which include such considerations as overall reliability, operator interface, real-time display, interprocessor communication, and minimum cost to build, operate, and maintain, dictate that the software be developed in a well structured and controlled manner. To this end, structured software engineering practices are being applied to the design and development of the EBT-P data acquisition and control software. The design process began with the production of a software Requirements Document which describes the hardware and software environment in which the software development takes place. It identifies the major deliverable software items to be produced and describes the practices to be used to design and develop the software. The software design is split into three components: the facility computer software, the microcomputer software, and the PLC software. Within these physical boundaries, the following five functions are defined: data acquisition, display, communication, storage, and control. The software design is further detailed in a Structured Specification Document for each of the three physical components. Each specification describes the software in detailed terms so that a programmer can directly write the required software. Each specification is composed of data flow diagrams, a data dictionary, structure diagrams, and program design language mini-specifications. Examples of the design issues exposed and addressed during the structured decomposition of EBT-P software processes are discussed in detail.

  16. An expert system based software sizing tool, phase 2

    Science.gov (United States)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.

  17. Software for improved field surveys of nesting marine turtles.

    Science.gov (United States)

    Anastácio, R; Gonzalez, J M; Slater, K; Pereira, M J

    2017-09-07

    Field data are still recorded on paper in many worldwide beach surveys of nesting marine turtles. The data must subsequently be transferred into an electronic database, and this can introduce errors into the dataset. To minimize such errors, the "Turtles" software was developed and piloted for recording field data, with one software user accompanying one Tortuguero on Akumal beaches, Quintana Roo, Mexico, from June 1st to July 31st during the night patrols. Data exported from the software were compared with data from the paper forms entered into a database (henceforth the traditional method). Preliminary assessment indicated that the software user tended to record a greater number of metrics (an average of 18.3 fields ± 5.4 sd vs. 8.6 fields ± 2.1 sd recorded by the traditional method). The traditional method introduced three types of "errors" into the dataset: missing values in relevant fields (40.1%), different answers for the same value (9.8%), and inconsistent data (0.9%). Only 5.8% of these (missing values) were found with the software methodology. Although only tested by a single user, the software suggests increased efficacy and warrants further examination to accurately assess the merit of replacing traditional methods of data recording in beach monitoring programmes.

  18. Nuclear Material Accounting and Reporting Software for India

    International Nuclear Information System (INIS)

    Sankaran Nair, P.; Gangotra, S.; Chebolu, S.V.; Karanam, R.

    2015-01-01

    India has an item-specific Safeguards Agreement, INFCIRC/754, with the IAEA and a nuclear material accounting system which generates the monthly reports for the Agency promptly. Subsequent to the entry into force of subsidiary arrangements to INFCIRC/754, and as part of the implementation of the new reporting structure, India is developing new software to cater to its NUMAC reporting requirements. This paper gives the details of the software, which covers both the reporting by each facility to the Nuclear Controls & Planning Wing (NCPW) and the State-level reports to the IAEA. The software is being developed on Linux (Ubuntu), a MySQL database and PHP. All components are based on open source software, and the system is developed as two modules. The first module is for facilities and the second is for State-level reports. The application has multi-level security for both modules; additionally, the facility-level module is hardware interlocked. The facility reporter module generates a pdf file for the facility authority to sign, authenticate and file in hard copy. It can generate another xml file with encryption, which can be sent to the State authority. In the State-level module, the State authority generates reports for the Agency from the xml file so received (after decryption and verification with the facility on receipt of the signed hard copy) and also appends the information to the national database. The national database holds all the information, whereas each facility database holds only local information. The State module in turn generates a pdf file, authenticates it with the signature of the authorized signatory (either in hard-copy form or in electronic form with PGP encryption, as required by the IAEA) and sends it to the Agency. All necessary security precautions are taken to protect the entire NUMAC and safeguards information. (author)
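
    A minimal sketch of the facility-to-State exchange described above: the report is serialized as XML and encrypted for the State authority with PGP. The element names are invented for illustration, and the python-gnupg binding (assuming a local GnuPG keyring holding the recipient's key) stands in for whatever encryption layer the real system uses.

      import xml.etree.ElementTree as ET
      import gnupg  # pip install python-gnupg

      def build_facility_report(facility_id: str, entries: list) -> bytes:
          """Serialize one month's inventory-change entries as XML (invented schema)."""
          root = ET.Element("facility_report", attrib={"facility": facility_id})
          for e in entries:
              ET.SubElement(root, "entry", attrib={k: str(v) for k, v in e.items()})
          return ET.tostring(root, encoding="utf-8")

      def encrypt_for_state(xml_bytes: bytes, recipient: str) -> str:
          gpg = gnupg.GPG()
          return str(gpg.encrypt(xml_bytes, recipient))  # ASCII-armored ciphertext

      # report = build_facility_report("FAC-01", [{"material": "U", "kg": 1.2}])
      # cipher = encrypt_for_state(report, "state-authority@example.org")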

  19. The Human Communication Research Centre dialogue database.

    Science.gov (United States)

    Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J

    1992-10-01

    The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.

  20. Trends in Literacy Software Publication and Marketing: Multicultural Themes.

    Science.gov (United States)

    Balajthy, Ernest

    This article provides data and discussion of multicultural theme-related issues arising from analysis of a detailed database of commercial software products targeted to reading and literacy education. The database consisted of 1152 titles, representing the offerings of 104 publishers and distributors. Of the titles, 62 were identified as having…

  1. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave.

    Science.gov (United States)

    Silva, Ikaro; Moody, George B

    The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.

  2. Construction of the Database for Pulsating Variable Stars

    Science.gov (United States)

    Chen, Bing-Qiu; Yang, Ming; Jiang, Bi-Wei

    2012-01-01

    A database for pulsating variable stars has been constructed to support the study of variable stars in China. The database includes about 230,000 variable stars in the Galactic bulge, the LMC and the SMC, observed over a period of about 10 years by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e. Linux + Apache + MySQL + PHP. A web page is provided for searching the photometric data and light curves in the database by the right ascension and declination of an object. Because of the flexibility of this database, more up-to-date data on variable stars can be incorporated conveniently.
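
    The position search described above amounts to a box query on right ascension and declination; a minimal sketch follows, with sqlite3 standing in for MySQL and an invented 'stars' table.

      import sqlite3

      conn = sqlite3.connect("variables.db")  # hypothetical database file

      def search_by_position(ra_deg: float, dec_deg: float, radius_deg: float = 0.01):
          """Return stars inside a small box around (ra, dec); adequate for tiny
          radii away from the poles, where a true spherical distance is unnecessary."""
          return conn.execute(
              "SELECT star_id, ra, dec FROM stars "
              "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
              (ra_deg - radius_deg, ra_deg + radius_deg,
               dec_deg - radius_deg, dec_deg + radius_deg)).fetchall()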

  3. Software for airborne radiation monitoring system

    International Nuclear Information System (INIS)

    Sheinfeld, M.; Kadmon, Y.; Tirosh, D.; Elhanany, I.; Gabovitch, A.; Barak, D.

    1997-01-01

    The Airborne Radiation Monitoring System monitors radioactive contamination in the air or on the ground. The contamination source can be a radioactive plume or an area contaminated with radionuclides. This system is composed of two major parts: an Airborne Unit carried by a helicopter, and a Ground Station carried by a truck. The airborne software is intended to be the core of a computerized airborne station. The software is written in C++ under MS-Windows with object-oriented methodology. It has been designed to be user-friendly: function keys and other accelerators are used for vital operations, a help file and help topics are available, and the Human-Machine Interface is plain and obvious. (authors)

  4. Software management of the LHC Detector Control Systems

    CERN Document Server

    Varela, F

    2007-01-01

    The control systems of each of the four Large Hadron Collider (LHC) experiments will contain of the order of 150 computers running the back-end applications. These applications will have to be maintained and eventually upgraded during the lifetime of the experiments, ~20 years. This paper presents the centralized software management strategy adopted by the Joint COntrols Project (JCOP) [1], which is based on a central database that holds the overall system configuration. The approach facilitates the integration of different parts of a control system and provides versioning of its various software components. The information stored in the configuration database can eventually be used to restore a computer in the event of failure.

  5. Software management of the LHC detector control systems

    CERN Document Server

    Varela, F

    2007-01-01

    The control systems of each of the four Large Hadron Collider (LHC) experiments will contain of the order of 150 computers running the back-end applications. These applications will have to be maintained and eventually upgraded during the lifetime of the experiments, ~20 years. This paper presents the centralized software management strategy adopted by the Joint COntrols Project (JCOP) [1], which is based on a central database that holds the overall system configuration. The approach facilitates the integration of different parts of a control system and provides versioning of its various software components. The information stored in the configuration database can eventually be used to restore a computer in the event of failure.

  6. Demonstrating the Open Data Repository's Data Publisher: The CheMin Database

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Bristow, T.; Pires, A.; Keller, R. M.; Downs, R. T.; Blake, D.; Dateo, C. E.; Fonda, M.

    2018-04-01

    The Open Data Repository's Data Publisher aims to provide an easy-to-use software tool that will allow researchers to create and publish database templates and related data. The CheMin Database developed using this framework is shown as an example.

  7. Just-in-time Database-Driven Web Applications

    Science.gov (United States)

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  8. Development of a software for the curimeter model cdn102

    International Nuclear Information System (INIS)

    Dotres Llera, Armando

    2001-01-01

    The characteristics of the software for the Curimeter Model CD-N102 developed at CEADEN are presented. The software consists of two main parts: basic software for the electrometer block and application software for a PC. The basic software is totally independent of the PC and performs all the basic functions of the measurement process. The application software is optional and offers a friendlier interface and additional options to the user. Among these are the possibility of keeping a statistical record of the measurements in a database, creating labels, and introducing new isotopes and calibrating them. A more detailed explanation of both kinds of software is given.

  9. Simulation of pellet-cladding interaction with the Pleiades fuel performance software environment

    International Nuclear Information System (INIS)

    Michel, B.; Nonon, C.; Sercombe, J.; Michel, F.; Marelle, V.

    2013-01-01

    This paper focuses on the PLEIADES fuel performance software environment and its application to the modeling of pellet-cladding interaction (PCI). The PLEIADES platform has been under development for 10 years; a unified software environment, including the multidimensional finite element solver CAST3M, has been used to develop eight computation schemes now in operation. Among the latter, the ALCYONE application is devoted to pressurized water reactor fuel rod behavior. This application provides a three-dimensional (3-D) model for a detailed analysis of fuel element behavior and enables validation by comparing simulation results with post-irradiation examination results (cladding residual diameter and ridges, dishing filling, pellet cracking, etc.). In recent years the 3-D computation scheme of the ALCYONE application has been enriched with a complete set of physical models to take into account the thermomechanical and physical-chemical behavior of the fuel element under irradiation. These models have been validated through the ALCYONE application on a large experimental database composed of approximately 400 study cases. The strong point of the ALCYONE application concerns the local approach to stress-corrosion-cracking rupture under PCI, which can be computed with the 3-D finite element solver. Further developments for PCI modeling in the PLEIADES platform are devoted to a new mesh refinement method for assessing stress-and-strain concentration (multigrid technique) and a new component for assessing fission product chemical recombination. (authors)

  10. Software for MUF evaluating in item nuclear material accounting

    International Nuclear Information System (INIS)

    Wang Dong; Zhang Quanhu; He Bin; Wang Hua; Yang Daojun

    2009-01-01

    Nuclear material accounting is a key measure for nuclear safeguards. Software for MUF evaluation in item nuclear material accounting was developed in this work. It is composed of several modules, including an input module, a data processing module, a data inquiry module, a data printing module, a system settings module, etc. It can be used to check the variance of the measurements and to estimate the confidence interval for the MUF value. To ensure the security of the data, multi-user management is applied in the software. (authors)
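
    For orientation, MUF (material unaccounted for) over a balance period is the book inventory minus the physical ending inventory, and its confidence interval follows from the propagated measurement variance. The sketch below assumes independent measurement errors and a 95% normal interval; this is the textbook form, not necessarily the paper's exact procedure.

      from math import sqrt

      def muf_with_interval(begin_inv, receipts, shipments, end_inv, variances):
          """MUF = (beginning inventory + receipts) - (shipments + ending inventory).

          variances: measurement variances of each term, assumed independent,
          so var(MUF) is simply their sum.
          """
          muf = (begin_inv + receipts) - (shipments + end_inv)
          sigma = sqrt(sum(variances))
          half_width = 1.96 * sigma  # 95% two-sided normal interval
          return muf, (muf - half_width, muf + half_width)

      # muf, ci = muf_with_interval(102.0, 50.0, 49.5, 101.8, [0.04, 0.02, 0.02, 0.04])
      # -> muf = 0.7 kg, ci approx. (0.02, 1.38)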

  11. A59 waste repackaging database (AWARD)

    International Nuclear Information System (INIS)

    Keel, A.

    1993-06-01

    This document describes the software modules to be implemented to provide the user interface for the A59 Waste Repackaging Database (AWARD). The modules will consist of a front end menu with options giving access to the various screen forms and printed reports. (Author)

  12. Experimental analysis of specification language diversity impact on NPP software diversity

    International Nuclear Information System (INIS)

    Yoo, Chang Sik

    1999-02-01

    In order to increase computer system reliability, software fault tolerance methods have been adopted in some safety-critical systems, including NPPs. Prevention of software common mode failure is a crucial problem in software fault tolerance, but an effective method for this problem has not yet been found. In our research, to find an effective method for preventing software common mode failure, the impact of specification language diversity on NPP software diversity was examined experimentally. Three specification languages were used to compose three requirements specifications, and programmers produced twelve product codes from the specifications. From analysis of the product codes, using fault diversity criteria, we concluded that the diverse specification language method would enhance program diversity through diversification of requirements specification imperfections.

  13. Software safety analysis on the model specified by NuSCR and SMV input language at requirements phase of software development life cycle using SMV

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2005-01-01

    The safety-critical software process is composed of a development process, a verification and validation (V and V) process and a safety analysis process. The safety analysis process has often been treated as an additional process and is not found in a conventional software process. But software safety analysis (SSA) is required if software is applied to a safety system, and the SSA shall be performed independently for the safety software throughout the software development life cycle (SDLC). Of all the phases in software development, requirements engineering is generally considered to play the most critical role in determining the overall software quality. NASA data demonstrate that nearly 75% of failures found in operational software were caused by errors in the requirements. The verification process in the requirements phase checks the correctness of the software requirements specification, and the safety analysis process analyzes the safety-related properties in detail. In this paper, a method for safety analysis at the requirements phase of the software development life cycle using the symbolic model verifier (SMV) is proposed. Hazards are discovered by hazard analysis, and in order to use SMV for the safety analysis, the safety-related properties are expressed in computation tree logic (CTL)
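
    As an illustration of the kind of CTL safety property meant here (the signal names are invented, not taken from the paper), a requirement such as "whenever pressure is high, the trip signal is eventually actuated" is written

      AG (pressure_high -> AF trip_actuated)

    read as: on all paths, globally (AG), if pressure_high holds then on every path the trip is eventually (AF) actuated. SMV exhaustively checks such formulas against the state machine derived from the NuSCR specification and returns a counterexample trace when a formula fails.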

  14. An image database structure for pediatric radiology

    International Nuclear Information System (INIS)

    Mankovich, N.J.

    1987-01-01

    The operation of the Clinical Radiology Imaging System (CRIS) in Pediatric Radiology at UCLA relies on the orderly flow of text and image data among the three basic subsystems: acquisition, storage, and display. CRIS provides the radiologist, clinician, and technician with data at clinical image workstations by maintaining a comprehensive database. CRIS is made up of subsystems, each composed of one or more programs or tasks which operate in parallel on a VAX-11/750 minicomputer in Pediatric Radiology. Tasks are coordinated through dynamic data structures that include system event flags and disk-resident queues. This report outlines: (1) the CRIS data model, (2) the flow of information among CRIS components, (3) the underlying database structures which support the acquisition, display, and storage of text and image information, and (4) current database statistics

  15. The art and ‘science’ of opera: composing, staging & designing new forms of interactive theatrical performance

    OpenAIRE

    Chamberlain, Alan; Kallionpää, Maria; Benford, Steve

    2017-01-01

    New technologies, such as Virtual Reality (VR), Robotics and Artificial Intelligence (AI) are steadily having an impact upon the world of opera. The evolving use of performance-based software such as Ableton Live and Max/MSP has created new and exciting compositional techniques that intertwine theatrical and musical performance. This poster presents some initial work on the development of an opera using such technologies that is being composed by Kallionpää and Chamberlain.

  16. Human Ageing Genomic Resources: new and updated databases

    Science.gov (United States)

    Tacutu, Robi; Thornton, Daniel; Johnson, Emily; Budovsky, Arie; Barardo, Diogo; Craig, Thomas; Diana, Eugene; Lehmann, Gilad; Toren, Dmitri; Wang, Jingwei; Fraifeld, Vadim E

    2018-01-01

    Abstract In spite of a growing body of research and data, human ageing remains a poorly understood process. Over 10 years ago we developed the Human Ageing Genomic Resources (HAGR), a collection of databases and tools for studying the biology and genetics of ageing. Here, we present HAGR’s main functionalities, highlighting new additions and improvements. HAGR consists of six core databases: (i) the GenAge database of ageing-related genes, in turn composed of a dataset of >300 human ageing-related genes and a dataset with >2000 genes associated with ageing or longevity in model organisms; (ii) the AnAge database of animal ageing and longevity, featuring >4000 species; (iii) the GenDR database with >200 genes associated with the life-extending effects of dietary restriction; (iv) the LongevityMap database of human genetic association studies of longevity with >500 entries; (v) the DrugAge database with >400 ageing or longevity-associated drugs or compounds; (vi) the CellAge database with >200 genes associated with cell senescence. All our databases are manually curated by experts and regularly updated to ensure high-quality data. Cross-links across our databases and to external resources help researchers locate and integrate relevant information. HAGR is freely available online (http://genomics.senescence.info/). PMID:29121237

  17. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  18. Database architectures for Space Telescope Science Institute

    Science.gov (United States)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
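
    The three-process pattern described above (application client -> STDB/NET-style intermediate server -> vendor DBMS) is essentially an adapter in front of the database; a compact sketch of that separation follows, with all class and method names invented for illustration and sqlite3 standing in for the vendor DBMS.

      import sqlite3

      class GenericQuery:
          """What the application client sends: a vendor-neutral request."""
          def __init__(self, table: str, columns: list, where: str = ""):
              self.table, self.columns, self.where = table, columns, where

      class StdbNetServer:
          """Intermediate server: translates generic requests into the dialect of
          the particular DBMS server and relays the results back to the client."""
          def __init__(self, conn):
              self.conn = conn  # connection to the vendor DBMS (sqlite3 stand-in)

          def execute(self, q: GenericQuery):
              sql = f"SELECT {', '.join(q.columns)} FROM {q.table}"
              if q.where:
                  sql += f" WHERE {q.where}"
              return self.conn.execute(sql).fetchall()  # deadlock restart would wrap this call

      # server = StdbNetServer(sqlite3.connect("archive.db"))
      # rows = server.execute(GenericQuery("exposures", ["target", "date"], "instrument='WFPC2'"))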

  19. Research and Development on Food Nutrition Statistical Analysis Software System

    OpenAIRE

    Du Li; Ke Yun

    2013-01-01

    Designing and developing food nutrition component statistical analysis software can automate nutrition calculation, improve the working efficiency of nutrition professionals and achieve the informatization of nutrition education and outreach. In the software development process, software engineering methods and database technology are used to calculate the daily human nutritional intake, and an intelligent system is used to evaluate the user's hea...

  20. The ATLAS conditions database architecture for the Muon spectrometer

    International Nuclear Information System (INIS)

    Verducci, Monica

    2010-01-01

    The Muon System, facing the challenging requirements of conditions data storage, has extensively started to use the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flag storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described; most emphasis is given to the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.

  1. The ATLAS conditions database architecture for the Muon spectrometer

    Science.gov (United States)

    Verducci, Monica; ATLAS Muon Collaboration

    2010-04-01

    The Muon System, facing the challenging requirements of conditions data storage, has extensively started to use the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flag storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described; most emphasis is given to the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
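
    The interval-of-validity idea is simple enough to sketch: each object in a folder carries a [start, end) validity range, and a lookup returns the object valid at a given time. The sketch below is a toy illustration of that model, not the COOL API.

      import bisect

      class IovFolder:
          """Toy interval-of-validity store: payloads valid over [start, end)."""
          def __init__(self):
              self._starts, self._entries = [], []  # kept sorted by start time

          def store(self, start, end, payload):
              i = bisect.bisect(self._starts, start)
              self._starts.insert(i, start)
              self._entries.insert(i, (start, end, payload))

          def retrieve(self, when):
              """Return the payload whose validity interval contains `when`."""
              i = bisect.bisect(self._starts, when) - 1
              if i >= 0:
                  start, end, payload = self._entries[i]
                  if start <= when < end:
                      return payload
              return None

      # folder = IovFolder()
      # folder.store(0, 1000, {"hv": 3080})  # calibration valid for one run interval
      # folder.retrieve(512)                 # -> {"hv": 3080}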

  2. Hanford Site technical baseline database. Revision 1

    International Nuclear Information System (INIS)

    Porter, P.E.

    1995-01-01

    This report lists the Hanford specific files (Table 1) that make up the Hanford Site Technical Baseline Database. Table 2 includes the delta files that delineate the differences between this revision and revision 0 of the Hanford Site Technical Baseline Database. This information is being managed and maintained on the Hanford RDD-100 System, which uses the capabilities of RDD-100, a systems engineering software system of Ascent Logic Corporation (ALC). This revision of the Hanford Site Technical Baseline Database uses RDD-100 version 3.0.2.2 (see Table 3). Directories reflect those controlled by the Hanford RDD-100 System Administrator. Table 4 provides information regarding the platform. A cassette tape containing the Hanford Site Technical Baseline Database is available

  3. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave

    Directory of Open Access Journals (Sweden)

    Ikaro Silva

    2014-09-01

    Full Text Available The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox allows direct loading into MATLAB/Octave's workspace of over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.
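
    The toolbox itself is MATLAB/Octave software; to keep this document's examples in one language, the sketch below shows the equivalent read-signal-plus-annotations access pattern using the companion wfdb Python package, on the assumption that its rdrecord/rdann interface matches current documentation, with an MIT-BIH record name as the example.

      import wfdb  # pip install wfdb; Python counterpart to the MATLAB/Octave toolbox

      # Fetch record 100 of the MIT-BIH Arrhythmia Database directly from PhysioNet
      record = wfdb.rdrecord("100", pn_dir="mitdb")          # signals + header
      annotation = wfdb.rdann("100", "atr", pn_dir="mitdb")  # clinical beat annotations

      print(record.fs, record.sig_name)  # sampling frequency and channel names
      print(annotation.sample[:5])       # sample indices of the first beats
      print(annotation.symbol[:5])       # corresponding beat type codes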

  4. Testing methodology of embedded software in digital plant protection system

    International Nuclear Information System (INIS)

    Seong, Ah Young; Choi, Bong Joo; Lee, Na Young; Hwang, Il Soon

    2001-01-01

    It is necessary to assure the reliability of software in order to digitalize the RPS (Reactor Protection System). Since the RPS can cause fatal damage in accident cases, it is classified as Safety Class 1E. Therefore we propose an effective testing methodology to assure the reliability of the embedded software in the DPPS (Digital Plant Protection System). To test the embedded software in the DPPS effectively, our methodology consists of two steps. The first is a re-engineering step that extracts classes from the structural source program, and the second is a level-of-testing step composed of unit testing, integration testing and system testing. At each testing step we test the embedded software with selected test cases after the test item identification step. Using this testing methodology, we can test the embedded software effectively while reducing cost and time.

  5. Implementation of a data management software system for SSME test history data

    Science.gov (United States)

    Abernethy, Kenneth

    1986-01-01

    The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM is discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could use INSIGHT2 as well and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.

  6. The HITRAN 2008 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Gordon, I.E.; Barbe, A.; Benner, D.Chris; Bernath, P.F.; Birk, M.; Boudon, V.; Brown, L.R.; Campargue, A.; Champion, J.-P.; Chance, K.; Coudert, L.H.; Dana, V.; Devi, V.M.; Fally, S.; Flaud, J.-M.

    2009-01-01

    This paper describes the status of the 2008 edition of the HITRAN molecular spectroscopic database. The new edition is the first official public release since the 2004 edition, although a number of crucial updates had been made available online since 2004. The HITRAN compilation consists of several components that serve as input for radiative-transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e. spectra in which the individual lines are not resolved; individual line parameters and absorption cross-sections for bands in the ultraviolet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 42 molecules including many of their isotopologues.
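
    The "individual line parameters" are distributed as fixed-width records; the sketch below pulls a few leading fields from one 160-character line of the 2004/2008 format. The field widths shown (molecule 2, isotopologue 1, wavenumber 12, intensity 10, Einstein A 10) follow the published format description, but treat the layout and the file name as assumptions to verify against the HITRAN documentation before relying on them.

      def parse_hitran_line(line: str) -> dict:
          """Extract the leading fields of a HITRAN 2004-format 160-character record."""
          return {
              "molecule_id":  int(line[0:2]),     # HITRAN molecule number
              "isotopologue": int(line[2:3]),     # isotopologue index
              "wavenumber":   float(line[3:15]),  # transition wavenumber, cm^-1
              "intensity":    float(line[15:25]), # line intensity at 296 K
              "einstein_a":   float(line[25:35]), # Einstein A coefficient, s^-1
          }

      # with open("01_hit08.par") as f:  # hypothetical HITRAN file name
      #     first_line = parse_hitran_line(f.readline())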

  7. A database and API for variation, dense genotyping and resequencing data

    Directory of Open Access Journals (Sweden)

    Flicek Paul

    2010-05-01

    Full Text Available Abstract Background Advances in sequencing and genotyping technologies are leading to the widespread availability of multi-species variation data, dense genotype data and large-scale resequencing projects. The 1000 Genomes Project and similar efforts in other species are challenging the methods previously used for storage and manipulation of such data necessitating the redesign of existing genome-wide bioinformatics resources. Results Ensembl has created a database and software library to support data storage, analysis and access to the existing and emerging variation data from large mammalian and vertebrate genomes. These tools scale to thousands of individual genome sequences and are integrated into the Ensembl infrastructure for genome annotation and visualisation. The database and software system is easily expanded to integrate both public and non-public data sources in the context of an Ensembl software installation and is already being used outside of the Ensembl project in a number of database and application environments. Conclusions Ensembl's powerful, flexible and open source infrastructure for the management of variation, genotyping and resequencing data is freely available at http://www.ensembl.org.

  8. Information Management Tools for Classrooms: Exploring Database Management Systems. Technical Report No. 28.

    Science.gov (United States)

    Freeman, Carla; And Others

    In order to understand how database software or online databases functioned in the overall curricula, the use of database management systems (DBMSs) was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…

  9. DATABASES DEVELOPED IN INDIA FOR BIOLOGICAL SCIENCES

    Directory of Open Access Journals (Sweden)

    Gitanjali Yadav

    2017-09-01

    Full Text Available The complexity of biological systems requires use of a variety of experimental methods with ever increasing sophistication to probe various cellular processes at molecular and atomic resolution. The availability of technologies for determining nucleic acid sequences of genes and atomic resolution structures of biomolecules prompted development of major biological databases like GenBank and PDB almost four decades ago. India was one of the few countries to realize early the utility of such databases for progress in modern biology/biotechnology. The Department of Biotechnology (DBT), India, established the Biotechnology Information System (BTIS) network in the late eighties. Starting with the genome sequencing revolution at the turn of the century, application of high-throughput sequencing technologies in biology and medicine for analysis of genomes, transcriptomes, epigenomes and microbiomes has generated massive volumes of sequence data. The BTIS network has not only provided state-of-the-art computational infrastructure to research institutes and universities for utilizing various biological databases developed abroad in their research, it has also actively promoted research and development (R&D) projects in bioinformatics to develop a variety of biological databases in diverse areas. It is encouraging to note that a large number of biological databases and data-driven software tools developed in India have been published in leading peer-reviewed international journals like Nucleic Acids Research, Bioinformatics, Database, BMC, PLoS and NPG series publications. Some of these databases are not only unique, they are also highly accessed, as reflected in the number of citations. Apart from databases developed by individual research groups, BTIS has initiated consortium projects to develop major India-centric databases on Mycobacterium tuberculosis, Rice and Mango, which can potentially have practical applications in health and agriculture. Many of these biological

  10. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    Science.gov (United States)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using Extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow the upgrade, addition or changing of individual items without affecting the entire ground system. Using XML should also allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS), which is often limiting
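
    The appeal of the approach is that each laboratory tool only needs an XML parser, not knowledge of any other element's formats. A minimal sketch follows, with an invented element and attribute vocabulary (the real JWST schema is not given in this abstract).

        import xml.etree.ElementTree as ET

        # Hypothetical fragment of a command/telemetry definition database.
        doc = """
        <database version="1.0">
          <telemetry mnemonic="TEMP_A" type="float" units="K" subsystem="ISIM"/>
          <telemetry mnemonic="VOLT_B" type="float" units="V" subsystem="FGS"/>
          <command mnemonic="HTR_ON" subsystem="ISIM"/>
        </database>
        """

        root = ET.fromstring(doc)
        # One lab's export: only the telemetry items for its own subsystem.
        for item in root.iter("telemetry"):
            if item.get("subsystem") == "ISIM":
                print(item.get("mnemonic"), item.get("units"))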

  11. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    Directory of Open Access Journals (Sweden)

    Tomi Kauppi

    2013-01-01

    Full Text Available We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions.
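
    One simple way to combine manual segmentations from multiple experts, of the kind such a framework must support, is a pixel-wise vote over binary lesion masks; the sketch below is illustrative only and does not reproduce the paper's actual fusion method.

        import numpy as np

        def combine_expert_masks(masks, min_votes):
            """Mark a pixel as ground truth if >= min_votes experts marked it."""
            votes = np.sum(np.stack(masks).astype(int), axis=0)
            return votes >= min_votes

        # Three hypothetical 4x4 expert annotations:
        rng = np.random.default_rng(0)
        experts = [rng.integers(0, 2, (4, 4), dtype=bool) for _ in range(3)]
        ground_truth = combine_expert_masks(experts, min_votes=2)
        print(ground_truth.astype(int))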

  12. Mining dynamic noteworthy functions in software execution sequences.

    Science.gov (United States)

    Zhang, Bing; Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong

    2017-01-01

    As the quality of crucial entities can directly affect that of software, their identification and protection become an important premise for effective software development, management, maintenance and testing, which thus contribute to improving the software quality and its attack-defending ability. Most existing analyses and evaluations of important entities, such as code-based static structure analysis, disregard the actual running of the software. In this paper, from the perspective of the software execution process, we proposed an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, according to software decompiling and tracking of stack changes, execution traces composed of a series of function addresses were acquired. These traces were modeled as execution sequences and then simplified to obtain simplified sequences (SFS), followed by the extraction of patterns from the SFS with a pattern extraction (PE) algorithm. After that, the evaluating indicators inner-importance and inter-importance were designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, the functions were sorted by their noteworthiness. The experimental results were compared and contrasted with those of two traditional complex-network-based node mining methods, namely PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely.
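
    The exact inner-/inter-importance indicators are defined in the paper; the sketch below substitutes a much cruder score (within-trace frequency weighted by the share of traces containing the function) purely to illustrate the trace-to-sequence-to-ranking pipeline.

        from collections import Counter

        def rank_functions(traces):
            """Rank function names by a crude noteworthiness score."""
            inner = Counter()   # total occurrences across all traces
            inter = Counter()   # number of traces containing each function
            for trace in traces:
                inner.update(trace)
                inter.update(set(trace))
            score = {f: inner[f] * inter[f] / len(traces) for f in inner}
            return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

        traces = [["main", "init", "parse", "parse", "write"],
                  ["main", "init", "parse", "log"],
                  ["main", "log", "write"]]
        for func, s in rank_functions(traces):
            print(f"{func:6s} {s:.2f}")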

  13. Reliability databases: State-of-the-art and perspectives

    DEFF Research Database (Denmark)

    Akhmedjanov, Farit

    2001-01-01

    The report gives a history of development and an overview of the existing reliability databases. This overview also describes some sources of reliability and failure information other than computer databases, e.g. reliability handbooks, but the main attention is paid to standard models… and software packages containing the data mentioned. The standards corresponding to the collection and exchange of reliability data are reviewed too. Finally, promising directions in the development of such data sources are shown…

  14. Formulation of price strategies in the software sector: outsourcing of development and maintenance software product case

    Directory of Open Access Journals (Sweden)

    Antonio Cezar Bornia

    2008-07-01

    Full Text Available The main goal of this article is to discuss the formulation of price strategies in the software sector. To reach the proposed goal, price strategy models are introduced, along with a procedure for the formulation of price strategies composed of five stages: external and internal analyses, consolidation, positioning, price strategy formalization and market attendance. As for the methodology, the study is classified as qualitative, exploratory, descriptive, documental, field and case study, according to the approach of Vergara (1998). In the case study, the model for the formulation of price strategies is applied to a company in the software sector, analyzing the outsourcing of a development and maintenance software product. As the main contributions, the application of the pricing procedure is highlighted, which emphasizes strategic price logic and price strategy formulation based on the analysis of five main factors: quality, comparison with the competition, company life cycle, product life cycle and characteristics of the target segment. Based on the analyzed factors, a possible strategy to be adopted, considering the characteristics of the product and the company, is a price strategy of superior value. Key-words: Pricing Strategies. Price Formulation. Software Enterprises.

  15. Image storage, cataloguing and retrieval using a personal computer database software application

    International Nuclear Information System (INIS)

    Lewis, G.; Howman-Giles, R.

    1999-01-01

    Full text: Interesting images and cases are collected and collated by most nuclear medicine practitioners throughout the world. Changing imaging technology has altered the way in which images may be presented and reported, with less reliance on 'hard copy' for both reporting and archiving purposes. Digital image generation and storage is rapidly replacing film in both radiological and nuclear medicine practice. An interesting-case filing system based on a personal computer database is described and demonstrated. The digital image storage format allows instant access to both case information (e.g. history and examination, scan report or teaching point) and the relevant images. The database design allows rapid selection of cases and images appropriate to a particular diagnosis, scan type, age or other search criteria. Correlative X-ray, CT, MRI and ultrasound images can also be stored and accessed. The application is in use at The New Children's Hospital as an aid to postgraduate medical education, with new cases being regularly added to the database

  16. Indexing Bibliographic Database Content Using MariaDB and Sphinx Search Server

    Directory of Open Access Journals (Sweden)

    Arie Nugraha

    2014-07-01

    Full Text Available Fast retrieval of digital content has become mandatory for library and archive information systems. Many software applications have emerged to handle the indexing of digital content, from low-level ones such as Apache Lucene to more RESTful and web-services-ready ones such as Apache Solr and ElasticSearch. Solr’s popularity among library software developers makes it the “de-facto” standard software for indexing digital content. For content (full-text content or bibliographic descriptions) already stored inside a relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx Search Server (Sphinx) is a suitable alternative. This article covers an introduction to using Sphinx with MariaDB databases to index database content, as well as some examples of Sphinx API usage.
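
    Since Sphinx speaks the MySQL wire protocol (SphinxQL, by default on port 9306), a client library such as pymysql can query an index directly; the index and column names below are hypothetical and would be defined in one's own sphinx.conf.

        import pymysql

        conn = pymysql.connect(host="127.0.0.1", port=9306)  # Sphinx, not MariaDB
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT id, WEIGHT() AS relevance "
                    "FROM biblio_index WHERE MATCH(%s) LIMIT 10",
                    ("open source library systems",),
                )
                for record_id, relevance in cur.fetchall():
                    print(record_id, relevance)
        finally:
            conn.close()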

  17. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)
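
    For reference, querying Impala from Python is a short exercise with the impyla client; the host, port and table names below are placeholders rather than the actual CERN setup described in the paper.

        from impala.dbapi import connect

        conn = connect(host="impala-host.example", port=21050)
        cur = conn.cursor()
        cur.execute(
            "SELECT variable_name, avg(value) AS mean_value "
            "FROM accelerator_log "
            "GROUP BY variable_name ORDER BY mean_value DESC LIMIT 20"
        )
        for name, mean_value in cur.fetchall():
            print(name, mean_value)
        cur.close()
        conn.close()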

  18. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    Science.gov (United States)

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

    BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify imports of user-provided annotation data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain-specific databases for metabolic engineering.

  19. JT-60 database system, 1

    International Nuclear Information System (INIS)

    Kurihara, Kenichi; Kimura, Toyoaki; Itoh, Yasuhiro.

    1987-07-01

    A sufficient software environment makes it possible to analyse the discharge result data effectively. JT-60 discharge result data, collected by the supervisor, are transferred to the general purpose computer through the new linkage channel and converted to a 'database'. The datafile in the database was designed to be surrounded by various interfaces. This structure preserves the reliability of the datafile and does not require the user to know the datafile structure. In addition, a support system for graphic processing was developed so that the user may easily obtain figures with some calculations. This paper reports on the basic concept and system design. (author)

  20. Non-Radial Oscillation Modes of Superfluid Neutron Stars Modeled with CompOSE

    Directory of Open Access Journals (Sweden)

    Prashanth Jaikumar

    2018-03-01

    Full Text Available We compute the principal non-radial oscillation mode frequencies of Neutron Stars described with a Skyrme-like Equation of State (EoS, taking into account the possibility of neutron and proton superfluidity. Using the CompOSE database and interpolation routines to obtain the needed thermodynamic quantities, we solve the fluid oscillation equations numerically in the background of a fully relativistic star, and identify imprints of the superfluid state. Though these modes cannot be observed with current technology, increased sensitivity of future Gravitational-Wave Observatories could allow us to observe these oscillations and potentially constrain or refine models of dense matter relevant to the interior of neutron stars.

  1. Reldata - a tool for reliability database management

    International Nuclear Information System (INIS)

    Vinod, Gopika; Saraf, R.K.; Babar, A.K.; Sanyasi Rao, V.V.S.; Tharani, Rajiv

    2000-01-01

    Component failure, repair and maintenance data are a very important element of any Probabilistic Safety Assessment study. The credibility of the results of such a study is enhanced if the data used are generated from the operating experience of similar power plants. Towards this objective, a computerised database has been designed, with fields such as date and time of failure, component name, failure mode, failure cause, means of failure detection, reactor operating power status, repair times, down time, etc. This leads to the evaluation of plant-specific failure rates and on-demand failure probability/unavailability for all components. Systematic data updating can provide real-time component reliability parameter statistics and trend analysis, and this helps in planning maintenance strategies. A software package, RELDATA, has been developed which incorporates the database management and data analysis methods. This report describes the software features and underlying methodology in detail. (author)
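
    The point estimates such a database supports are simple: with n failures observed over a cumulative in-service time T, the failure rate is estimated as lambda = n/T, and unavailability as downtime over total time. A toy sketch with invented numbers:

        failure_records = [
            {"component": "pump-A", "downtime_h": 12.0},
            {"component": "pump-A", "downtime_h": 30.0},
            {"component": "pump-A", "downtime_h": 6.0},
        ]
        service_hours = 26_280.0   # assumed cumulative in-service time (3 years)

        n = len(failure_records)
        failure_rate = n / service_hours                       # failures per hour
        total_down = sum(r["downtime_h"] for r in failure_records)
        unavailability = total_down / (service_hours + total_down)

        print(f"lambda = {failure_rate:.2e} /h, unavailability = {unavailability:.4f}")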

  2. Software design specification and analysis (NuFDS) approach for safety-critical software based on a programmable logic controller (PLC)

    International Nuclear Information System (INIS)

    Koo, Seo Ryong; Seong, Poong Hyun; Jung, Jin Yong; Choi, Seong Soo

    2004-01-01

    This paper introduces a software design specification and analysis technique for safety-critical systems based on a Programmable Logic Controller (PLC). Among the software development phases, the design phase plays an important role in connecting the requirements phase and the implementation phase, as a process of translating problem requirements into software structures. In this work, the Nuclear FBD-style Design Specification and analysis (NuFDS) approach was proposed. The NuFDS approach for nuclear Instrumentation and Control (I and C) software is suggested in a straightforward manner. It consists of four major specifications: database, software architecture, system behavior, and PLC hardware configuration. Additionally, correctness, completeness, consistency, and traceability check techniques are suggested for formal design analysis in the NuFDS approach. For tool support, we are also developing the NuSDS tool, based on the NuFDS approach, especially for software design specification in the nuclear field.

  3. Nuclear Reaction and Structure Databases of the National Nuclear Data Center

    International Nuclear Information System (INIS)

    Pritychenko, B.; Arcilla, R.; Herman, M. W.; Oblozinsky, P.; Rochman, D.; Sonzogni, A. A.; Tuli, J. K.; Winchell, D. F.

    2006-01-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic research and applied nuclear technologies. In 2004, the NNDC migrated all databases into modern relational database software, installed a new generation of Linux servers and developed a new Java-based Web service. This nuclear database development means much faster, more flexible and more convenient service to all users in the United States. These nuclear reaction and structure database developments as well as the related Web services are briefly described

  4. EFOM 12C software: general overview. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Jadot, P; Fuchsova, J; Vankelecom, E; Van der Voort, E; Thonet, C

    1981-01-01

    A logic manual defining the general philosophy of the EC-12C software is presented. The guidelines used to develop the Energy Data Base and the programs of the energy flow models are: within the frame of some basic conventions, the data base structure and software should be as independent as possible from the energy system representation; utilization of the models should be user-friendly; as data has to be collected and manipulated by various national expert teams, extensive data consistency checks and appropriate error messages should be included in the software to support the data validation process; the various energy flow models should be integrated; the outputs of the programs should be user-controlled; and the scope of the study is under user's control. As a result of those guidelines, an integrated set of software composed of DAMOCLES (Data Base Management System); SIML (simulation program suitable for data analysis and description study); and ORESTE EDISON (LP matrix generator and report writer) was developed. These are described.

  5. The development of technical database of advanced spent fuel management process

    Energy Technology Data Exchange (ETDEWEB)

    Ro, Seung Gy; Byeon, Kee Hoh; Song, Dae Yong; Park, Seong Won; Shin, Young Jun

    1999-03-01

    The purpose of this study is to develop a technical database system to provide useful information to researchers who study the back end of the nuclear fuel cycle. A technical database of the advanced spent fuel management process was developed as a prototype system in 1997. In 1998, this database system was improved into a multi-user system and appended with a special database composed of thermochemical formation data and reaction data. In this report, the detailed specification of the system design is described and the operating methods are illustrated as a user's manual. This report is also very useful as a reference when expanding the current system or interfacing it with other systems. (Author). 10 refs., 18 tabs., 46 fig.

  6. The development of technical database of advanced spent fuel management process

    International Nuclear Information System (INIS)

    Ro, Seung Gy; Byeon, Kee Hoh; Song, Dae Yong; Park, Seong Won; Shin, Young Jun

    1999-03-01

    The purpose of this study is to develop a technical database system to provide useful information to researchers who study the back end of the nuclear fuel cycle. A technical database of the advanced spent fuel management process was developed as a prototype system in 1997. In 1998, this database system was improved into a multi-user system and appended with a special database composed of thermochemical formation data and reaction data. In this report, the detailed specification of the system design is described and the operating methods are illustrated as a user's manual. This report is also very useful as a reference when expanding the current system or interfacing it with other systems. (Author). 10 refs., 18 tabs., 46 fig

  7. Prototyping visual interface for maintenance and supply databases

    OpenAIRE

    Fore, Henry Ray

    1989-01-01

    Approved for public release; distribution is unlimited. This research examined the feasibility of providing a visual interface to standard Army Management Information Systems at the unit level. The potential of improving the Human-Machine Interface of unit level maintenance and supply software, such as ULLS (Unit Level Logistics System), is very attractive. A prototype was implemented in GLAD (Graphics Language for Database). GLAD is a graphics object-oriented environment for databases t...

  8. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  9. Conceptual data modeling on the KRR-1 and 2 decommissioning database

    International Nuclear Information System (INIS)

    Park, Hee Seoung; Park, Seung Kook; Lee, Kune Woo; Park, Jin Ho

    2002-01-01

    A study of conceptual data modeling to realize the decommissioning database for KRR-1 and 2 was carried out. In this study, the current state of decommissioning databases abroad was investigated to serve as a reference for the database. The scope of the construction of the decommissioning database was set up based on user requirements, a theory of database construction was established, and a classification scheme for the decommissioning information was drawn up. The facility information, work information, radioactive waste information, and radiological information handled by the decommissioning database were extracted through interviews with an expert group, and the system configuration of the decommissioning database was decided upon. A 17-bit code was produced considering the construction, scheme and information. The results of the conceptual data modeling and the classification scheme will be used as basic data to create a prototype design of the decommissioning database

  10. The ATLAS conditions database architecture for the Muon spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Verducci, Monica, E-mail: monica.verducci@cern.c [University of Wuerzburg Am Hubland, 97074, Wuerzburg (Germany)

    2010-04-01

    The Muon System, facing the challenging requirements of conditions data storage, has started to use extensively the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flag storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve objects associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
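
    A minimal illustration of the interval-of-validity idea follows (a generic sketch, not the COOL API): payloads are stored in a folder with a [since, until) range and retrieved by time.

        import bisect

        class IovFolder:
            """Toy folder mapping [since, until) intervals to payloads."""

            def __init__(self):
                self._since = []      # sorted interval start times
                self._entries = []    # (since, until, payload), same order

            def store(self, since, until, payload):
                i = bisect.bisect(self._since, since)
                self._since.insert(i, since)
                self._entries.insert(i, (since, until, payload))

            def find(self, t):
                """Return the payload valid at time t, or None."""
                i = bisect.bisect_right(self._since, t) - 1
                if i >= 0:
                    since, until, payload = self._entries[i]
                    if since <= t < until:
                        return payload
                return None

        folder = IovFolder()
        folder.store(0, 1000, {"hv": 3080.0})
        folder.store(1000, 2000, {"hv": 3100.0})
        print(folder.find(1500))   # -> {'hv': 3100.0}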

  11. Software development for managing nuclear material database

    International Nuclear Information System (INIS)

    Tondin, Julio Benedito Marin

    2011-01-01

    In nuclear facilities, nuclear material control is one of the most important activities. The Brazilian National Commission of Nuclear Energy (CNEN) and the International Atomic Energy Agency (IAEA) regard the data provided as a major safety factor in their routine inspections. Having a nuclear material control system that allows the amount and location of the various items to be inspected at any time is a key factor today. The objective of this work was to enhance the existing system using a friendlier development platform, the Visual Basic programming language (Microsoft Corporation), to facilitate the tasks of the IEA-R1 reactor operation team by providing data that enable better and prompter control of the IEA-R1 nuclear material. These data have allowed the development of papers presented at national and international conferences and the development of master's dissertations and doctorate theses. The software that is the object of this study was designed to meet the requirements of the CNEN and IAEA safeguard rules, but its functions may be expanded in accordance with future needs. The program developed can be used in other reactors to be built in the country, since it is very practical and allows effective control of the nuclear material in the facilities. (author)

  12. General software design for multisensor data fusion

    Science.gov (United States)

    Zhang, Junliang; Zhao, Yuming

    1999-03-01

    In this paper a general method of software design for multisensor data fusion is discussed in detail, which adopts object-oriented technology under the UNIX operating system. The software for multisensor data fusion is divided into six functional modules: data collection, database management, GIS, target display and alarming, data simulation, etc. Furthermore, the primary function, the components and some realization methods of each module are given. The interfaces among these functional modules are discussed. The data exchange among the functional modules is performed by interprocess communication (IPC), including message queues, semaphores and shared memory. Thus, each functional module executes independently, which reduces the dependence among functional modules and helps software programming and testing. The software for multisensor data fusion is designed as a hierarchical structure using the inheritance character of classes. Each functional module is abstracted and encapsulated through a class structure, which avoids software redundancy and enhances readability.
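
    The decoupling described, modules exchanging data only through IPC, can be illustrated with a message queue between two processes; the sketch below uses Python's multiprocessing as a stand-in for the UNIX message queues, semaphores and shared memory named in the abstract.

        from multiprocessing import Process, Queue

        def data_collection(q):
            """Stand-in for a sensor-reading module."""
            for reading in [{"sensor": 1, "range_m": 120.5},
                            {"sensor": 2, "range_m": 119.8}]:
                q.put(reading)
            q.put(None)   # end-of-stream marker

        def data_fusion(q):
            """Stand-in for the fusion module; knows nothing about the producer."""
            readings = []
            while True:
                msg = q.get()
                if msg is None:
                    break
                readings.append(msg["range_m"])
            print("fused range estimate:", sum(readings) / len(readings))

        if __name__ == "__main__":
            q = Queue()
            producer = Process(target=data_collection, args=(q,))
            consumer = Process(target=data_fusion, args=(q,))
            producer.start(); consumer.start()
            producer.join(); consumer.join()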

  13. The software environment of RODOS

    International Nuclear Information System (INIS)

    Schuele, O.; Rafat, M.; Kossykh, V.

    1996-01-01

    The Software Environment of RODOS provides tools for processing and managing a large variety of different types of information, including those which are categorized in terms of meteorology, radiology, economy, emergency actions and countermeasures, rules, preferences, facts, maps, statistics, catalogues, models and methods. The main tasks of the Operating Subsystem OSY, which is based on the Client-Server Model, are the control of system operation, data management, and the exchange of information among various modules as well as the interaction with users in distributed computer systems. The paper describes the software environment of RODOS, in particular, the individual modules of its Operating Subsystem OSY, its distributed database, the geographical information system RoGIS, the on-line connections to radiological and meteorological networks and the software environment for the integration of external programs into the RODOS system

  14. The software environment of RODOS

    International Nuclear Information System (INIS)

    Schuele, O.; Rafat, M.

    1998-01-01

    The Software Environment of RODOS provides tools for processing and managing a large variety of different types of information, including those which are categorised in terms of meteorology, radiology, economy, emergency actions and countermeasures, rules, preferences, facts, maps, statistics, catalogues, models and methods. The main tasks of the Operating Subsystem OSY, which is based on the Client-Server Model, are the control of system operation, data management, and the exchange of information among various modules as well as the interaction with users in distributed computer systems. The paper describes the software environment of RODOS, in particular, the individual modules of its Operating Subsystem OSY, its distributed database, the geographical information system RoGIS, the on-line connections to radiological and meteorological networks and the software environment for the integration of external programs into the RODOS system. (orig.)

  15. The software environment of RODOS

    Energy Technology Data Exchange (ETDEWEB)

    Schuele, O; Rafat, M [Forschungszentrum Karlsruhe, Institut fuer Neutronenphysik und Reaktortechnik, Karlsruhe (Germany)]; Kossykh, V [Scientific Production Association 'TYPHOON', Emergency Centre, Obninsk (Russian Federation)]

    1996-07-01

    The Software Environment of RODOS provides tools for processing and managing a large variety of different types of information, including those which are categorized in terms of meteorology, radiology, economy, emergency actions and countermeasures, rules, preferences, facts, maps, statistics, catalogues, models and methods. The main tasks of the Operating Subsystem OSY, which is based on the Client-Server Model, are the control of system operation, data management, and the exchange of information among various modules as well as the interaction with users in distributed computer systems. The paper describes the software environment of RODOS, in particular, the individual modules of its Operating Subsystem OSY, its distributed database, the geographical information system RoGIS, the on-line connections to radiological and meteorological networks and the software environment for the integration of external programs into the RODOS system.

  16. Rhinoplasty perioperative database using a personal digital assistant.

    Science.gov (United States)

    Kotler, Howard S

    2004-01-01

    To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld operating with 8 megabytes of memory. The handheld and desktop databases were maintained secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid search using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and enhancement of surgical skills.

  17. Database Marketplace 2010: Feast and Famine

    Science.gov (United States)

    Tenopir, Carol; Baker, Gayle; Grogg, Jill

    2010-01-01

    With fancy new software developments and growth in both the richness of content and delivery options for information resources, the Database Marketplace 2010 is a feast for buyers. Unfortunately, institutional budget cuts may force more of a famine mentality--with belt-tightening for most, and only purchases that are life-sustaining being served…

  18. CHIANTI—AN ATOMIC DATABASE FOR EMISSION LINES. XII. VERSION 7 OF THE DATABASE

    International Nuclear Information System (INIS)

    Landi, E.; Del Zanna, G.; Mason, H. E.; Young, P. R.; Dere, K. P.

    2012-01-01

    The CHIANTI spectral code consists of an atomic database and a suite of computer programs to calculate the optically thin spectrum of astrophysical objects and carry out spectroscopic plasma diagnostics. The database includes atomic energy levels, wavelengths, radiative transition probabilities, collision excitation rate coefficients, and ionization and recombination rate coefficients, as well as data to calculate free-free, free-bound, and two-photon continuum emission. Version 7 has been released, which includes several new ions, significant updates to existing ions, as well as Chianti-Py, the implementation of CHIANTI software in the Python programming language. All data and programs are freely available at http://www.chiantidatabase.org, while the Python interface to CHIANTI can be found at http://chiantipy.sourceforge.net.
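
    A short example of the ChiantiPy interface mentioned above; it assumes ChiantiPy is installed and the XUVTOP environment variable points at the CHIANTI data files, and the exact method and attribute names should be verified against the ChiantiPy documentation.

        import numpy as np
        import ChiantiPy.core as ch

        temperature = np.logspace(5.8, 6.8, 21)   # K
        density = 1.0e9                           # electron density, cm^-3

        # Build an Fe XIV ion object and solve for its level populations.
        fe14 = ch.ion("fe_14", temperature=temperature, eDensity=density)
        fe14.populate()
        print(fe14.Population["population"].shape)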

  19. CHIANTI-AN ATOMIC DATABASE FOR EMISSION LINES. XII. VERSION 7 OF THE DATABASE

    Energy Technology Data Exchange (ETDEWEB)

    Landi, E. [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Del Zanna, G.; Mason, H. E. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Young, P. R. [College of Science, George Mason University, 4400 University Drive, Fairfax, VA, 22030 (United States); Dere, K. P. [School of Physics, Astronomy and Computational Sciences, MS 6A2, George Mason University, 4400 University Drive, Fairfax, VA 22030 (United States)

    2012-01-10

    The CHIANTI spectral code consists of an atomic database and a suite of computer programs to calculate the optically thin spectrum of astrophysical objects and carry out spectroscopic plasma diagnostics. The database includes atomic energy levels, wavelengths, radiative transition probabilities, collision excitation rate coefficients, and ionization and recombination rate coefficients, as well as data to calculate free-free, free-bound, and two-photon continuum emission. Version 7 has been released, which includes several new ions, significant updates to existing ions, as well as Chianti-Py, the implementation of CHIANTI software in the Python programming language. All data and programs are freely available at http://www.chiantidatabase.org, while the Python interface to CHIANTI can be found at http://chiantipy.sourceforge.net.

  20. DNAGear- a free software for spa type identification in Staphylococcus aureus

    Science.gov (United States)

    2012-01-01

    Background Staphylococcus aureus is both a human commensal and an important human pathogen, responsible for community-acquired and nosocomial infections ranging from superficial wound infections to invasive infections, such as osteomyelitis, bacteremia and endocarditis, pneumonia or toxic shock syndrome, with a mortality rate of up to 40%. S. aureus reveals a high genetic polymorphism, and detecting the genotypes is extremely useful to manage and prevent possible outbreaks and to understand the route of infection. One current and widespread typing method is based on the X region of the spa gene, composed of a succession of repeats of 21 to 27 bp. More than 10000 types are known. Extracting the repeats is impossible by hand and needs dedicated software. Unfortunately the only software on the market is a commercial program from Ridom. Findings This article presents DNAGear, a free and open source software with a user-friendly interface, written entirely in Java on top of the NetBeans Platform, to perform spa typing, detect new repeats and new spa types, and synchronize the files automatically with the open access database. The installation is easy and the application is platform independent. In fact, spa identification is a formal regular expression matching problem and the results are 100% exact. As the program uses Java modules built on well-established string-manipulation algorithms, the exactness of the solution is fully established. Conclusions DNAGear is able to identify the types of S. aureus sequences and detect both new types and repeats. Compared to manual processing, which is time consuming and error prone, this application saves a lot of time and effort and gives very reliable results. Additionally, users do not need to prepare the forward-reverse sequences manually, or even by using additional tools. They can simply create them in DNAGear and perform the typing task. In short, researchers who do not have commercial software will

  1. DNAGear--a free software for spa type identification in Staphylococcus aureus.

    Science.gov (United States)

    AL-Tam, Faroq; Brunel, Anne-Sophie; Bouzinbi, Nicolas; Corne, Philippe; Bañuls, Anne-Laure; Shahbazkia, Hamid Reza

    2012-11-19

    Staphylococcus aureus is both a human commensal and an important human pathogen, responsible for community-acquired and nosocomial infections ranging from superficial wound infections to invasive infections, such as osteomyelitis, bacteremia and endocarditis, pneumonia or toxic shock syndrome, with a mortality rate of up to 40%. S. aureus reveals a high genetic polymorphism, and detecting the genotypes is extremely useful to manage and prevent possible outbreaks and to understand the route of infection. One current and widespread typing method is based on the X region of the spa gene, composed of a succession of repeats of 21 to 27 bp. More than 10000 types are known. Extracting the repeats is impossible by hand and needs dedicated software. Unfortunately the only software on the market is a commercial program from Ridom. This article presents DNAGear, a free and open source software with a user-friendly interface, written entirely in Java on top of the NetBeans Platform, to perform spa typing, detect new repeats and new spa types, and synchronize the files automatically with the open access database. The installation is easy and the application is platform independent. In fact, spa identification is a formal regular expression matching problem and the results are 100% exact. As the program uses Java modules built on well-established string-manipulation algorithms, the exactness of the solution is fully established. DNAGear is able to identify the types of S. aureus sequences and detect both new types and repeats. Compared to manual processing, which is time consuming and error prone, this application saves a lot of time and effort and gives very reliable results. Additionally, users do not need to prepare the forward-reverse sequences manually, or even by using additional tools. They can simply create them in DNAGear and perform the typing task. In short, researchers who do not have commercial software will benefit a lot from this
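
    The core of spa typing as a regular-expression matching problem can be sketched in a few lines; the repeat sequences and the repeat-code and type tables below are invented placeholders, since real assignments come from the public spa database the tool synchronizes with.

        import re

        REPEAT_CODES = {   # hypothetical 24-bp repeat -> repeat code
            "AAAGAAGACAACAACCCAGCACAA": "r01",
            "AAAGAAGACAACAAGCCTGGTACA": "r02",
        }
        SPA_TYPES = {("r01", "r02", "r01"): "t999"}   # hypothetical profile -> type

        def spa_type(x_region):
            """Map the X-region repeat succession to a spa type (None = new)."""
            pattern = "|".join(REPEAT_CODES)   # alternation over known repeats
            profile = tuple(REPEAT_CODES[m.group(0)]
                            for m in re.finditer(pattern, x_region))
            return SPA_TYPES.get(profile)

        seq = ("AAAGAAGACAACAACCCAGCACAA"
               "AAAGAAGACAACAAGCCTGGTACA"
               "AAAGAAGACAACAACCCAGCACAA")
        print(spa_type(seq))   # -> t999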

  2. The detector of BES III muon constructs with the quality control database

    International Nuclear Information System (INIS)

    Yao Ning; Chinese Academy of Sciences, Beijing; Zheng Guoheng; Yang Lei; Zhang Jiawen; Han Jifeng; Xie Yuguang; Zhao Jianbing; Chen Jin

    2006-01-01

    Because of the characteristics of these software packages, the authors used MySQL, PHP and Apache to construct the quality control database. The authors show the structure of the BES III muon detector and explain why the database had to be constructed. They show the results that the database can present. Users can access the system through its web site, which retrieves data on request from the database and can display results in dynamically created images. The database is the transparent technical support platform for the maintenance of the detector. (authors)

  3. Automation of the software production process for multiple cryogenic control applications

    OpenAIRE

    Fluder, Czeslaw; Lefebvre, Victor; Pezzetti, Marco; Plutecki, Przemyslaw; Tovar-González, Antonio; Wolak, Tomasz

    2018-01-01

    The development of process control systems for the cryogenic infrastructure at CERN is based on an automatic software generation approach. The overall complexity of the systems, their frequent evolution as well as the extensive use of databases, repositories, commercial engineering software and CERN frameworks have led to further efforts towards improving the existing automation based software production methodology. A large number of control system upgrades were successfully performed for th...

  4. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
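
    The general "shredding" idea behind such object-relational XML storage can be sketched by decomposing a document into node rows that SQL can search across both context (tag) and content; SQLite stands in here for Oracle 8i purely for self-containment, and the document is invented.

        import sqlite3
        import xml.etree.ElementTree as ET

        doc = ("<report><title>Wind tunnel test</title>"
               "<body>flow separation observed at high alpha</body></report>")

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, tag TEXT, content TEXT)")

        # Shred the XML tree into relational rows, one per element.
        for elem in ET.fromstring(doc).iter():
            conn.execute("INSERT INTO node (tag, content) VALUES (?, ?)",
                         (elem.tag, (elem.text or "").strip()))

        # Keyword search across both context (tag) and content:
        for row in conn.execute("SELECT tag, content FROM node WHERE content LIKE ?",
                                ("%flow%",)):
            print(row)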

  5. Open source hardware and software platform for robotics and artificial intelligence applications

    Science.gov (United States)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems created a flexible and expandable platform for further developments in the future, both in the software and hardware areas, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots.

  6. Open source hardware and software platform for robotics and artificial intelligence applications

    International Nuclear Information System (INIS)

    Liang, S Ng; Tan, K O; Clement, T H Lai; Ng, S K; Mohammed, A H Ali; Mailah, Musa; Yussof, Wan Azhar; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-01-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems created a flexible and expandable platform for further developments in the future, both in the software and hardware areas, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots. (paper)

  7. Advanced Transport Operating System (ATOPS) utility library software description

    Science.gov (United States)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  8. User’s Manual for the Simulation of Energy Consumption and Emissions from Rail Traffic Software Package

    DEFF Research Database (Denmark)

    Cordiero, Tiago M.; Lindgreen, Erik Bjørn Grønning; Sorenson, Spencer C

    2005-01-01

    The ARTEMIS rail emissions model was implemented in a Microsoft Excel software package that includes data from the GISCO database on railway traffic. This report is the user’s manual for the aforementioned software; it includes information on how to run the program and an overview of how it works… The software is composed of Excel Macros (Visual Basic) and database sheets included in one Excel file…

  9. CD-ROM for the PGAA-IAEA database

    International Nuclear Information System (INIS)

    Firestone, R.B.; Zerking, V.

    2007-01-01

    Both the database of prompt gamma rays from slow neutron capture for elemental analysis and the results of this CRP are available on the accompanying CD-ROM. The file index.html is the home page for the CD-ROM, and provides links to the following information: (a) The CRP - General information, papers and reports relevant to this CRP. (b) The PGAA-IAEA database viewer - An interactive program to display and search the PGAA database by isotope, energy or capture cross-section. (c) The Database of Prompt Gamma Rays from Slow Neutron Capture for Elemental Analysis - This report. (d) The PGAA database files - Adopted PGAA database and associated files in EXCEL, PDF and Text formats. The archival databases by Lone et al. and by Reedy and Frankle are also available. (e) The Evaluated Gamma-Ray Activation File (EGAF) - The adopted PGAA database in ENSDF format. Data can be viewed with the Isotope Explorer 2.2 ENSDF Viewer. (f) The PGAA database evaluation - ENSDF format versions of the adopted PGAA database, and the Budapest and ENSDF isotopic input files. Decay scheme balance and statistical analysis summaries are provided. (g) The Isotope Explorer 2.2 ENSDF viewer - Windows software for viewing the level scheme drawings and tables provided in ENSDF format. The complete ENSDF database is included, as of December 2002. The databases and viewers are discussed in greater detail in the following sections

  10. Database tools for enhanced analysis of TMX-U data. Revision 1

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed offline from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving offline data analysis environment on the USC computers

  11. Aspect-Oriented Model-Driven Software Product Line Engineering

    Science.gov (United States)

    Groher, Iris; Voelter, Markus

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.

  12. DIII-D physics analysis database

    International Nuclear Information System (INIS)

    Bramson, G.; Schissel, D.P.; DeBoo, J.C.; St John, H.

    1990-10-01

    Since June 1986 the DIII-D tokamak has had over 16000 discharges, accumulating more than 250 gigabytes of raw data (currently over 30 Mbytes per discharge). The centralized DIII-D databases and the associated support software described earlier provide the means to extract, analyze, store, and display reduced sets of data for specific physics issues. The confinement, stability, transition, and cleanliness databases consist of more than 7500 records of reduced diagnostic data. Each database record corresponds to a specific snapshot in time for a selected discharge. Recently some profile datasets have been implemented. Diagnostic data are fit by a cubic spline or a parabola by the in-house ENERGY code to provide density, temperature, radiated power, effective charge (Z_eff), and rotation velocity profiles. These fits are stored in the profile datasets, which are inputs for the ONETWO code, which computes transport data. 3 refs., 4 figs

  13. Scale out databases for CERN use cases

    CERN Document Server

    Baranowski, Zbigniew; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log dat...

  14. DESIGN AND CONSTRUCTION OF A FOREST SPATIAL DATABASE: AN APPLICATION

    Directory of Open Access Journals (Sweden)

    Turan Sönmez

    2006-11-01

    Full Text Available The General Directorate of Forests (GDF) has not yet created a spatial forest database with which to manage forests and catch up with the developed countries in forestry. The lack of a spatial forest database results in redundant collection of spatial data and communication problems among the forestry organizations; it also causes Turkish forestry to lag behind the informatics era. To solve these problems, GDF should establish a spatial forest database supported by a Geographic Information System (GIS). Designing a GIS-supported spatial database that provides accurate, timely and current data/information for decision makers and operators in forestry, and developing a sample interface program to apply and monitor classical forest management plans, is paramount in the contemporary forest management planning process. This research is composed of three major stages: (i) a spatial prototype database design considering the requirements of the three hierarchical organizations of GDF (regional directorate of forests, forest enterprise, and territorial division), (ii) a user interface program developed to apply and monitor classical management plans based on the designed database, and (iii) the implementation of the designed database and its user interface in the Artvin Central Planning Unit.

  15. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.; Masala, G.L.; Oliva, P.; Stumbo, S.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M.E.; Retico, A.; Fauci, F.; Magro, R.; Raso, G.; Lauria, A.; Palmiero, R.; Lopez Torres, E.; Tangaro, S.

    2003-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed a CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18x24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given 'suspicion level' of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as 'second reader' will also be presented.

  16. A spatial database for landslides in northern Bavaria: A methodological approach

    Science.gov (United States)

    Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit

    2018-04-01

    Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open source software solutions (PostgreSQL, PostGIS), ensuring good interoperability among the various software components and enabling further extensions through specific adaptations of self-developed software. Beyond that, WISL was designed for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations at the local server. In its current state, WISL already enables extensive analysis and queries. This paper presents an example analysis of landslides in Oxfordian Limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. Future enhancements of the WISL database will focus on data extensions to increase research possibilities, as well as on transferring the system to other regions and countries.
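
    The kind of "in-database" analysis mentioned above can be expressed as a spatial SQL query executed on the server. A minimal sketch with psycopg2 and PostGIS follows; the table and column names are hypothetical, not WISL's actual schema:

      import psycopg2

      conn = psycopg2.connect("dbname=wisl user=reader")
      cur = conn.cursor()
      cur.execute("""
          SELECT l.id, l.activity
          FROM landslides AS l
          JOIN infrastructure AS i
            ON ST_DWithin(l.geom, i.geom, 100)  -- landslides within 100 m
          WHERE l.lithology = 'Oxfordian Limestone';
      """)
      endangering = cur.fetchall()  # computed at the local server, as described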

  17. Development of the software dead time methodology for the 4πβ-γ software coincidence system analysis program

    International Nuclear Information System (INIS)

    Toledo, Fabio de; Brancaccio, Franco; Dias, Mauro da Silva

    2009-01-01

    The Laboratorio de Metrologia Nuclear - LMN (Nuclear Metrology Laboratory) at IPEN-CNEN/SP, Sao Paulo, Brazil, developed a new Software Coincidence System (SCS) for 4πβ-γ radioisotope standardization. SCS is composed of the data acquisition hardware, for coincidence data recording, and the coincidence data analysis program that calculates the radioactive activity of the target sample. Due to intrinsic signal sampling characteristics of the hardware, a single saturated pulse gives rise to multiple undesired data recordings. Pulse pileup also leads to bad data recording. As the beta counting rates are much greater than the gamma ones, due to the high 4π-geometry beta detection efficiencies, the beta counting increases significantly because of multiple pulse recordings, resulting in a corresponding increase in the calculated activity value. In order to minimize the effect of such bad recordings, a software dead time value was introduced in the coincidence analysis program, under development at LMN, to discard multiple recordings due to pulse pileup or saturation. This work presents the methodology developed to determine the optimal software dead time value for attaining more accurate results, and discusses the results, pointing out possibilities for software improvement. (author)
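
    The software dead time idea can be illustrated with a short sketch (not the LMN analysis program itself): after each accepted pulse, any recordings that follow within the dead-time window are discarded as pileup or saturation artifacts.

      def apply_dead_time(timestamps, dead_time):
          """Keep only events separated by more than dead_time (same units)."""
          accepted, last = [], None
          for t in sorted(timestamps):
              if last is None or t - last > dead_time:
                  accepted.append(t)  # genuine pulse
                  last = t
          return accepted  # multiple recordings of one pulse are dropped

      beta_events = [0.0, 0.3, 0.31, 0.32, 1.7, 1.71, 2.5]  # hypothetical, ms
      print(apply_dead_time(beta_events, dead_time=0.05))   # [0.0, 0.3, 1.7, 2.5]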

  18. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  19. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database; TOPICAL

    International Nuclear Information System (INIS)

    Brown, S

    2001-01-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO™ exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages

  20. Database on Demand: insight how to build your own DBaaS

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operation, give some insight into our software engineering redesign, and outline its near-future evolution.

  1. Database on Demand: insight how to build your own DBaaS

    Science.gov (United States)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operation, give some insight into our software engineering redesign, and outline its near-future evolution.

  2. Access database application in medical treatment management platform

    International Nuclear Information System (INIS)

    Wu Qingming

    2014-01-01

    For timely, accurate and flexible access to medical expense data, we applied the Microsoft Access 2003 database management software to establish a management platform for medical expenses. With this platform, overall hospital medical expenses can be controlled through real-time monitoring. Using the Access database management platform for medical expenses not only changes the management model, but also promotes a sound management system for medical expenses. (authors)

  3. The Brainomics/Localizer database.

    Science.gov (United States)

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only anatomical MRI scans, functional MRI data, behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. EDS operator and control software

    International Nuclear Information System (INIS)

    Ott, L.L.

    1985-04-01

    The Enrichment Diagnostic System (EDS) was developed at Lawrence Livermore National Laboratory (LLNL) to acquire, display and analyze large quantities of transient data for a real-time Advanced Vapor Laser Isotope Separation (AVLIS) experiment. Major topics discussed in this paper are the EDS operator interface (SHELL) program, the data acquisition and analysis scheduling software, and the graphics software. The workstation concept used in EDS, the software used to configure a user's workstation, and the ownership and management of a diagnostic are described. An EDS diagnostic is a combination of hardware and software designed to study specific aspects of the process. Overall system performance is discussed from the standpoint of scheduling techniques, evaluation tools, optimization techniques, and program-to-program communication methods. EDS is based on a data-driven design which keeps the need to modify software to a minimum. This design requires a fast and reliable database management system. A third-party database management product, Berkeley Software System Database, written explicitly for HP1000s, is used for all EDS databases. All graphics are done with an in-house graphics product, the Device Independent Graphics Library (DIGLIB). Examples of devices supported by DIGLIB are: Versatec printer/plotters, Raster Technologies Graphic Display Controllers, and HP terminals (HP264x and HP262x). The benefits derived from using HP hardware and software, as well as obstacles imposed by the HP environment, are presented in relation to EDS development and implementation

  5. Noise data management using commercially available data-base software

    International Nuclear Information System (INIS)

    Damiano, B.; Thie, J.A.

    1988-01-01

    A data base has been created using commercially available software to manage the data collected by an automated noise data acquisition system operated by Oak Ridge National Laboratory at the Fast Flux Test Facility (FFTF). The data base was created to store, organize, and retrieve selected features of the nuclear and process signal noise data, because the large volume of data collected by the automated system makes manual data handling and interpretation based on visual examination of noise signatures impractical. Compared with manual data handling, use of the data base allows the automatically collected data to be utilized more fully and effectively. The FFTF noise data base uses the Oracle Relational Data Base Management System implemented on a desktop personal computer

  6. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  7. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  8. Inside a VAMDC data node—putting standards into practical software

    Science.gov (United States)

    Regandell, Samuel; Marquart, Thomas; Piskunov, Nikolai

    2018-03-01

    Access to molecular and atomic data is critical for many forms of remote sensing analysis across different fields. Many atomic and molecular databases are, however, highly specialised for their intended application, complicating the querying and combination of data between sources. The Virtual Atomic and Molecular Data Centre, VAMDC, is an electronic infrastructure that allows each database to register as a 'node'. Through services such as VAMDC's portal website, users can then access and query all nodes in a homogenised way. Today all major atomic and molecular databases are attached to VAMDC. This article describes the software tools we developed to help data providers create and manage a VAMDC node. It gives an overview of the VAMDC infrastructure and of the various standards it uses. The article then discusses the development choices made and how the standards are implemented in practice. It concludes with a full example of implementing a VAMDC node using a real-life case as well as future plans for the node software.

  9. Children Composing and the Tonal Idiom

    Science.gov (United States)

    Roels, Johanna Maria; Van Petegem, Peter

    2016-01-01

    Existing studies have demonstrated how children compose, experiment and use their imagination within the conventions of the tonal idiom with functional harmony. However, one area of research that has hardly been explored is how tonality emerges in the compositions of children who compose by transforming their own non-musical ideas, such as their…

  10. Adolescents' Dialogic Composing with Mobile Phones

    Science.gov (United States)

    Warner, Julie

    2016-01-01

    This 14-month study examined the phone-based composing practice of three adolescents. Given the centrality of mobile phones to youth culture, the researcher sought to create a description of the participants' composing practices with these devices. Focal participants were users of Twitter and Instagram, two social media platforms that are usually…

  11. Open Source Vulnerability Database Project

    Directory of Open Access Journals (Sweden)

    Jake Kouns

    2008-06-01

    Full Text Available This article introduces the Open Source Vulnerability Database (OSVDB) project, which manages a global collection of computer security vulnerabilities, available for free use by the information security community. This collection contains information on known security weaknesses in operating systems, software products, protocols, hardware devices, and other infrastructure elements of information technology. The OSVDB project is intended to be the centralized global open source vulnerability collection on the Internet.

  12. Universally composable protocols with relaxed set-up assumptions

    DEFF Research Database (Denmark)

    Barak, Boaz; Canetti, Ran; Nielsen, Jesper Buus

    2004-01-01

    A desirable goal for cryptographic protocols is to guarantee security when the protocol is composed with other protocol instances. Universally composable (UC) protocols provide this guarantee in a strong sense: A protocol remains secure even when composed concurrently with an unbounded number of ...

  13. Asynchronous data change notification between database server and accelerator controls system

    International Nuclear Information System (INIS)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-01-01

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMSs) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. The method works for all DBMSs which provide database trigger functionality: ADCN is realized by combining the database trigger mechanism, which is supported by all major DBMSs, with server processes built on the client/server architectures familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes an ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
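
    A minimal sketch of the trigger-plus-relay pattern, using SQLite as a stand-in for a full DBMS (the paper's production systems use Oracle triggers with EPICS/CDEV/ADO servers): a trigger appends each change to a log table, and a relay process polls the log and pushes new events to its clients.

      import sqlite3

      db = sqlite3.connect("controls.db")
      db.executescript("""
          CREATE TABLE IF NOT EXISTS setpoints(name TEXT PRIMARY KEY, value REAL);
          CREATE TABLE IF NOT EXISTS change_log(
              id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, value REAL);
          CREATE TRIGGER IF NOT EXISTS log_change
          AFTER UPDATE ON setpoints BEGIN
              INSERT INTO change_log(name, value) VALUES (NEW.name, NEW.value);
          END;
      """)

      def relay(last_seen=0):
          """Poll the change log and notify clients of anything new."""
          rows = db.execute(
              "SELECT id, name, value FROM change_log WHERE id > ?",
              (last_seen,)).fetchall()
          for event_id, name, value in rows:
              print("notify clients:", name, "=", value)  # a SET in a real system
              last_seen = event_id
          return last_seen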

  14. A process control software package for the SRS

    International Nuclear Information System (INIS)

    Atkins, V.R.; Poole, D.E.; Rawlinson, W.R.

    1980-03-01

    The development of software to give high-level access from application programs for monitoring and control of the Daresbury Synchrotron Radiation Source on a network-wide basis is described. The design and implementation of the control system database, a special supervisor call and an 'executive'-type task handling of all process input/output services for the 7/32 (which runs under OS/32-MT), and process control 'device driver' software for the 7/16 (run under L5/16-MT) are included. (UK)

  15. Development of an automation software for reconciliation of INIS/ETDE thesauruses

    International Nuclear Information System (INIS)

    Singh, Manoj; Gupta, Rajiv; Prakasan, E.R.; Vijai Kumar

    1999-01-01

    ETDE (Energy Technology Data Exchange) and INIS (International Nuclear Information System) thesauruses contain nearly twenty thousand descriptors and are not necessarily identical. A project has been undertaken by the international organisations to make a common thesaurus for both INIS and ETDE to facilitate better exchange and retrieval of information between/from these databases. This paper describes the automation implemented during our participation in the project to reconcile the structures of the word blocks in the ETDE and INIS thesauruses, with respect to the descriptors currently in the two thesauruses, through a PC-based RDBMS software. The software, THEMERGE, was developed in the FoxPro 2.5 relational database management system. It handles all possible reconciliation recommendations suggested by specialists, printing the recommendation sheet for later uploading. This has not only widened the scope of flexibility, portability and convertibility of recommendations, but has also helped to achieve quicker project completion. (author)

  16. Integrating Free Computer Software in Chemistry and Biochemistry Instruction: An International Collaboration

    Science.gov (United States)

    Cedeno, David L.; Jones, Marjorie A.; Friesen, Jon A.; Wirtz, Mark W.; Rios, Luz Amalia; Ocampo, Gonzalo Taborda

    2010-01-01

    At the Universidad de Caldas, Manizales, Colombia, we used their new computer facilities to introduce chemistry graduate students to biochemical database mining and quantum chemistry calculations using freeware. These hands-on workshops allowed the students a strong introduction to easily accessible software and how to use this software to begin…

  17. Design And Implementation Of Tool For Detecting Anti-Patterns In Relational Database

    Directory of Open Access Journals (Sweden)

    Gaurav Kumar

    2017-07-01

    Full Text Available Anti-patterns are poor solutions to design and implementation problems. Developers may introduce anti-patterns in their software systems because of time pressure, or lack of understanding, communication and/or skills. Anti-patterns create problems in software maintenance and development. Database anti-patterns lead to complex and time-consuming query processing and loss of integrity constraints. Detecting anti-patterns could reduce costs, efforts and resources. Researchers have proposed approaches to detect anti-patterns in software development, but not much research has been done on database anti-patterns. This report presents two approaches to detect schema design anti-patterns in relational databases. Our first approach is based on pattern matching: we look into potential candidates based on schema patterns. The second approach is machine-learning based: we generate features of possible anti-patterns and build an SVM-based classifier to detect them. Here we look into four anti-patterns: (a) Multi-valued Attribute, (b) Naive Tree-based, (c) Entity-Attribute-Value, and (d) Polymorphic Association. We measure the precision and recall of each approach and compare the results. The SVM-based approach provides higher precision and recall with a larger training dataset.
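
    As a rough sketch of the machine-learning approach (the feature set and data below are invented for illustration, not the authors'), each candidate table is reduced to a numeric feature vector and an SVM classifier flags likely anti-patterns:

      from sklearn.svm import SVC

      # features per table: [columns named like attr1..attrN,
      #                      self-referencing foreign keys, generic value columns]
      X = [[4, 0, 0], [0, 1, 0], [0, 0, 2], [0, 0, 0], [3, 0, 0], [0, 0, 0]]
      y = [1, 1, 1, 0, 1, 0]  # 1 = anti-pattern present, 0 = clean schema

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict([[5, 0, 0]]))  # classify a new candidate table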

  18. Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information

    Science.gov (United States)

    2017-11-01

    the Army Modular Active Protection System (MAPS) program to provide end-to-end APS modeling and simulation capabilities. The SSES simulation features... A research project on scalable database design was initiated in support of SSES modularization efforts with respect to 4 major software components... Acronyms: Iron Curtain; KE, kinetic energy; MAPS, Modular Active Protective System; OLE DB, object linking and embedding database; RDB, relational database; RPG.

  19. Database on epidemiological survey in high background radiation research

    International Nuclear Information System (INIS)

    Zhou Sunyuan; Guo Furong; Liu Yusheng

    1992-01-01

    In order to store and check the data of the health survey in the high background radiation area (HBRA) and control area in Guangdong Province, and to use these data in the future, three databases were set up using the RBASE 5000 database software: (1) HD, the database based on the household registers especially established for the health survey from 1979 to 1986, covering more than 160000 subjects and 2200000 data; (2) DC, the database based on the registration cards of deaths from cancers and all other diseases during the period 1975-1986, including more than 10000 cases and 260000 data; (3) MCC, the database for the case-control study on mutation-related factors for four kinds of cancers (liver, stomach, lung cancers and leukemia), embracing 626 subjects and close to 90000 data. The data in the databases were checked against the original records and compared with the manual analytical results

  20. The HITRAN 2004 molecular spectroscopic database

    Energy Technology Data Exchange (ETDEWEB)

    Rothman, L.S. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States)]. E-mail: lrothman@cfa.harvard.edu; Jacquemart, D. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States); Barbe, A. [Universite de Reims-Champagne-Ardenne, Groupe de Spectrometrie Moleculaire et Atmospherique, 51062 Reims (France)] (and others)

    2005-12-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing.

  1. The HITRAN 2004 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Jacquemart, D.; Barbe, A.

    2005-01-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing

  2. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent results obtained during PhD research (actually part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the other side of the database world such tools are mostly missing and sorely needed.
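
    The core of such a benchmark is a timed workload against each target engine. A minimal sketch follows, with SQLite standing in for an arbitrary back-end; a real tool would swap in drivers for each SQL or NoSQL system under test:

      import sqlite3, time

      def bench_inserts(n=10_000):
          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE kv(k INTEGER PRIMARY KEY, v TEXT)")
          start = time.perf_counter()
          conn.executemany("INSERT INTO kv VALUES (?, ?)",
                           ((i, f"value-{i}") for i in range(n)))
          conn.commit()
          return n / (time.perf_counter() - start)  # operations per second

      print(f"{bench_inserts():,.0f} inserts/s")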

  3. Development of design and analysis software for advanced nuclear system

    International Nuclear Information System (INIS)

    Wu Yican; Hu Liqin; Long Pengcheng; Luo Yuetong; Li Yazhou; Zeng Qin; Lu Lei; Zhang Junjun; Zou Jun; Xu Dezheng; Bai Yunqing; Zhou Tao; Chen Hongli; Peng Lei; Song Yong; Huang Qunying

    2010-01-01

    A series of professional codes, which are necessary software tools and data libraries for advanced nuclear system design and analysis, was developed by the FDS Team, including codes for automatic modeling, physics and engineering calculation, virtual simulation and visualization, system engineering and safety analysis, and the related database management. The development of this software series was undertaken as an exercise in nuclear informatics. This paper introduces the main functions and key techniques of the software series, as well as some tests and practical applications. (authors)

  4. LHCb Conditions database operation assistance systems

    International Nuclear Information System (INIS)

    Clemencic, M; Shapoval, I; Cattaneo, M; Degaudenzi, H; Santinelli, R

    2012-01-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is currently passing to the implementation stage.

  5. Current Comparative Table (CCT) automates customized searches of dynamic biological databases.

    Science.gov (United States)

    Landsteiner, Benjamin R; Olson, Michael R; Rutherford, Robert

    2005-07-01

    The Current Comparative Table (CCT) software program enables working biologists to automate customized bioinformatics searches, typically of remote sequence or HMM (hidden Markov model) databases. CCT currently supports BLAST, hmmpfam and other programs useful for gene and ortholog identification. The software is web based, has a BioPerl core and can be used remotely via a browser or locally on Mac OS X or Linux machines. CCT is particularly useful to scientists who study large sets of molecules in today's evolving information landscape because it color-codes all result files by age and highlights even tiny changes in sequence or annotation. By empowering non-bioinformaticians to automate custom searches and examine current results in context at a glance, CCT allows a remote database submission in the evening to influence the next morning's bench experiment. A demonstration of CCT is available at http://orb.public.stolaf.edu/CCTdemo and the open source software is freely available from http://sourceforge.net/projects/orb-cct.
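
    The kind of recurring remote search that CCT automates can be scripted, for example, with Biopython (shown here as a generic illustration, not CCT's own BioPerl core):

      from Bio.Blast import NCBIWWW, NCBIXML

      # Submit a protein query to the remote NCBI BLAST service.
      result = NCBIWWW.qblast("blastp", "nr", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
      record = NCBIXML.read(result)
      for hit in record.alignments[:3]:
          print(hit.title)  # a scheduler could diff these against yesterday's run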

  6. Essential Features for a Scholarly Journal Content Management and Peer Review Software

    Directory of Open Access Journals (Sweden)

    Fatima Sheikh Shoaie

    2010-03-01

    Full Text Available The present study investigates the software used by scientific journals for content management and peer review, in order to identify the essential features. These software packages are analyzed and presented in tabular format. A questionnaire was prepared and submitted to a panel composed of 15 referees, editors-in-chief, software designers and researchers. The essential features for software managing the review process were divided into three groups with populations of 10-15, 5-10 and 0-5, respectively. The majority of peer review software features, in the view of the panelists, fell into the group with a population of 10-15. Finally, it should be said that the features represented by the first group must be taken into account when designing or purchasing peer review software. The second-tier features (with populations of 5-10) are recommended depending on the journal's status and capabilities. The third-tier features were altogether discounted due to low population

  7. Database management in the new GANIL control system

    International Nuclear Information System (INIS)

    Lecorche, E.; Lermine, P.

    1993-01-01

    At the start of the new control system design, the decision was made to manage the huge amount of data by means of a database management system. The first implementations, built on the INGRES relational database, are described. Real-time and data management domains are shown, and problems induced by Ada/SQL interfacing are briefly discussed. Database management concerns the whole hardware and software configuration for the GANIL pieces of equipment and the alarm system, either for the alarm configuration or for the alarm logs. Another field of application encompasses beam parameter archiving as a function of the various kinds of beams accelerated at GANIL (ion species, energies, charge states). (author) 3 refs., 4 figs.

  8. Computational tools and resources for metabolism-related property predictions. 1. Overview of publicly available (free and commercial) databases and software.

    Science.gov (United States)

    Peach, Megan L; Zakharov, Alexey V; Liu, Ruifeng; Pugliese, Angelo; Tawa, Gregory; Wallqvist, Anders; Nicklaus, Marc C

    2012-10-01

    Metabolism has been identified as a defining factor in drug development success or failure because of its impact on many aspects of drug pharmacology, including bioavailability, half-life and toxicity. In this article, we provide an outline and descriptions of the resources for metabolism-related property predictions that are currently either freely or commercially available to the public. These resources include databases with data on, and software for prediction of, several end points: metabolite formation, sites of metabolic transformation, binding to metabolizing enzymes and metabolic stability. We attempt to place each tool in historical context and describe, wherever possible, the data it was based on. For predictions of interactions with metabolizing enzymes, we show a typical set of results for a small test set of compounds. Our aim is to give a clear overview of the areas and aspects of metabolism prediction in which the currently available resources are useful and accurate, and the areas in which they are inadequate or missing entirely.

  9. Geo-scientific database for research and development purposes

    International Nuclear Information System (INIS)

    Tabani, P.; Mangeot, A.; Crabol, V.; Delage, P.; Dewonck, S.; Auriere, C.

    2012-01-01

    Document available in extended abstract form only. The Research and Development Division must manage, in a secure and reliable manner, a large amount of data from diverse scientific disciplines and acquisition means (observations, measurements, experiments, etc.). This management is particularly important for the Underground Research Laboratory, the source of many continuous measurement recordings. Thus, from its conception, Andra has implemented two tools for managing scientific information: the 'Acquisition System and Data Management' (SAGD) and the GEO database with its associated applications. Beyond its own needs, Andra wants to share its achievements with the scientific community, and it therefore provides the data stored in its databases, or samples of rock or water, when they are available. The Acquisition System and Data Management (SAGD) manages data from sensors installed at several sites. Some sites are on the surface (piezometric, atmospheric and environmental stations), the others are in the Underground Research Laboratory. The system also incorporates data from experiments at the Mont Terri Laboratory in Switzerland, in which Andra participates. SAGD fulfils these objectives by: making all experimental data from measurement points available in real time, on a single system, to scientists from Andra and to the different partners or providers who need them; displaying the recorded data on temporal windows with specific time steps; allowing remote control of the experiments; ensuring the traceability of all recorded information; and ensuring data storage in a database. SAGD was deployed in the first experimental drift at -445 m in November 2004. It was subsequently extended to the Mont Terri underground laboratory in Switzerland in 2005, to the entire surface logging network of the Meuse/Haute-Marne Center in 2008, and to the environmental network in 2011. All information is acquired, stored and managed by a software called Geoscope. This software

  10. The development of software and formation of a database on the main sources of environmental contamination in areas around nuclear power plants

    International Nuclear Information System (INIS)

    Palitskaya, T.A.; Novikov, A.V.; Makeicheva, M.A.; Ivanov, E.A.

    2004-01-01

    Providing environmental safety control in the process of nuclear power plant (NPP) operation, environmental protection and rational use of natural resources is one of the most important tasks of the Rosenergoatom Concern. To ensure environmental safety, trustworthy, complete and timely information is needed on the availability and condition of natural resources, and on the quality and contamination level of the natural environment. Industrial environmental monitoring allows obtaining, processing and evaluating data for making environmentally acceptable and economically efficient decisions. The industrial environmental monitoring system at NPPs is formed taking into account both radiation and non-radiation impact factors. Data on non-radiation factors of NPP impact are provided by a complex of special observations carried out by the NPPs' environment protection services. The gained information is transmitted to the Rosenergoatom Concern and entered into a database of the Environment Protection Division of the Concern's Department of Radiation Safety, Environment Protection and Nuclear Materials Accounting. The database on the main sources of environmental contamination in the areas around NPPs will provide a high level of environmental control authenticity and maintenance of the set standards, as well as automation of the most labor-consuming and frequently repeated types of operations. The applied software is being developed by specialists from the All-Russia Research Institute of Nuclear Power Plants on the basis of the database management system Microsoft SQL Server, using VBA and Microsoft Access. The data will be transmitted through open communication channels. Geo-referenced digital mapping information, based on ArcGIS and MapInfo, will be the main form of output data presentation. The Federal authority bodies, their regional units and the Concern's sub-divisions involved in environmental protection activities will be the database users.

  11. [Development of a Software for Automatically Generated Contours in Eclipse TPS].

    Science.gov (United States)

    Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin

    2015-03-01

    The automatic generation of planning targets and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop a software for automatically generating contours in Eclipse TPS. This software is named Contour Auto Margin (CAM), and is composed of contour operation functions, script-generation visualization, and script file operations. RESULTS: Ten cases of different cancers were separately selected; in Eclipse TPS 11.0, scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For the different cancers, there was no difference between automatically generated contours and manually created contours. CAM is a user-friendly and powerful software that can generate contours automatically and quickly in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists is improved.

  12. Design and realization of reports of database in VC++. net

    International Nuclear Information System (INIS)

    Zhu Haijun; Shen Liren; Liu Dekang

    2006-01-01

    The design and realization of database reports on the basis of VC++ .net is presented. First, report templates using Word-format files are introduced, and the method of filling the data table from the database is explained. The preview and printing functions, which call the Word software, are analyzed. The key code for generating reports automatically with Visual C++ .net is given. (authors)

  13. RECOVIR Software for Identifying Viruses

    Science.gov (United States)

    Chakravarty, Sugoto; Fox, George E.; Zhu, Dianhui

    2013-01-01

    Most single-stranded RNA (ssRNA) viruses mutate rapidly to generate a large number of strains with highly divergent capsid sequences. Determining the capsid residues or nucleotides that uniquely characterize these strains is critical in understanding the strain diversity of these viruses. RECOVIR (an acronym for "recognize viruses") software predicts the strains of some ssRNA viruses from their limited sequence data. Novel phylogenetic-tree-based databases of protein or nucleic acid residues that uniquely characterize these virus strains are created. Strains of input virus sequences (partial or complete) are predicted through residue-wise comparisons with the databases. RECOVIR uses these unique characterizing residues to automatically identify strains of partial or complete capsid sequences of picorna- and caliciviruses, two of the most highly diverse ssRNA virus families. Partition-wise comparisons of the database residues with the corresponding residues of more than 300 complete and partial sequences of these viruses resulted in correct strain identification for all of these sequences. This study shows the feasibility of creating databases of hitherto unknown residues uniquely characterizing the capsid sequences of two of the most highly divergent ssRNA virus families. These databases enable automated strain identification from partial or complete capsid sequences of these human and animal pathogens.
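
    The residue-wise comparison at the heart of this method can be sketched in a few lines; the signature data below are invented for demonstration:

      SIGNATURES = {
          "strain_A": {10: "K", 57: "G", 112: "F"},  # position -> residue
          "strain_B": {10: "R", 57: "A", 112: "F"},
      }

      def identify(seq):
          """Return the strain whose characterizing residues best match seq."""
          def score(sig):
              return sum(pos < len(seq) and seq[pos] == aa
                         for pos, aa in sig.items())
          return max(SIGNATURES, key=lambda s: score(SIGNATURES[s]))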

  14. Music and emotion-a composer's perspective.

    Science.gov (United States)

    Douek, Joel

    2013-01-01

    This article takes an experiential and anecdotal look at the daily lives and work of film composers as creators of music. It endeavors to work backwards from what practitioners of the art and craft of music do instinctively or unconsciously, and tries to shine a light on it as a conscious process. It examines the role of the film composer in the task of conveying an often complex set of emotions, and of communicating with an immediacy and universality that often sit outside common language. Through the experiences of the author, as well as interviews with composer colleagues, it explores both concrete and abstract ways in which music can bring meaning and magic to words and images, and serve as an underscore to our daily lives.

  15. Handling of network and database instabilities in CORAL

    International Nuclear Information System (INIS)

    Trentadue, R; Valassi, A; Kalkhof, A

    2012-01-01

    The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several back-ends and deployment models, direct client access to Oracle servers being one of the most important use cases. Since 2010, several problems have been reported by the LHC experiments in their use of Oracle through CORAL, involving application errors, hangs or crashes after the network or the database servers became temporarily unavailable. CORAL already provided some level of handling of these instabilities, which are due to external causes and cannot be avoided, but this proved to be insufficient in some cases and to be itself the cause of other problems, such as the hangs and crashes mentioned before, in other cases. As a consequence, a major redesign of the CORAL plugins has been implemented, with the aim of making the software more robust against these database and network glitches. The new implementation ensures that CORAL automatically reconnects to Oracle databases in a transparent way whenever possible and gently terminates the application when this is not possible. Internally, this is done by resetting all relevant parameters of the underlying back-end technology (OCI, the Oracle Call Interface). This presentation reports on the status of this work at the time of the CHEP2012 conference, covering the design and implementation of these new features and the outlook for future developments in this area.

  16. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young.

    1997-01-01

    The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and the 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype developed by applying rapidly evolving computer technology. The major results of the first year's research were the establishment of the architecture of the integrated database ensuring data consistency, and the building of a design database of the reactor coolant system and heavy components. Various software tools were also developed to search, share and utilize the data through networks, detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated program, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs

  17. Designing a Signal Conditioning System with Software Calibration for Resistor-feedback Patch Clamp Amplifier.

    Science.gov (United States)

    Hu, Gang; Zhu, Quanhui; Qu, Anlian

    2005-01-01

    In this paper, a programmable signal conditioning system based on software calibration for a resistor-feedback patch clamp amplifier (PCA) is described. The system is mainly composed of frequency correction, programmable gain, and filter stages whose parameters are configured automatically by software to minimize errors. A lab-designed data acquisition system (DAQ) is used to implement data collection and communication with a PC. The laboratory test results show good agreement with the design specifications.
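
    The software-calibration step can be sketched as follows: the measured gain and offset of each programmable-gain setting are stored at calibration time and applied to every subsequent reading (the table values here are hypothetical, not from the paper):

      # gain setting -> (measured gain, measured offset in volts)
      CAL = {1: (0.998, 0.002), 10: (9.95, 0.015), 100: (99.2, 0.12)}

      def calibrate(raw_volts, gain_setting):
          """Recover the true input voltage from a raw conditioned reading."""
          gain, offset = CAL[gain_setting]
          return (raw_volts - offset) / gain

      print(calibrate(0.507, 10))  # true input for a 0.507 V reading at gain 10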

  18. The UMIST database for astrochemistry 2006

    Science.gov (United States)

    Woodall, J.; Agúndez, M.; Markwick-Kemper, A. J.; Millar, T. J.

    2007-05-01

    Aims: We present a new version of the UMIST Database for Astrochemistry, the fourth such version to be released to the public. The current version contains some 4573 binary gas-phase reactions, an increase of 10% from the previous (1999) version, among 420 species, of which 23 are new to the database. Methods: Major updates have been made to ion-neutral reactions, neutral-neutral reactions, particularly at low temperature, and dissociative recombination reactions. We have included for the first time the interstellar chemistry of fluorine. In addition to the usual database, we have also released a reaction set in which the effects of dipole-enhanced ion-neutral rate coefficients are included. Results: These two reaction sets have been used in a dark cloud model and the results of these models are presented and discussed briefly. The database and associated software are available on the World Wide Web at www.udfa.net. Tables 1, 2, 4 and 9 are only available in electronic form at http://www.aanda.org
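
    The gas-phase entries in UMIST-style databases parameterize each two-body rate coefficient with the modified Arrhenius form k(T) = α(T/300)^β exp(−γ/T). A one-function sketch follows; the coefficient values shown are placeholders, not actual database entries:

      import math

      def rate_coefficient(alpha, beta, gamma, temperature):
          """Two-body rate coefficient (cm^3 s^-1) at temperature T in kelvin."""
          return alpha * (temperature / 300.0) ** beta * math.exp(-gamma / temperature)

      k = rate_coefficient(alpha=1.0e-9, beta=-0.5, gamma=0.0, temperature=10.0)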

  19. Some Aspects of Process Computers Configuration Control in Nuclear Power Plant Krsko - Process Computer Signal Configuration Database (PCSCDB)

    International Nuclear Information System (INIS)

    Mandic, D.; Kocnar, R.; Sucic, B.

    2002-01-01

    During the operation of NEK and other nuclear power plants it has been recognized that certain issues related to the usage of digital equipment and associated software in NPP technological process protection, control and monitoring are not adequately addressed in the existing programs and procedures. The term and the process of Process Computers Configuration Control join three 10CFR50 Appendix B quality requirements of Process Computers application in NPPs: Design Control, Document Control, and Identification and Control of Materials, Parts and Components. This paper describes the Process Computer Signal Configuration Database (PCSCDB), which was developed and implemented in order to resolve some aspects of Process Computer Configuration Control related to the signals or database points that exist in the life cycle of different Process Computer Systems (PCS) in Nuclear Power Plant Krsko. PCSCDB is a controlled master database related to the definition and description of the configurable database points associated with all Process Computer Systems in NEK. PCSCDB holds attributes related to the configuration of addressable and configurable real-time database points, and attributes related to signal life cycle references and history data, such as: Input/Output signals; manually input database points; program constants; setpoints; database points calculated by application programs or SCADA calculation tools; control flags (for example, enabling/disabling certain program features); signal acquisition design references to the DCM (Document Control Module, application software for document control within the Management Information System - MIS) and MECL (Master Equipment and Component List, MIS application software for identification and configuration control of plant equipment and components); usage of particular database points in particular application software packages and in man-machine interface features (display mimics, printout reports, ...); and signal history (EEAR Engineering

  20. VariVis: a visualisation toolkit for variation databases

    Directory of Open Access Journals (Sweden)

    Smith Timothy D

    2008-04-01

    Full Text Available. Background: With the completion of the Human Genome Project and recent advancements in mutation detection technologies, the volume of data available on genetic variations has risen considerably. These data are stored in online variation databases and provide important clues to the cause of diseases and to potential side effects of, or resistance to, drugs. However, the data presentation techniques employed by most of these databases make them difficult to use and understand. Results: Here we present a visualisation toolkit that can be employed by online variation databases to generate graphical models of gene sequence with corresponding variations and their consequences. The VariVis software package can run on any web server capable of executing Perl CGI scripts and can interface with numerous Database Management Systems and "flat-file" data files. VariVis produces two easily understandable graphical depictions of any gene sequence and matches these with variant data. While developed with the goal of improving the utility of human variation databases, the VariVis package can be used in any variation database to enhance utilisation of, and access to, critical information.

  1. LHCb Conditions Database Operation Assistance Systems

    CERN Multimedia

    Shapoval, Illya

    2012-01-01

    The Conditions Database of the LHCb experiment (CondDB) provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger, reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues: - an extension to the automatic content validation done by the “Oracle Streams” replication technology, to trap cases when the replication was unsuccessful; - an automated distribution process for the S...

  2. libChEBI: an API for accessing the ChEBI database.

    Science.gov (United States)

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
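
    As an illustration of the kind of lookup libChEBI enables, a short Python sketch; the class and method names follow the style of the Python version (libchebipy) but should be treated as assumptions here, since exact signatures may differ between releases:

    ```python
    # Sketch of programmatic ChEBI access via libChEBI's Python version.
    # Class/method names are assumptions based on the library's documented
    # style and may not match every release exactly.
    from libchebipy import ChebiEntity

    water = ChebiEntity('CHEBI:15377')   # look up an entity by ChEBI identifier
    print(water.get_name())              # primary name, e.g. 'water'
    print(water.get_charge())            # formal charge
    print(water.get_mass())              # molecular mass
    ```

    Because the library works from locally cached flat files rather than live web-service calls, lookups like the above can also run offline, which is the embedding scenario the abstract highlights.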

  3. Evolvable Neural Software System

    Science.gov (United States)

    Curtis, Steven A.

    2009-01-01

    The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  4. IAU Meteor Data Center-the shower database: A status report

    Science.gov (United States)

    Jopek, Tadeusz Jan; Kaňuchová, Zuzana

    2017-09-01

    Currently, the meteor shower part of the Meteor Data Center database includes 112 established showers and 563 in the working list, among them 36 with pro tempore status. The list of shower complexes contains 25 groups, of which 3 have established status and 1 has pro tempore status. In the past three years, new meteor showers submitted to the MDC database were detected amongst the meteors observed by CAMS stations (Cameras for Allsky Meteor Surveillance), those included in the EDMOND (European viDeo MeteOr Network Database), those collected by the Japanese SonotaCo Network, recorded in the IMO (International Meteor Organization) database, observed by the Croatian Meteor Network and, in the Southern Hemisphere, by the SAAMER radar. At the XXIX General Assembly of the IAU in Honolulu, Hawaii in 2015, the names of 18 showers were officially accepted and moved to the list of established ones. Also, one shower already officially named (3/SIA, the Southern iota Aquariids) was moved back to the working list of meteor showers. At the XXIX GA IAU the basic shower nomenclature rule was modified; the new formulation reads: "The general rule is that a meteor shower (and a meteoroid stream) should be named after the constellation that contains the nearest star to the radiant point, using the possessive Latin form". Over the last three years the MDC database was supplemented with earlier published original data on meteor showers, which permitted verification of the correctness of the MDC data and extension of the bibliographic information. Slowly but surely, new database software options are implemented and software bugs are corrected.

  5. TMT approach to observatory software development process

    Science.gov (United States)

    Buur, Hanne; Subramaniam, Annapurni; Gillies, Kim; Dumas, Christophe; Bhatia, Ravinder

    2016-07-01

    The purpose of the Observatory Software System (OSW) is to integrate all software and hardware components of the Thirty Meter Telescope (TMT) to enable observations and data capture; it is thus a complex software system defined by four principal software subsystems: Common Software (CSW), Executive Software (ESW), Data Management System (DMS) and Science Operations Support System (SOSS), all of which have interdependencies with the observatory control systems and data acquisition systems. The software development process and plan must therefore consider dependencies on other subsystems; manage architecture, interfaces and design; manage software scope and complexity; and standardize and optimize the use of resources and tools. Additionally, the TMT Observatory Software will largely be developed in India through TMT's workshare relationship with the India TMT Coordination Centre (ITCC) and through Indian software industry vendors, which adds complexity and challenges to the software development process, to communication and coordination of activities and priorities, and to measuring performance and managing quality and risk. The software project management challenge for the TMT OSW is thus a multi-faceted technical, managerial, communications and interpersonal-relations challenge. The approach TMT is using to manage this multi-faceted challenge is a combination of: establishing an effective geographically distributed software team (Integrated Product Team) with strong project management and technical leadership provided by the TMT Project Office (PO) and the ITCC partner, to manage plans, process, performance, risk and quality, and to facilitate effective communications; and establishing an effective cross-functional software management team composed of stakeholders, OSW leadership and ITCC leadership, to manage dependencies and software release plans, technical complexities and change to approved interfaces, architecture, design and tool set, and to facilitate

  6. Changes and challenges in the Software Engineering Laboratory

    Science.gov (United States)

    Pajerski, Rose

    1994-01-01

    Since 1976, the Software Engineering Laboratory (SEL) has been dedicated to understanding and improving the way in which one NASA organization, the Flight Dynamics Division (FDD), develops, maintains, and manages complex flight dynamics systems. The SEL is composed of three member organizations: NASA/GSFC, the University of Maryland, and Computer Sciences Corporation. During the past 18 years, the SEL's overall goal has remained the same: to improve the FDD's software products and processes in a measured manner. This requires that each development and maintenance effort be viewed, in part, as a SEL experiment which examines a specific technology or builds a model of interest for use on subsequent efforts. The SEL has undertaken many technology studies while developing operational support systems for numerous NASA spacecraft missions.

  7. Software System for the Calibration of X-Ray Measuring Instruments

    International Nuclear Information System (INIS)

    Gaytan-Gallardo, E.; Tovar-Munoz, V. M.; Cruz-Estrada, P.; Vergara-Martinez, F. J.; Rivero-Gutierrez, T.

    2006-01-01

    A software system that facilitates the calibration of X-ray measuring instruments used in medical applications is presented. The Secondary Standard Dosimetry Laboratory (SSDL) of the Nuclear Research National Institute in Mexico (ININ in Spanish) supports activities concerned with ionizing radiation in the medical area. One of these activities is the calibration of X-ray measuring instruments, in terms of air kerma or exposure, by the substitution method in an X-ray beam at a point where the rate has been determined by means of a standard ionization chamber. To automate this process, a software system has been developed; the calibration system is composed of an X-ray unit, a Dynalizer IIIU X-ray meter by RADCAL, a commercial data acquisition card, the software system and the units to be tested and calibrated. A quality control plan has been applied in the development of the software system, ensuring that quality assurance procedures and standards are being followed.

  8. Recording the LHCb data and software dependencies

    Science.gov (United States)

    Trisovic, Ana; Couturier, Ben; Gibson, Val; Jones, Chris

    2017-10-01

    In recent years awareness of the importance of preserving the experimental data and scientific software at CERN has been rising. To support this effort, we are presenting a novel approach to structuring the dependencies of the LHCb data and software to make them more accessible in the long-term future. In this paper, we detail the implementation of a graph database of these dependencies. We list the implications that can be deduced from graph mining (such as a search for legacy software), with emphasis on data preservation. Furthermore, we introduce a methodology for recreating the LHCb data, thus supporting reproducible research and data stewardship. Finally, we describe how this information is made available to the users on a web portal that promotes data and analysis preservation and good practice with analysis documentation.
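
    A hypothetical sketch of the kind of dependency-graph mining the paper describes, using the networkx package; the node names and the LHCb graph schema here are invented for illustration and do not reproduce the actual implementation:

    ```python
    # Hypothetical dependency graph: edges point from an artifact to what
    # it depends on. Node names are made up; the real LHCb schema differs.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edge('dataset/2012-stripping', 'DaVinci/v32r2')
    g.add_edge('DaVinci/v32r2', 'Gaudi/v23r4')
    g.add_edge('DaVinci/v29r1', 'Gaudi/v22r0')  # no dataset references this version

    # To recreate a dataset, collect everything reachable from it:
    print(nx.descendants(g, 'dataset/2012-stripping'))

    # Application versions nothing depends on are candidates for legacy status:
    print([n for n in g if n.startswith('DaVinci') and g.in_degree(n) == 0])
    ```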

  9. Composer: Authoring Tool for iTV Programs

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); R.M.R. Costa; L.F.G. Soares

    2008-01-01

    This paper presents Composer, an authoring tool that helps create interactive TV programs for the Brazilian Terrestrial Digital TV System. In Composer, several abstractions are defined, creating different document views (structural, temporal, layout and textual). One of these views, the

  10. Specdata: Automated Analysis Software for Broadband Spectra

    Science.gov (United States)

    Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.

    2017-06-01

    With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying multi-component mixtures that might result, for example, from the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open-source, interactive tool designed to simplify and greatly accelerate spectral analysis and discovery. Our software tool combines automated and manual components that free the user from computation, while giving them considerable flexibility to assign, manipulate, interpret and export their analysis. The automated - and key - component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features and subsequently assigns them to known molecules within an in-house database (Pickett .cat files, lists of frequencies, ...) or those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, control is then handed over to the user, who can choose to accept, decline or add additional species. Data visualization, statistical information, and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience. Among several options, the user can export a full report of the analysis or a peak file from which assigned lines have been removed. A user may also save their progress to continue at another time. Additional features of SPECdata help the user to maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit or update catalog or experiment entries.
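
    The core matching step such a query system performs is conceptually simple: assign experimental peaks to catalogued transitions within a frequency tolerance. A minimal sketch, with catalog contents and the tolerance invented for illustration:

    ```python
    # Minimal line-assignment sketch: match experimental peak frequencies
    # (MHz) against catalogued transitions within a tolerance. Catalog
    # entries and the tolerance below are illustrative only.
    catalog = {
        'HC3N J=5-4': 45490.31,
        'OCS J=4-3': 48651.60,
    }

    def assign(peaks_mhz, tolerance_mhz=0.05):
        assignments = []
        for peak in peaks_mhz:
            for species, freq in catalog.items():
                if abs(peak - freq) <= tolerance_mhz:
                    assignments.append((peak, species, peak - freq))
        return assignments

    print(assign([45490.33, 47123.88]))  # the second peak stays unassigned
    ```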

  11. Lessons Learned from resolving massive IPS database change for SPADES+

    International Nuclear Information System (INIS)

    Kim, Jin-Soo

    2016-01-01

    Safety Parameter Display and Evaluation System+ (SPADES+) was implemented to meet the requirements for the Safety Parameter Display System (SPDS), which are related to the TMI Action Plan requirements. SPADES+ continuously monitors the critical safety functions during normal, abnormal, and emergency operation modes and generates an alarm output to the alarm server when the tolerances related to safety functions are not satisfied. The alarm algorithm for the critical safety functions is performed in the NSSS Application Software (NAPS) server of the Information Process System (IPS), and the calculation result is displayed on the flat panel display (FPD) of the IPS. SPADES+ provides the critical variables to the control room operators to aid them in rapidly and reliably determining the safety status of the plant. Many database point ID names (518 points) were changed. POINT_ID is used in the programming source code, in related documents such as the SDS and SRS, and in the graphic database. To reduce human errors, computer programs and office-program macros were used. Although automatic methods were used for changing POINT_IDs, editing the change list still took a lot of time beyond building the computerized solutions. In IPS there are many more programs than SPADES+, and over 30,000 POINT_IDs are in the IPS database, so changing POINT_IDs could be a burden to software engineers. In the Ovation system database, by contrast, there is an Alias field that prevents this kind of problem. The Alias is a kind of secondary key in the database
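
    A small sketch of the Alias idea the report points to: if application code refers to a stable alias and a mapping table resolves it to the current POINT_ID, a mass rename becomes a data change rather than a source-code change. The schema and names below are illustrative, not the Ovation or IPS schema:

    ```python
    # Illustrative Alias (secondary key) table: application code keys on a
    # stable alias, so renaming a POINT_ID touches one row, not the code.
    import sqlite3

    db = sqlite3.connect(':memory:')
    db.execute('CREATE TABLE points (alias TEXT PRIMARY KEY, point_id TEXT)')
    db.execute("INSERT INTO points VALUES ('RCS_PRESSURE', 'P-0417A')")

    def resolve(alias):
        """Application code calls this instead of hard-coding POINT_IDs."""
        row = db.execute('SELECT point_id FROM points WHERE alias = ?',
                         (alias,)).fetchone()
        return row[0]

    # Renaming the point is now a single data update:
    db.execute("UPDATE points SET point_id = 'P-0417B' WHERE alias = 'RCS_PRESSURE'")
    print(resolve('RCS_PRESSURE'))  # -> P-0417B
    ```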

  12. Fiscal 1998 research report. Construction model project of the human sensory database; 1998 nendo ningen kankaku database kochiku model jigyo seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This report summarizes the fiscal 1998 research results on construction of the human sensory database. The human sensory database for evaluating working environments was constructed on the basis of measurements of human sensory data (stress and fatigue) from 400 examinees in the field (transport, control room and office) and in a laboratory. Using the newly developed standard measurement protocol for evaluating summer clothing (shirt, slacks and underwear), a database composed of the evaluation experiment results and of comparative experiment results on the human physiological and sensory data of aged and young people was constructed. The database features easy retrieval of the various related information corresponding to the requirements of tasks and use purposes. To evaluate the mass of data with large time variation, read according to use purpose for every scene, a data detection support technique was adopted that pays attention to physical and psychological variable phases and to mind and body events. The meaning of a reaction and a hint for the necessary measures are shown for every phase and event. (NEDO)

  13. COOL, LCG Conditions Database for the LHC Experiments Development and Deployment Status

    CERN Document Server

    Valassi, A; Clemencic, M; Pucciani, G; Schmidt, S A; Wache, M; CERN. Geneva. IT Department, DM

    2009-01-01

    The COOL project provides common software components and tools for the handling of the conditions data of the LHC experiments. It is part of the LCG Persistency Framework (PF), a broader project set up within the context of the LCG Application Area (AA) to devise common persistency solutions for the LHC experiments. COOL software development is the result of the collaboration between the CERN IT Department and ATLAS and LHCb, the two experiments that have chosen it as the basis of their conditions database infrastructure. COOL supports conditions data persistency using several relational technologies (Oracle, MySQL, SQLite and FroNTier), based on the CORAL Common Relational Abstraction Layer. For both experiments, Oracle is the backend used for the deployment of COOL database services at Tier0 and Tier1 sites of the LHC Computing Grid. While the development of new software functionalities is being frozen as LHC operations are ramping up, the main focus for the project in 2008 has shifted to performance optimi...

  14. Affective evolutionary music composition with MetaCompose

    DEFF Research Database (Denmark)

    Scirea, Marco; Togelius, Julian; Eklund, Peter

    2017-01-01

    This paper describes the MetaCompose music generator, a compositional, extensible framework for affective music composition. In this context ‘affective’ refers to the music generator’s ability to express emotional information. The main purpose of MetaCompose is to create music in real-time that can...

  15. DAD - Distributed Adamo Database system at Hermes

    International Nuclear Information System (INIS)

    Wander, W.; Dueren, M.; Ferstl, M.; Green, P.; Potterveld, D.; Welch, P.

    1996-01-01

    Software development for the HERMES experiment faces the challenges of many other experiments in modern High Energy Physics: complex data structures and relationships have to be processed at high I/O rates. Experimental control and data analysis are done in a distributed environment of CPUs with various operating systems and require access to different time-dependent databases, such as calibration and geometry. Slow control and experimental control need flexible inter-process communication. Program development is done in different programming languages, and interfaces to the libraries should not restrict the capabilities of the language. The need to handle complex data structures is met by the ADAMO entity-relationship model. Mixed-language programming can be provided using the CFORTRAN package. DAD, the Distributed ADAMO Database library, was developed to provide the required I/O and database functionality. (author)

  16. The GIOD Project-Globally Interconnected Object Databases

    CERN Document Server

    Bunn, J J; Newman, H B; Wilkinson, R P

    2001-01-01

    The GIOD (Globally Interconnected Object Databases) Project, a joint effort between Caltech and CERN, funded by Hewlett Packard Corporation, has investigated the use of WAN-distributed object databases and mass storage systems for LHC data. A prototype small-scale LHC data analysis center has been constructed using computing resources at Caltech's Center for Advanced Computing Research (CACR). These resources include a 256-CPU HP Exemplar of ~4600 SPECfp95, a 600 TByte High Performance Storage System (HPSS), and local/wide area links based on OC3 ATM. Using the Exemplar, a large number of fully simulated CMS events were produced and used to populate an object database with a complete schema for raw, reconstructed and analysis objects. The reconstruction software used for this task was based on early codes developed in preparation for the current CMS reconstruction program, ORCA. (6 refs).

  17. Development of Farm Records Software

    Directory of Open Access Journals (Sweden)

    M. S. Abubakar

    2017-12-01

    Full Text Available. Farm records are mostly kept manually in paper notebooks and folders, where similar records are organized in one folder or spreadsheet. These records are usually kept for many years, so they become bulky and less organized. Consequently, it becomes difficult to search and update them, and tedious and time-consuming to manage them. This study was carried out to overcome these problems associated with manual farm record keeping by developing user-friendly, easily accessible, reliable and secure software. The software was limited to record keeping in crop production, livestock production, poultry production, employees, income and expenditure. The system was implemented using JavaServer Faces (JSF) for designing the Graphical User Interface (GUI), Enterprise JavaBeans (EJB) for the logic tier, and a MySQL database for storing farm records.

  18. Neurosyphilis in Anglo-American Composers and Jazz Musicians.

    Science.gov (United States)

    Breitenfeld, Darko; Kust, Davor; Breitenfeld, Tomislav; Prpić, Marin; Lucijanić, Marko; Zibar, Davor; Hostić, Vedran; Franceschi, Maja; Bolanča, Ante

    2017-09-01

    Syphilis is a sexually transmitted, systemic disease caused by the spirochete bacterium Treponema pallidum. The most common mechanism of transmission is sexual intercourse. Although there are several hypotheses, the exact origin of the disease remains unknown. Newly published evidence suggests that the hypothesis supporting the American origin of the disease is the valid one. Among 1500 analyzed pathographies of composers and musicians, data on ten Anglo-American composers and jazz musicians who suffered from neurosyphilis (the tertiary stage of the disease) were extracted for this report. In this group of Anglo-American composers and musicians, most died from progressive paralysis while still in the creative phase of life. Additionally, diagnoses of eleven other famous neurosyphilitic composers, as well as basic biographic data on ten less-known composers who died from neurosyphilis-related progressive paralysis, are also briefly mentioned. In conclusion, neurosyphilis can cause serious neurological damage, as well as permanent disability or death, preventing further work and skill improvement.

  19. FeelSound : Collaborative Composing of Acoustic Music

    NARCIS (Netherlands)

    Fikkert, Wim; Hakvoort, Michiel; van der Vet, Paul; Nijholt, Anton

    2009-01-01

    FeelSound is a multi-user application for collaboratively composing music in an entertaining way. Up to four composers can jointly create acoustic music on a top-projection multitouch sensitive table. The notes of an acoustic instrument are represented on a harmonic table and, by drawing shapes on

  20. Composing Interfering Abstract Protocols

    Science.gov (United States)

    2016-04-01

    Tecnologia, Universidade Nova de Lisboa, Caparica, Portugal. This document is a companion technical report of the paper "Composing Interfering Abstract Protocols". The work was supported by the Fundação para a Ciência e Tecnologia (Portuguese Foundation for Science and Technology) through the Carnegie Mellon Portugal Program under grant SFRH/BD/33765.

  1. Developing a stone database for clinical practice.

    Science.gov (United States)

    Turney, Benjamin W; Noble, Jeremy G; Reynard, John M

    2011-09-01

    Our objective was to design an intranet-based database to streamline stone patient management and data collection. The system developers used a rapid development approach that removed the need for laborious and unnecessary documentation, instead focusing on producing a rapid prototype that could then be altered iteratively. By using open-source development software and website best practice, the development cost was kept very low in comparison with traditional clinical applications. Information about each patient episode can be entered via a user-friendly interface. The bespoke electronic stone database removes the need for handwritten notes, dictation, and typing. From the database, files may be automatically generated for clinic letters, operation notes, and letters to family doctors. These may be printed or e-mailed from the database. Data may be easily exported for audits, coding, and research. Data collection remains central to medical practice, to improve patient safety, to analyze medical and surgical outcomes, and to evaluate emerging treatments. Establishing prospective data collection is crucial to this process. In the current era, we have the opportunity to embrace available technology to facilitate this process. The database template could be modified for use in other clinics. The database that we have designed helps to provide a modern and efficient clinical stone service.

  2. An Algorithm for Building an Electronic Database.

    Science.gov (United States)

    Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P

    2016-01-01

    We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to complete after each case, and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we made data collection and transcription easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.

  3. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysi...

  4. Molecular formula and METLIN Personal Metabolite Database matching applied to the identification of compounds generated by LC/TOF-MS.

    Science.gov (United States)

    Sana, Theodore R; Roark, Joseph C; Li, Xiangdong; Waddell, Keith; Fischer, Steven M

    2008-09-01

    In an effort to simplify and streamline compound identification from metabolomics data generated by liquid chromatography time-of-flight mass spectrometry, we have created software for constructing Personalized Metabolite Databases with content from over 15,000 compounds pulled from the public METLIN database (http://metlin.scripps.edu/). Moreover, we have added extra functionalities to the database that (a) permit the addition of user-defined retention times as an orthogonal searchable parameter to complement accurate mass data; and (b) allow interfacing to separate software, a Molecular Formula Generator (MFG), that facilitates reliable interpretation of any database matches from the accurate mass spectral data. To test the utility of this identification strategy, we added retention times to a subset of masses in this database, representing a mixture of 78 synthetic urine standards. The synthetic mixture was analyzed and screened against this METLIN urine database, resulting in 46 accurate mass and retention time matches. Human urine samples were subsequently analyzed under the same analytical conditions and screened against this database. A total of 1387 ions were detected in human urine; 16 of these ions matched both accurate mass and retention time parameters for the 78 urine standards in the database. Another 374 had only an accurate mass match to the database, with 163 of those masses also having the highest MFG score. Furthermore, MFG calculated a formula for a further 849 ions that had no match to the database. Taken together, these results suggest that the METLIN Personal Metabolite database and MFG software offer a robust strategy for confirming the formula of database matches. In the event of no database match, it also suggests possible formulas that may be helpful in interpreting the experimental results.
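
    The two-parameter matching logic described above is conceptually simple: a database hit requires both an accurate-mass and a retention-time agreement. A toy sketch, with entries and tolerances invented for illustration:

    ```python
    # Toy accurate-mass + retention-time matcher. Database entries and
    # tolerances are illustrative, not METLIN content.
    database = [
        {'name': 'creatinine', 'mass': 113.0589, 'rt_min': 1.2},
        {'name': 'hippuric acid', 'mass': 179.0582, 'rt_min': 6.8},
    ]

    def match(observed_mass, observed_rt, ppm_tol=10.0, rt_tol_min=0.3):
        hits = []
        for entry in database:
            ppm_error = abs(observed_mass - entry['mass']) / entry['mass'] * 1e6
            if ppm_error <= ppm_tol and abs(observed_rt - entry['rt_min']) <= rt_tol_min:
                hits.append((entry['name'], round(ppm_error, 1)))
        return hits

    print(match(113.0585, 1.1))  # -> [('creatinine', 3.5)]
    ```

    An ion matching only on mass (no retention-time agreement) would fall into the weaker "accurate mass only" category the abstract reports, which is where the MFG formula score helps rank candidates.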

  5. SLIMarray: Lightweight software for microarray facility management

    Directory of Open Access Journals (Sweden)

    Marzolf Bruz

    2006-10-01

    Full Text Available. Background: Microarray core facilities are commonplace in biological research organizations, and need systems for accurately tracking various logistical aspects of their operation. Although these different needs could be handled separately, an integrated management system provides benefits in organization, automation and reduction in errors. Results: We present SLIMarray (System for Lab Information Management of Microarrays), an open-source, modular database web application capable of managing microarray inventories, sample processing and usage charges. The software allows modular configuration and is well suited for further development, providing users the flexibility to adapt it to their needs. SLIMarray Lite, a version of the software that is especially easy to install and run, is also available. Conclusion: SLIMarray addresses the previously unmet need for free and open-source software for managing the logistics of a microarray core facility.

  6. How to automatically test and validate your database backup and recovery strategy

    International Nuclear Information System (INIS)

    Gaspar Aparicio, Ruben

    2011-01-01

    The major challenge we solve with this software project is the automated validation of backups sent to tape for Oracle databases. While Oracle Recovery Manager (RMAN) provides tools like 'restore validate', the real and only certain proof is an actual restore. This initial aim evolved into providing a recovery platform capable of covering more complex use cases, such as validation of the backup strategy of Very Large DataBases (VLDB), and schema recoveries to cure logical errors or to provide database snapshots by means of exports.

  7. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text...

  8. The COMPOSE Project

    Science.gov (United States)

    Balletta, P.; Biagini, M.; Gallinaro, G.; Vernucci, A.

    2003-07-01

    This paper provides an overview of the on-going project COMPOSE, an EC co-funded project aiming to define, specify and validate an innovative mobile-services scenario in support of travellers, and to demonstrate the effectiveness of the newly proposed location-based value-added services. COMPOSE is supported by organisations belonging to numerous categories, covering, as a whole, the entire value chain of infomobility services provision to the final user. The project team comprises, in addition to the authors' affiliations, Teleatlas (NL), ARS T&TT (NL), Alcatel-Bell Space (B), Skysoft (P), Hitech Marketing (A) and MobileGis (IR). The paper describes the services that will be offered to users, encompassing both the pre-trip and the on-trip frameworks; presents the overall hybrid system architecture, including a via-satellite component based upon the Wideband-CDMA (W-CDMA) technique adopted in UMTS; discusses the access solutions envisaged for that component, permitting multiple feeder-link stations to share the CDMA multiplex capacity by directly transmitting their codes to the satellite; and illustrates the results of some computer simulations intended to assess the performance of said access solutions with regard to the effects of the inevitable up-link frequency errors and transponder non-linearity.

  9. [The development and evaluation of software to verify diagnostic accuracy].

    Science.gov (United States)

    Jensen, Rodrigo; de Moraes Lopes, Maria Helena Baena; Silveira, Paulo Sérgio Panse; Ortega, Neli Regina Siqueira

    2012-02-01

    This article describes the development and evaluation of software that verifies the accuracy of diagnoses made by nursing students. The software was based on a model that uses fuzzy logic concepts and was built with Perl, the MySQL database for Internet accessibility, and the NANDA-I 2007-2008 classification system. The software was evaluated in terms of its technical quality and usability through specific instruments. The activity proposed in the software involves four stages in which students establish the relationship values between nursing diagnoses, defining characteristics/risk factors and clinical cases. The relationship values determined by students are compared to those of specialists, generating performance scores for the students. In the evaluation, the software demonstrated satisfactory outcomes regarding technical quality and, according to the students, helped in their learning and may become an educational tool to teach the process of nursing diagnosis.

  10. Audio stream classification for multimedia database search

    Science.gov (United States)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval over huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries are continuously added to the database, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, that the audio recordings are acquired in unconstrained environments, and that it is difficult for a non-expert human user to create the ground-truth labels. In our experiments, half of all the available audio files were randomly extracted and used as the training set; the remaining ones were used as the test set. The classifier was trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset had previously been manually labeled into the three classes defined above by domain experts.
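
    A compact sketch of the classification step, using scikit-learn's decision tree (an implementation of the CART family) as a stand-in; the feature vectors are placeholders, since the paper's actual audio descriptors are not reproduced here:

    ```python
    # Sketch of a CART-style three-class audio classifier. Feature vectors
    # are placeholders for real audio descriptors (e.g. zero-crossing rate,
    # short-time energy); values below are toy data.
    from sklearn.tree import DecisionTreeClassifier

    X_train = [[0.31, 0.02], [0.05, 0.80], [0.12, 0.45]]  # toy features
    y_train = ['speech', 'music', 'song']

    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)

    print(clf.predict([[0.28, 0.05]]))  # -> ['speech'] (toy example)
    ```

    A shallow tree of this kind reduces classification to a handful of threshold tests per sample, which matches the paper's requirement of fast, threshold-based classification for a continuously updated archive.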

  11. MyMolDB: a micromolecular database solution with open source and free components.

    Science.gov (United States)

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

    Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications; the open-source applications typically have limited capabilities and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. This solution is mainly implemented in the scripting language Python, with a web-based interface for compound management and searching. Almost all searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine; thus, impressive searching speed has been achieved on large data sets, as no external CPU-consuming languages are involved in the key searching procedure. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
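
    As background for the similarity search mentioned above, a toy sketch of fingerprint-based Tanimoto similarity, the standard approach in this setting; MyMolDB's actual SQL realisation is not reproduced here, and the fingerprints below are made up:

    ```python
    # Toy fingerprint similarity search: molecules are encoded as sets of
    # feature bits and compared with the Tanimoto coefficient
    #     T(A, B) = |A & B| / |A | B|.
    def tanimoto(fp_a, fp_b):
        return len(fp_a & fp_b) / len(fp_a | fp_b)

    # Made-up fingerprints: each integer stands for a structural feature bit.
    query = {1, 4, 9, 17, 23}
    library = {'mol-001': {1, 4, 9, 17, 23, 31}, 'mol-002': {2, 8, 16}}

    for name, fp in library.items():
        score = tanimoto(query, fp)
        if score >= 0.7:  # a typical similarity cut-off
            print(name, round(score, 2))  # -> mol-001 0.83
    ```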

  12. A Time-Composable Operating System for the Patmos Processor

    DEFF Research Database (Denmark)

    Ziccardi, Marco; Schoeberl, Martin; Vardanega, Tullio

    2015-01-01

    In the last couple of decades we have witnessed a steady growth in the complexity and widespread use of real-time systems. In order to master the rising complexity in the timing behaviour of those systems, rightful attention has been given to the development of time-predictable computer architectures. A time-composable operating system, on top of a time-composable processor, facilitates incremental development, which is highly desirable for industry. This paper makes a twofold contribution. First, we present enhancements to the Patmos processor to allow achieving time composability at the operating system level. Second, we extend an existing time-composable operating system, TiCOS, to make best use of advanced Patmos hardware features in the pursuit of time composability.

  13. What's New in Software? Hot New Tool: The Hypertext.

    Science.gov (United States)

    Hedley, Carolyn N.

    1989-01-01

    This article surveys recent developments in hypertext software, a highly interactive nonsequential reading/writing/database approach to research and teaching that allows paths to be created through related materials including text, graphics, video, and animation sources. Described are uses, advantages, and problems of hypertext. (PB)

  14. A software communication tool for the tele-ICU.

    Science.gov (United States)

    Pimintel, Denise M; Wei, Shang Heng; Odor, Alberto

    2013-01-01

    The Tele Intensive Care Unit (tele-ICU) supports a high-volume, high-acuity population of patients. There is a high volume of incoming and outgoing calls through the tele-ICU hubs, especially during the evening and night hours. The tele-ICU clinicians must be able to communicate effectively with team members in order to support the care of complex and critically ill patients, while supporting and maintaining a standard to improve time to intervention. This study describes a software communication tool that will improve time to intervention over the paper-driven communication format presently used in the tele-ICU. The software provides a multi-relational database of message instances from which to mine information for evaluation and quality improvement for all entities that touch the tele-ICU. The software design incorporates years of critical care and software design experience combined with new skills acquired in an applied Health Informatics program. This software tool will function in the tele-ICU environment and perform as a front-end application that gathers, routes, and displays internal communication messages for intervention by priority and provider.

  15. Fiftieth Anniversary of the Cambridge Structural Database and Thirty Years of Its Use in Croatia

    Directory of Open Access Journals (Sweden)

    Kojić-Prodić B.

    2015-07-01

    Full Text Available. This article is dedicated to the memory of Dr. F. H. Allen and to the 50th anniversary of the Cambridge Crystallographic Data Centre (CCDC), the world-renowned centre for deposition and control of crystallographic data, including the atomic coordinates that define the three-dimensional structures of organic molecules and metal complexes containing organic ligands. The mission stated at the web site (http://www.ccdc.cam.ac.uk) is clear: "The Cambridge Crystallographic Data Centre (CCDC) is dedicated to the advancement of chemistry and crystallography for the public benefit through providing high quality information, software and services." The Cambridge Structural Database (CSD), one of the first electronic databases to be established, is nowadays one of the most significant crystallographic databases in the world. The use of this extensive and rapidly growing database requires the support of sophisticated and efficient software for checking, searching, analysing, and visualising structural data. In the International Year of Crystallography 2014, the CSD announced in December over 750,000 deposited structures. The seminal role of the CSD in research related to crystallography, chemistry, materials science, solid-state physics and chemistry, (bio)technology, life sciences, and pharmacology is widely known. Important issues for the CCDC are the accuracy of deposited data and the development of software for checking the data; to this end, the Crystallographic Information File (CIF) was introduced as the standard text file format for representing crystallographic information. Among the most important software for users is ConQuest, which enables searching of all the CSD information fields, and its web implementation, the WebCSD software. Mercury is available for visualisation of crystal structures and crystal morphology, including intra- and intermolecular interactions with graph-set notations of hydrogen bonds, and for analysis of geometrical parameters. The CCDC gives even

  16. The development of software and formation of a database on the main sources of environmental contamination in areas around nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Palitskaya, T.A.; Novikov, A.V. [Rosenergoatom Concern, Moscow (Russian Federation); Makeicheva, M.A.; Ivanov, E.A. [All-Russia Research Institute of Nuclear Power Plants, Moscow (Russian Federation)

    2004-07-01

    Providing environmental safety control in the process of nuclear power plant (NPP) operation, environmental protection and rational use of natural resources is one of the most important tasks of the Rosenergoatom Concern. To ensure environmental safety, trustworthy, complete and timely information is needed on the availability and condition of natural resources and on the quality and contamination level of the natural environment. Industrial environmental monitoring allows obtaining, processing and evaluating data for making environmentally acceptable and economically efficient decisions. The industrial environmental monitoring system at NPPs is formed taking into account both radiation and non-radiation factors of impact. Data on non-radiation factors of NPP impact are provided by a complex of special observations carried out by NPP environment protection services. The gained information is transmitted to the Rosenergoatom Concern and input to a database of the Environment Protection Division of the Concern Department of Radiation Safety, Environment Protection and Nuclear Materials Accounting. The database on the main sources of environmental contamination in the areas around NPPs will provide a high level of environmental control authenticity, maintenance of the set standards, and automation of the most labor-consuming and frequently repeated types of operations. The applied software is being developed by specialists from the All-Russia Research Institute of Nuclear Power Plants on the basis of the database management system Microsoft SQL Server, using VBA and Microsoft Access. The data will be transmitted through open communication channels. Geo-referenced digital mapping information, based on ArcGIS and MapInfo, will be the main form of output data presentation. The Federal authority bodies, their regional units and the Concern's sub-divisions involved in environmental protection activities will be the

  17. Mathis software for controlling BCAM-based monitoring and alignment systems

    CERN Document Server

    Klumb, Francis; Kautzmann, Guillaume; CERN. Geneva. ATS Department

    2016-01-01

    The MATHIS Software (Monitoring and Alignment Tracking for HIE-Isolde Software) aims at providing 3D positions of physical components of the HIE-Isolde superconducting modules, accurately and permanently measured by well-designed networks of BCAM devices (Brandeis Camera Angle Monitoring). Although it is originally intended for the HIE-Isolde project, its architecture and its use cases have been extended and optimized for more general setups. Most of the configuration data are stored either within XML-formatted files or within databases. The adaptation of MATHIS for different BCAM monitoring systems therefore does not require any further code rewriting. Moreover, the software is fully cross-platform and can either be run on the specific Linux machines driving the accelerator electronic devices, or be used on independent Windows workstations as a stand-alone software. In the first case, the software mainly relies on FESA (Front End Software Architecture) which is an object-oriented real-time framework that ens...

  18. HITCal: a software tool for analysis of video head impulse test responses.

    Science.gov (United States)

    Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás

    2015-09-01

    The developed software (HITCal) may be a useful tool for the analysis and measurement of saccadic video head impulse test (vHIT) responses, and from the experience obtained during its use the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. The aim was to develop a (software) method to analyze and explore vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed, extended head impulse exploration and measurement tools were created, and an automated saccade analysis was developed using an experimental algorithm. For pre-release laboratory tests of HITCal, a database of head impulse tests (HITs) was created from data collected retrospectively in three reference centers. This HITs database was evaluated by humans and was also computed with HITCal. The authors have successfully built HITCal and released it as open-source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccade algorithm implemented in HITCal has good concordance with the assessment by human observers (Cohen's kappa coefficient = 0.7).
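
    The abstract calls the saccade detector "an experimental algorithm" without detailing it; a common baseline for this kind of detection is a simple velocity threshold, sketched here purely for illustration (this is not HITCal's actual algorithm):

    ```python
    # Illustrative velocity-threshold saccade detector (NOT HITCal's actual
    # algorithm, which the abstract does not specify). The input is a
    # uniformly sampled eye-velocity trace in deg/s.
    def detect_saccades(velocity, threshold=60.0):
        """Return (start, end) sample indices of supra-threshold episodes."""
        episodes, start = [], None
        for i, v in enumerate(velocity):
            if abs(v) >= threshold and start is None:
                start = i                       # episode begins
            elif abs(v) < threshold and start is not None:
                episodes.append((start, i))     # episode ends
                start = None
        if start is not None:
            episodes.append((start, len(velocity)))
        return episodes

    print(detect_saccades([5, 10, 80, 150, 90, 12, 4]))  # -> [(2, 5)]
    ```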

  19. Application of NX Siemens PLM software in educational process in preparing students of engineering branch

    Science.gov (United States)

    Sadchikova, G. M.

    2017-01-01

    This article discusses the results of introducing the computer-aided design system NX by Siemens PLM Software into the classes of a higher education institution. The necessity of applying modern information technologies in teaching engineering students, and of selecting a suitable software product, is substantiated. The author describes the stages of studying the software module in relation to specific courses and considers the features of the NX software that require the creation of standard and unified product databases. The article also gives examples of research carried out by students with the various software modules.

  20. Expert software for accident identification

    International Nuclear Information System (INIS)

    Dobnikar, M.; Nemec, T.; Muehleisen, A.

    2003-01-01

    Each type of accident in a Nuclear Power Plant (NPP) causes, immediately after the start of the accident, variations of physical parameters that are typical for that type of accident, thus enabling its identification. Examples of these parameters are: decrease of reactor coolant system pressure, increase of radiation level in the containment, and increase of pressure in the containment. Expert software enabling a fast preliminary identification of the type of accident in the Krsko NPP has been developed. Selected typical parameters from the Emergency Response Data System (ERDS) of the Krsko NPP are used as input data. Based on these parameters, the expert software identifies the type of the accident and also provides the user with appropriate references (past analyses and other documentation of such an accident). The expert software is to be used as a support tool by the expert team that forms in case of an emergency at the Slovenian Nuclear Safety Administration (SNSA) with the task of determining the cause of the accident, its most probable scenario and the source term. The expert software should provide an initial identification of the event, while the final one is still to be made after appropriate assessment of the event by the expert group, considering the possibility of non-typical events, multiple causes, initial conditions, influences of operators' actions, etc. The expert software can also be used as an educational/training tool and even as a simple database of available accident analyses. (author)
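
    A toy sketch of the rule-based pattern the abstract describes, mapping parameter trends to candidate accident types; the trends and labels are illustrative only and do not reproduce the actual Krsko NPP rule base:

    ```python
    # Toy rule-based identifier: map observed parameter trends to candidate
    # accident types. Rules and labels are illustrative, not the real
    # expert-system knowledge base.
    RULES = {
        'LOCA (loss-of-coolant accident)': {
            'rcs_pressure': 'decreasing',
            'containment_pressure': 'increasing',
            'containment_radiation': 'increasing',
        },
        'steam line break': {
            'rcs_pressure': 'decreasing',
            'containment_pressure': 'increasing',
            'containment_radiation': 'steady',
        },
    }

    def identify(trends):
        """Return accident types whose rule pattern matches the observed trends."""
        return [name for name, pattern in RULES.items()
                if all(trends.get(k) == v for k, v in pattern.items())]

    print(identify({'rcs_pressure': 'decreasing',
                    'containment_pressure': 'increasing',
                    'containment_radiation': 'increasing'}))
    ```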

  1. A development and integration of the concentration database for relative method, k0 method and absolute method in instrumental neutron activation analysis using Microsoft Access

    International Nuclear Information System (INIS)

    Hoh Siew Sin

    2012-01-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the concentration of an element in a sample at the National University of Malaysia, especially by students of the Nuclear Science Program. The lack of a database service means users take longer to calculate the concentration of an element in a sample, because they depend on software developed by foreign researchers, which is costly. To overcome this problem, a study has been carried out to build INAA database software. The objective of this study is to build database software that helps users of INAA with the Relative Method and the Absolute Method for calculating element concentrations in samples, using Microsoft Excel 2010 and Microsoft Access 2010. The study also integrates k0 data, k0 Concent and k0-Westcott to execute and complete the system. After the integration, a study was conducted to test the effectiveness of the database software by comparing the concentrations from the experiments and from the database. The Triple Bare Monitors Zr-Au and Cr-Mo-Au were used in Abs-INAA as monitors to determine the thermal-to-epithermal neutron flux ratio (f). The quantities involved in determining the concentration are the net peak area (Np), the measurement time (tm), the irradiation time (tirr), the k-factor (k), the thermal-to-epithermal neutron flux ratio (f), the parameter of the epithermal neutron flux distribution (α) and the detection efficiency (εp). For the Com-INAA database, the reference material IAEA-375 Soil was used to calculate the concentration of elements in the sample; CRMs and SRMs are also used in this database. After the INAA database integration, a verification process to examine the effectiveness of Abs-INAA was carried out by comparing sample concentrations between the database and the experiment. The experimental concentration values from the INAA database software showed high accuracy and precision. ICC
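
    A minimal sketch of the comparator (relative) method calculation such software automates, in its simplest form: the sample concentration follows from the ratio of decay-corrected specific count rates of the sample and a co-irradiated standard (the full workflow also involves the flux and efficiency terms listed above; all numbers below are made up):

    ```python
    # Simplest form of the relative (comparator) method in INAA:
    #     C_sam = C_std * (Np_sam / (m_sam * D_sam)) / (Np_std / (m_std * D_std))
    # with decay factor D = exp(-lambda * t_decay). Values are made up.
    import math

    def concentration(c_std, np_sam, np_std, m_sam, m_std, lam, td_sam, td_std):
        d_sam = math.exp(-lam * td_sam)   # decay correction, sample
        d_std = math.exp(-lam * td_std)   # decay correction, standard
        rate_sam = np_sam / (m_sam * d_sam)
        rate_std = np_std / (m_std * d_std)
        return c_std * rate_sam / rate_std

    # Made-up example: standard at 50 mg/kg, equal masses and decay times.
    print(concentration(50.0, 12000, 10000, 0.1, 0.1, 2.9e-6, 3600, 3600))
    ```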

  2. Automation of plasma-process fultext bibliography databases. An on-line data-collection, data-mining and data-input system

    International Nuclear Information System (INIS)

    Suzuki, Manabu; Pichl, Lukas; Murakami, Izumi; Kato, Takako; Sasaki, Akira

    2006-01-01

    Searching for relevant data, information retrieval, data extraction and data input are time- and resource-consuming activities in most data centers. Here we develop a Linux system automating the process in case of bibliography, abstract and fulltext databases. The present system is an open-source free-software low-cost solution that connects the target and provider databases in cyberspace through various web publishing formats. The abstract/fulltext relevance assessment is interfaced to external software modules. (author)

  3. The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

    OpenAIRE

    Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria

    2016-01-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...

  4. Terverticillate penicillia studied by direct electrospray mass spectrometric profiling of crude extracts II. Database and identification

    DEFF Research Database (Denmark)

    Smedsgaard, Jørn

    1997-01-01

    A mass spectral database was built using standard instrument software from 678 electrospray mass spectra (mass profiles) from crude fungal extracts of terverticillate taxa within the genus Penicillium. The match factors calculated from searching all the mass profiles stored in the database were...
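
    Match factors of the kind mentioned above are commonly computed as a similarity between intensity profiles; the cosine-similarity sketch below is a generic illustration, not the instrument vendor's search algorithm:

```python
import math

def match_factor(spectrum_a, spectrum_b):
    """Cosine similarity (0-1000 scale) between two {m/z: intensity} profiles."""
    mzs = set(spectrum_a) | set(spectrum_b)
    dot = sum(spectrum_a.get(mz, 0.0) * spectrum_b.get(mz, 0.0) for mz in mzs)
    norm_a = math.sqrt(sum(v * v for v in spectrum_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spectrum_b.values()))
    return 1000.0 * dot / (norm_a * norm_b)

# Toy profiles (m/z: intensity); values are invented for illustration.
library_hit = {195.1: 100.0, 211.1: 40.0, 337.2: 15.0}
unknown = {195.1: 90.0, 211.1: 45.0, 339.2: 10.0}
print(round(match_factor(unknown, library_hit)))
```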

  5. Usage of data warehouse for analysing software's bugs

    Science.gov (United States)

    Živanov, Danijel; Krstićev, Danijela Boberić; Mirković, Duško

    2017-07-01

    We analysed the database schema of the Bugzilla system and, taking into account users' requirements for reporting, we presented a dimensional model for the data warehouse which will be used for reporting software defects. The idea proposed in this paper is not to throw away the Bugzilla system, because it certainly has many strengths, but to integrate Bugzilla with the proposed data warehouse. Bugzilla would continue to be used for recording bugs that occur during the development and maintenance of software, while the data warehouse would be used for storing data on bugs in a form that is more suitable for analysis.
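
    A dimensional model of this kind can be sketched directly as a star schema; the table and column names below are illustrative assumptions, not the schema from the paper (SQLite is used only to keep the sketch self-contained):

```python
import sqlite3

# Star-schema sketch for reporting on software defects (illustrative names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY, name TEXT, component TEXT);
CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, day INTEGER, month INTEGER, year INTEGER);
CREATE TABLE dim_severity (severity_id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE fact_bug (
    bug_id      INTEGER,   -- degenerate dimension: the Bugzilla bug number
    product_id  INTEGER REFERENCES dim_product(product_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    severity_id INTEGER REFERENCES dim_severity(severity_id),
    days_open   REAL       -- additive measure for reporting
);
INSERT INTO dim_product VALUES (1, 'PaymentService', 'API');
INSERT INTO dim_severity VALUES (1, 'critical');
INSERT INTO fact_bug VALUES (4711, 1, NULL, 1, 12.5);
""")
# Typical report: average resolution time per product and severity.
print(conn.execute("""
    SELECT p.name, s.label, AVG(f.days_open)
    FROM fact_bug f
    JOIN dim_product p USING (product_id)
    JOIN dim_severity s USING (severity_id)
    GROUP BY p.name, s.label
""").fetchall())
```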

  6. Primary Numbers Database for ATLAS Detector Description Parameters

    CERN Document Server

    Vaniachine, A; Malon, D; Nevski, P; Wenaus, T

    2003-01-01

    We present the design and the status of the database for detector description parameters in the ATLAS experiment. The ATLAS Primary Numbers are the parameters defining the detector geometry and digitization in simulations, as well as certain reconstruction parameters. Since the detailed ATLAS detector description needs more than 10,000 such parameters, a preferred solution is to have a single verified source for all these data. The database stores the data dictionary for each parameter collection object, providing schema evolution support for object-based retrieval of parameters. The same Primary Numbers are served to many different clients accessing the database: the ATLAS software framework Athena, the Geant3 heritage framework Atlsim, the Geant4 developers framework FADS/Goofy, the generator of XML output for detector description, and several end-user clients for interactive data navigation, including web-based browsers and ROOT. The choice of the MySQL database product for the implementation provides addition...

  7. Database Dictionary for Ethiopian National Ground-Water Database (ENGDA) Data Fields

    Science.gov (United States)

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction: This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water-level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous core or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields on both field data collection forms (provided in attachment 2 of this report) and for the ENGDA software screen entry forms (described in Litke, 2007). The data entered into each screen entry field are stored in relational database tables within the computer database. The organization of the database dictionary is designed based on field data collection and the field forms, because this is what the majority of people will use. After each field, however, the

  8. Managing Written Directives: A Software Solution to Streamline Workflow.

    Science.gov (United States)

    Wagner, Robert H; Savir-Baruch, Bital; Gabriel, Medhat S; Halama, James R; Bova, Davide

    2017-06-01

    A written directive is required by the U.S. Nuclear Regulatory Commission for any use of 131I above 1.11 MBq (30 μCi) and for patients receiving radiopharmaceutical therapy. This requirement has also been adopted and must be enforced by the agreement states. As the introduction of new radiopharmaceuticals increases therapeutic options in nuclear medicine, time spent on regulatory paperwork also increases. The pressure of managing these time-consuming regulatory requirements may heighten the potential for inaccurate or incomplete directive data and subsequent regulatory violations. To improve on the paper-trail method of directive management, we created a software tool using a Health Insurance Portability and Accountability Act (HIPAA)-compliant database. This software allows for secure data-sharing among physicians, technologists, and managers while saving time, reducing errors, and eliminating the possibility of loss and duplication. Methods: The software tool was developed using Visual Basic, which is part of the Visual Studio development environment for the Windows platform. Patient data are deposited in an Access database on a local HIPAA-compliant secure server or hard disk. Once a working version had been developed, it was installed at our institution and used to manage directives. Updates and modifications of the software were released regularly until no more significant problems were found with its operation. Results: The software has been used at our institution for over 2 y and has reliably kept track of all directives. All physicians and technologists use the software daily and find it superior to paper directives. They can retrieve active directives at any stage of completion, as well as completed directives. Conclusion: We have developed a software solution for the management of written directives that streamlines and structures the departmental workflow. This solution saves time, centralizes the information for all staff to share, and decreases

  9. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
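
    The idea of abstracting structure storage and search into method calls can be pictured with a small facade; the class and method names below are invented for illustration (the real framework is a Java library backed by the Bingo cartridge), and the naive string matching only stands in for real graph-based substructure search:

```python
# Hypothetical sketch of a structure-search facade. Names are invented for
# illustration; the actual Molecule Database Framework exposes a Java API.

class MoleculeRepository:
    def __init__(self):
        self._store = {}          # id -> SMILES string (stand-in for a DB table)
        self._next_id = 1

    def save(self, smiles):
        """Store a structure and return its database id."""
        self._store[self._next_id] = smiles
        self._next_id += 1
        return self._next_id - 1

    def substructure_search(self, fragment):
        """Return ids whose structure contains the fragment.

        Real cartridges perform graph matching; naive string containment is
        used here only to keep the sketch self-contained."""
        return [mid for mid, s in self._store.items() if fragment in s]

repo = MoleculeRepository()
repo.save("CCO")        # ethanol
repo.save("c1ccccc1O")  # phenol
print(repo.substructure_search("O"))
```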

  10. Lessons Learned from resolving massive IPS database change for SPADES+

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin-Soo [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)

    2016-10-15

    Safety Parameter Display and Evaluation System+ (SPADES+) was implemented to meet the requirements for the Safety Parameter Display System (SPDS), which are related to the TMI Action Plan requirements. SPADES+ continuously monitors the critical safety functions during normal, abnormal, and emergency operation modes and generates an alarm output to the alarm server when the tolerances related to safety functions are not satisfied. The alarm algorithm for the critical safety functions is performed in the NSSS Application Software (NAPS) server of the Information Process System (IPS) and the calculation result is displayed on the flat panel display (FPD) of the IPS. SPADES+ provides the critical variables to the control room operators to aid them in rapidly and reliably determining the safety status of the plant. Many database point ID names (518 points) were changed. POINT_ID is used in the programming source code, in related documents such as the SDS and SRS, and in the graphic database. To reduce human errors, computer programs and office-program macros were used. Although automatic methods were used for changing POINT_IDs, editing the change list still took a lot of time beyond what the computerized solutions covered. In the IPS there are many more programs than SPADES+, and over 30,000 POINT_IDs are in the IPS database, so changing POINT_IDs could be a burden to software engineers. In the case of the Ovation system database, there is an Alias field to prevent this kind of problem; the Alias is a kind of secondary key in the database.
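
    A bulk POINT_ID rename of the kind described can be scripted with a mapping table; the file layout, file extension and example IDs below are assumptions for illustration, not the actual KEPCO E&C tooling:

```python
import pathlib
import re

# Minimal sketch: apply an old->new POINT_ID mapping across text-based
# sources. Directory layout, extension and IDs are illustrative assumptions.
mapping = {"RCS_PT_0012": "RCS_PT_1012", "SG_LVL_0034": "SG_LVL_1034"}

# Longest-first ordering so overlapping names cannot partially match.
pattern = re.compile("|".join(sorted(map(re.escape, mapping), key=len, reverse=True)))

for path in pathlib.Path("src").rglob("*.c"):
    text = path.read_text()
    new_text = pattern.sub(lambda m: mapping[m.group(0)], text)
    if new_text != text:
        path.write_text(new_text)
        print(f"updated {path}")
```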

  11. A computer database system to calculate staff radiation doses and maintain records

    International Nuclear Information System (INIS)

    Clewer, P.

    1985-01-01

    A database has been produced to record the personal dose records of all employees monitored for radiation exposure in the Wessex Health Region. Currently there are more than 2000 personnel in 115 departments but the capacity of the database allows for expansion. The computer is interfaced to a densitometer for film badge reading. The hardware used by the database, which is based on a popular microcomputer, is described, as are the various programs that make up the software. The advantages over the manual card index system that it replaces are discussed. (author)

  12. A Survey of Automatic Code Generating Software

    Science.gov (United States)

    1988-09-01

    ID that matches a name in the user's database file and must then match the password specified for that user. A successful login will allow access to ... [the remainder of the excerpt is a vendor directory: Software Corp., P.O. Box 10089, Chicago, IL 60610, 312-743-2755; PRO-2, Prodata, Inc., 4477 Emerald, Suite C-100, Boise, ID 83706, 208-342-6878; GTP Allen, Emerson]

  13. Asset management: integrated software optimizes production performance

    Energy Technology Data Exchange (ETDEWEB)

    Polczer, S.

    1998-06-01

    Two new multi-dimensional databases, which expand the 'row and column' concept of spreadsheets into multiple categories of data called dimensions, are described. These integrated software packages provide the foundation for industry players such as Poco Petroleum Ltd and Numac Energy Inc to gain a competitive advantage, by overhauling their respective data collection and retrieval systems to allow for timely cost analysis and financial reporting. Energy Warehouse, an on-line analytical processing product marketed by SysGold Ltd, is one of the software products described. It gathers various sources of information, allows advanced searches and generates reports previously unavailable in other conventional financial accounting systems. The second product discussed, the Canadian Upstream Energy System (CUES), is an on-line analytical processing system developed by Oracle Corporation and Calgary-based Applied Terravision Systems (ATS) Inc. CUES combines Oracle's universal data server and software development tools with ATS's upstream financial, land, geotechnical and production applications. The software also allows for optimization of facilities, analysis of production efficiencies and comparison of performance against industry standards.

  14. Surveillance Analysis Computer System (SACS): Software requirements specification (SRS). Revision 2

    International Nuclear Information System (INIS)

    Glasscock, J.A.

    1995-01-01

    This document is the primary document establishing requirements for the Surveillance Analysis Computer System (SACS) database, an Impact Level 3Q system. SACS stores information on tank temperatures, surface levels, and interstitial liquid levels. This information is retrieved by the customer through a PC-based interface and is then available to a number of other software tools. The software requirements specification (SRS) describes the system requirements for the SACS Project, and follows the Standard Engineering Practices (WHC-CM-6-1), Software Practices (WHC-CM-3-10) and Quality Assurance (WHC-CM-4-2, QR 19.0) policies

  15. cPath: open source software for collecting, storing, and querying biological pathways

    Directory of Open Access Journals (Sweden)

    Gross Benjamin E

    2006-11-01

    Background: Biological pathways, including metabolic pathways, protein interaction networks, signal transduction pathways, and gene regulatory networks, are currently represented in over 220 diverse databases. These data are crucial for the study of specific biological processes, including human diseases. Standard exchange formats for pathway information, such as BioPAX, CellML, SBML and PSI-MI, enable convenient collection of this data for biological research, but mechanisms for common storage and communication are required. Results: We have developed cPath, an open source database and web application for collecting, storing, and querying biological pathway data. cPath makes it easy to aggregate custom pathway data sets available in standard exchange formats from multiple databases, present pathway data to biologists via a customizable web interface, and export pathway data via a web service to third-party software, such as Cytoscape, for visualization and analysis. cPath is software only, and does not include new pathway information. Key features include: a built-in identifier mapping service for linking identical interactors and linking to external resources; built-in support for PSI-MI and BioPAX standard pathway exchange formats; a web service interface for searching and retrieving pathway data sets; and thorough documentation. The cPath software is freely available under the LGPL open source license for academic and commercial use. Conclusion: cPath is a robust, scalable, modular, professional-grade software platform for collecting, storing, and querying biological pathways. It can serve as the core data handling component in information systems for pathway visualization, analysis and modeling.

  16. Two-dimensional arbitrarily shaped acoustic cloaks composed of homogeneous parts

    Science.gov (United States)

    Li, Qi; Vipperman, Jeffrey S.

    2017-10-01

    Acoustic cloaking is an important application of acoustic metamaterials. Although the topic has received much attention, there are a number of areas where contributions are needed. In this paper, a design method for producing acoustic cloaks with arbitrary shapes that are composed of homogeneous parts is presented. The cloak is divided into sections, each of which, in turn, is further divided into two parts, followed by the application of transformation acoustics to derive the required properties for cloaking. With the proposed mapping relations, the properties of each part of the cloak are anisotropic but homogeneous, which can be realized using two alternating layers of homogeneous and isotropic materials. A hexagonal and an irregular cloak are presented as design examples. The full wave simulations using COMSOL Multiphysics finite element software show that the cloaks function well at reducing reflections and shadows. The variation of the cloak properties is investigated as a function of three important geometric parameters used in the transformations. A balance can be found between cloaking performance and materials properties that are physically realizable.

  17. Software architecture and engineering for patient records: current and future.

    Science.gov (United States)

    Weng, Chunhua; Levine, Betty A; Mun, Seong K

    2009-05-01

    During the "The National Forum on the Future of the Defense Health Information System," a track focusing on "Systems Architecture and Software Engineering" included eight presenters. These presenters identified three key areas of interest in this field, which include the need for open enterprise architecture and a federated database design, net centrality based on service-oriented architecture, and the need for focus on software usability and reusability. The eight panelists provided recommendations related to the suitability of service-oriented architecture and the enabling technologies of grid computing and Web 2.0 for building health services research centers and federated data warehouses to facilitate large-scale collaborative health care and research. Finally, they discussed the need to leverage industry best practices for software engineering to facilitate rapid software development, testing, and deployment.

  18. Ground test accelerator control system software

    International Nuclear Information System (INIS)

    Burczyk, L.; Dalesio, R.; Dingler, R.; Hill, J.; Howell, J.A.; Kerstiens, D.; King, R.; Kozubal, A.; Little, C.; Martz, V.; Rothrock, R.; Sutton, J.

    1988-01-01

    This paper reports on the GTA control system that provides an environment in which the automation of a state-of-the-art accelerator can be developed. It makes use of commercially available computers, workstations, computer networks, industrial I/O equipment, and software. This system has built-in supervisory control (like most accelerator control systems), tools to support continuous control (like the process control industry), and sequential control for automatic start-up and fault recovery (like few other accelerator control systems). Several software tools support these levels of control: a real-time operating system (VxWorks) with a real-time kernel (VRTX), a configuration database, a sequencer, and a graphics editor. VxWorks supports multitasking, fast context-switching, and preemptive scheduling. VxWorks/VRTX is a network-based development environment specifically designed to work in partnership with the UNIX operating system. A database provides the interface to the accelerator components. It consists of a run-time library and a database configuration and editing tool. A sequencer initiates and controls the operation of all sequence programs (expressed as state programs). A graphics editor gives the user the ability to create color graphic displays showing the state of the machine in either text or graphics form.

  19. The Ulster Cycle: Cultural Significance for Irish Composers

    Directory of Open Access Journals (Sweden)

    Angela Goff

    2017-10-01

    For more than three hundred years, Irish composers have engaged with tales from early Irish saga-literature, which comprises four main series: the Mythological, Ulster and Fenian cycles as well as the Cycle of Kings. This literary corpus dates from 600–1200 CE and is amongst the oldest in Europe. The fragmented history of the literature reveals a continuity of tradition in that the ancient sagas evolved from the oral Irish tradition, were gradually recorded in Irish, and kept alive in modern times through translation into the English language. The timelessness and social impact of these sagas, centuries after they were documented, resonate with Irish composers through the identification of local features and/or universal themes of redemption, triumph or tragedy depicted in the literature. The focus here is on sagas from the Ulster Cycle as they have been most celebrated by Irish composers, the majority of which have been composed since Thomas Kinsella's successful translation of the Táin Bó Cuailnge in 1969. How the composers chose to embrace the Irish past lies in each composer's execution of the peculiar local and universal themes exhibited in the sagas. The aim of this article is to initiate an interdisciplinary discussion of the cultural significance of this literary corpus for Irish composers by exploring an area of Irish musicological discourse that has not been hitherto documented. A brief literary background to the Ulster Cycle leads to a discussion of what prompted the composers to engage with Ulster Cycle themes at a particular time in their respective careers. An exploration of the various stylistic features employed in selected works sheds light on the cultural ideologies that prevailed in Ireland at the time of their respective composition.

  20. Materials Inventory Database for the Light Water Reactor Sustainability Program

    Energy Technology Data Exchange (ETDEWEB)

    Kazi Ahmed; Shannon M. Bragg-Sitton

    2013-08-01

    Scientific research involves the purchasing, processing, characterization, and fabrication of many sample materials. The history of such materials can become complicated over their lifetime – materials might be cut into pieces or moved to various storage locations, for example. A database with built-in functions to track these kinds of processes facilitates well-organized research. The Material Inventory Database Accounting System (MIDAS) is an easy-to-use tracking and reference system for such items. The Light Water Reactor Sustainability Program (LWRS), which seeks to advance the long-term reliability and productivity of existing nuclear reactors in the United States through multiple research pathways, proposed MIDAS as an efficient way to organize and track all items used in its research. The database software ensures traceability of all items used in research using built-in functions which can emulate actions on tracked items – fabrication, processing, splitting, and more – by performing operations on the data. MIDAS can recover and display the complete history of any item as a simple report. To ensure the database functions suitably for the organization of research, it was developed alongside a specific experiment to test accident tolerant nuclear fuel cladding under the LWRS Advanced Light Water Reactor Nuclear Fuels Pathway. MIDAS kept track of materials used in this experiment from receipt at the laboratory through all processes, test conduct and, ultimately, post-test analysis. By the end of this process, the database proved to be the right tool for this program. The database software will help LWRS more efficiently conduct research experiments, from simple characterization tests to in-reactor experiments. Furthermore, MIDAS is a universal tool that any other research team could use to organize their material inventory.

  1. eSciMart: Web Platform for Scientific Software Marketplace

    Science.gov (United States)

    Kryukov, A. P.; Demichev, A. P.

    2016-10-01

    In this paper we suggest a design for a web marketplace where users of scientific application software and databases, presented in the form of web services, as well as their providers, will have a simultaneous presence. The model on which the web marketplace is based is close to the customer-to-customer (C2C) model, which has been successfully used, for example, on auction sites such as eBay (ebay.com). Unlike the classical C2C model, the suggested marketplace focuses on application software in the form of web services and on the standardization of the API through which application software will be integrated into the web marketplace. A prototype of such a platform, entitled eSciMart, is currently being developed at SINP MSU.

  2. Relational databases for SSC design and control

    International Nuclear Information System (INIS)

    Barr, E.; Peggs, S.; Saltmarsh, C.

    1989-01-01

    Most people agree that a database is A Good Thing, but there is much confusion in the jargon used, and in what jobs a database management system and its peripheral software can and cannot do. During the life cycle of an enormous project like the SSC, from conceptual and theoretical design, through research and development, to construction, commissioning and operation, an enormous amount of data will be generated. Some of these data, originating in the early parts of the project, will be needed during commissioning or operation, many years in the future. Two of these pressing data management needs, from the magnet research and industrialization programs and the lattice design, have prompted work on understanding and adapting commercial database practices for scientific projects. Modern relational database management systems (rDBMS's) cope naturally with a large proportion of the requirements of data structures like the SSC database structure built for the superconducting cable supplies, uses, and properties. This application is similar to the commercial applications for which these database systems were developed. The SSC application has further requirements not immediately satisfied by the commercial systems. These derive from the diversity of the data structures to be managed, the changing emphases and uses during the project lifetime, and the large amount of scientific data processing to be expected. 4 refs., 5 figs

  3. Validation and application of a physics database for fast reactor fuel cycle analysis

    International Nuclear Information System (INIS)

    McKnight, R.D.; Stillman, J.A.; Toppel, B.J.; Khalil, H.S.

    1994-01-01

    An effort has been made to automate the execution of fast reactor fuel cycle analysis, using EBR-II as a demonstration vehicle, and to validate the analysis results for application to the IFR closed fuel cycle demonstration at EBR-II and its fuel cycle facility. This effort has included: (1) the application of the standard ANL depletion codes to perform core-follow analyses for an extensive series of EBR-II runs, (2) incorporation of the EBR-II data into a physics database, (3) development and verification of software to update, maintain and verify the database files, (4) development and validation of fuel cycle models and methodology, (5) development and verification of software which utilizes this physics database to automate the application of the ANL depletion codes, methods and models to perform the core-follow analysis, and (6) validation studies of the ANL depletion codes and of their application in support of anticipated near-term operations in EBR-II and the Fuel Cycle Facility. Results of the validation tests indicate the physics database and associated analysis codes and procedures are adequate to predict required quantities in support of early phases of FCF operations

  4. Development of piping support structure design software based on PDMS

    International Nuclear Information System (INIS)

    Tang Yongtao; Guan Hui; Su Rongfu; Huang Wei; Mao Huihui

    2014-01-01

    In order to enhance the efficiency of nuclear power process system piping support design, improve the accuracy of the interfaces between supports, piping and anchors, and decrease clashes between supports and other disciplines, the piping support structure design software NPHS was developed independently on the basis of PDMS. Seamless integration of PDMS and NPHS was achieved by means of embedded development, which reduces the size of the program code and improves running efficiency. The 3D modeling and information storage of support parts were simplified, and the openness and maintainability of the support database were increased through a special database mechanism and configuration. Support modeling efficiency is improved by presetting the connection key points of support parts. Practice in several real nuclear power projects has proved that the NPHS software offers outstanding performance: fast execution, strong stability, accurate data, ease of operation and maintenance, and output results that satisfy the engineering requirements. (authors)

  5. Database Translator (DATALATOR) for Integrated Exploitation

    Science.gov (United States)

    2010-10-31

    via the Internet to Fortune 1000 clients including Mercedes Benz, Procter & Gamble, and HP. ... testing the DATALATOR experimental prototype (IRL 4) designed to demonstrate its core functions based on Next Generation Software technology. ... sources, but is not directly dependent on the platform such as database technology or data formats. In other words, there is a clear air gap between

  6. Oracle Database 11gR2 Performance Tuning Cookbook

    CERN Document Server

    Fiorillo, Ciro

    2012-01-01

    In this book you will find both examples and theoretical concepts covered. Every recipe is based on a script/procedure explained step-by-step, with screenshots, while theoretical concepts are explained in the context of the recipe, to explain why a solution performs better than another. This book is aimed at software developers, software and data architects, and DBAs who are using or are planning to use the Oracle Database, who have some experience and want to solve performance problems faster and in a rigorous way. If you are an architect who wants to design better applications, a DBA who is

  7. The JET level-1 software

    International Nuclear Information System (INIS)

    McCullen, P.A.; Farthing, J.W.

    1998-01-01

    The complex nature of the JET machine requires a large amount of control parameter preparation, selection and validation before a pulse may be started. Level-1 is defined as the centralized, cross-subsystem control of JET. Before it was introduced over 10 years ago, the Session Leader (SL), who is responsible for specifying the parameter settings for a JET pulse, had virtually no software available to help him except for a simple editor used for the creation of control waveforms. Most of the required parameter settings were calculated by hand and then passed on either verbally or via hand-written forms. These parameters were then set by a large number of people, Local Unit Responsible Officers (LUROs) and CODAS Duty Officers (CDOs), using a wide selection of dedicated software. At this time the Engineer in Charge (EiC) would largely depend on the LUROs to inform him that conditions were ready. He never set control parameters personally and had little or no software available to him to see what many of the settings were. The first implementation of Level-1 software went some way towards improving the task of pulse schedule preparation in that the SL could specify his requirements via a computer interface and store them in a database for later use. At that time the maximum number of parameters that could be handled was 500. (author)

  8. Migration check tool: automatic plan verification following treatment management systems upgrade and database migration.

    Science.gov (United States)

    Hadley, Scott W; White, Dale; Chen, Xiaoping; Moran, Jean M; Keranen, Wayne M

    2013-11-04

    Software upgrades of the treatment management system (TMS) sometimes require that all data be migrated from one version of the database to another. It is necessary to verify that the data are correctly migrated to assure patient safety. It is impossible to verify by hand the thousands of parameters that go into each patient's radiation therapy treatment plan. Repeating pretreatment QA is costly, time-consuming, and may be inadequate in detecting errors that are introduced during the migration. In this work we investigate the use of an automatic Plan Comparison Tool to verify that plan data have been correctly migrated to a new version of a TMS database from an older version. We developed software to query and compare treatment plans between different versions of the TMS. The same plan in the two TMS versions is translated into an XML schema. A plan comparison module takes the two XML schemas as input and reports any differences in parameters between the two versions of the same plan by applying a schema mapping. A console application is used to query the database to obtain a list of active or in-preparation plans to be tested. It then runs in batch mode to compare all the plans, and a report of success or failure of the comparison is saved for review. This software tool was used as part of the software upgrade and database migration from Varian's Aria 8.9 to Aria 11 TMS. Parameters were compared for 358 treatment plans in 89 minutes. This direct comparison of all plan parameters in the migrated TMS against the previous TMS surpasses current QA methods that relied on repeating pretreatment QA measurements or labor-intensive and fallible hand comparisons.
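
    A schema-mapped comparison of two plan exports can be sketched with standard XML tooling; the flat <param name="..."/> layout below is an assumed simplification for illustration, not the actual Aria export format:

```python
import xml.etree.ElementTree as ET

# Sketch: report parameter differences between two XML exports of the same
# plan. The flat <param name="..." value="..."/> layout is an assumption,
# not the actual Aria 8.9/11 export format.

def params(xml_text):
    root = ET.fromstring(xml_text)
    return {p.get("name"): p.get("value") for p in root.iter("param")}

def compare_plans(old_xml, new_xml):
    old, new = params(old_xml), params(new_xml)
    for name in sorted(old.keys() | new.keys()):
        if old.get(name) != new.get(name):
            yield name, old.get(name), new.get(name)

old_plan = '<plan><param name="gantry_angle" value="180.0"/></plan>'
new_plan = '<plan><param name="gantry_angle" value="180.1"/></plan>'
for name, before, after in compare_plans(old_plan, new_plan):
    print(f"MISMATCH {name}: {before} -> {after}")
```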

  9. Active in-database processing to support ambient assisted living systems.

    Science.gov (United States)

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
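
    The trigger-based reaction described above can be demonstrated end to end; SQLite stands in here for a full DBMS with stored procedures, and the table and event names are illustrative, not those of the paper's systems:

```python
import sqlite3

# Active-database sketch: a trigger reacts to new sensor readings inside
# the database itself, so no raw data has to leave the DBMS.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bed_sensor (ts TEXT, occupied INTEGER);
CREATE TABLE events (ts TEXT, kind TEXT);

-- Detect a bed-exit: an 'unoccupied' reading generates an event row.
CREATE TRIGGER bed_exit AFTER INSERT ON bed_sensor
WHEN NEW.occupied = 0
BEGIN
    INSERT INTO events VALUES (NEW.ts, 'bed-exit');
END;
""")
conn.execute("INSERT INTO bed_sensor VALUES ('02:13', 1)")
conn.execute("INSERT INTO bed_sensor VALUES ('02:47', 0)")
print(conn.execute("SELECT * FROM events").fetchall())  # [('02:47', 'bed-exit')]
```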

  10. Mars Global Digital Dune Database; MC-1

    Science.gov (United States)

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from crater centroid to dune field centroid was calculated, as another possible indicator of wind direction. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as an ArcReader project which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in an ArcMap project. The ArcMap project allows fuller use of the data, but requires ESRI ArcMap® software. A fuller description of the projects can be found in the NP_Dunes_ReadMe file (NP_Dunes_ReadMe folder) and the NP_Dunes_ReadMe_GIS file (NP_Documentation folder). For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. (NP_Documentation folder) Documentation files are available in PDF and ASCII (.txt) files. Tables are available in both Excel and ASCII (.txt)

  11. Technical report on the surface reconstruction of stacked contours by using the commercial software

    Science.gov (United States)

    Shin, Dong Sun; Chung, Min Suk; Hwang, Sung Bae; Park, Jin Seo

    2007-03-01

    After drawing and stacking the contours of a structure identified in serially sectioned images, a three-dimensional (3D) image can be made by surface reconstruction. Usually, custom software is composed for the surface reconstruction, and to compose it, medical doctors have to seek the help of computer engineers. In this research, therefore, surface reconstruction of stacked contours was attempted using commercial software. The purpose of this research is to enable medical doctors to perform surface reconstruction and make 3D images by themselves. The materials of this research were 996 anatomic images (1 mm intervals) of the left lower limb, which were made by serial sectioning of a cadaver. In Adobe Photoshop, contours of 114 anatomic structures were drawn and exported to Adobe Illustrator files. In Maya, the contours of each anatomic structure were stacked. In Rhino, superoinferior lines were drawn along all stacked contours to fill quadrangular surfaces between contours. In Maya, the contours were then deleted. 3D images of the 114 anatomic structures were assembled with their original locations preserved. With the surface reconstruction technique developed in this research, medical doctors themselves can make 3D images from serially sectioned images such as CTs and MRIs.

  12. Pegasys: software for executing and integrating analyses of biological sequences

    Directory of Open Access Journals (Sweden)

    Lett Drew

    2004-04-01

    Background: We present Pegasys – a flexible, modular and customizable software system that facilitates the execution and data integration from heterogeneous biological sequence analysis tools. Results: The Pegasys system includes numerous tools for pair-wise and multiple sequence alignment, ab initio gene prediction, RNA gene detection, masking repetitive sequences in genomic DNA as well as filters for database formatting and processing raw output from various analysis tools. We introduce a novel data structure for creating workflows of sequence analyses and a unified data model to store its results. The software allows users to dynamically create analysis workflows at run-time by manipulating a graphical user interface. All non-serial dependent analyses are executed in parallel on a compute cluster for efficiency of data generation. The uniform data model and backend relational database management system of Pegasys allow for results of heterogeneous programs included in the workflow to be integrated and exported into General Feature Format for further analyses in GFF-dependent tools, or GAME XML for import into the Apollo genome editor. The modularity of the design allows for new tools to be added to the system with little programmer overhead. The database application programming interface allows programmatic access to the data stored in the backend through SQL queries. Conclusions: The Pegasys system enables biologists and bioinformaticians to create and manage sequence analysis workflows. The software is released under the Open Source GNU General Public License. All source code and documentation is available for download at http://bioinformatics.ubc.ca/pegasys/.
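
    The scheduling idea (execute all non-serially-dependent analyses in parallel) can be sketched generically; this is not Pegasys code, and the node names and tasks are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Generic sketch of a workflow DAG executor: every node whose dependencies
# are satisfied runs in parallel, in the spirit of Pegasys on a cluster.
dag = {
    "mask_repeats": [],
    "gene_prediction": ["mask_repeats"],
    "blast_search": ["mask_repeats"],
    "merge_results": ["gene_prediction", "blast_search"],
}

def run(node):
    print(f"running {node}")

done = set()
with ThreadPoolExecutor() as pool:
    while len(done) < len(dag):
        ready = [n for n, deps in dag.items()
                 if n not in done and all(d in done for d in deps)]
        # All non-serially-dependent analyses execute concurrently.
        list(pool.map(run, ready))
        done.update(ready)
```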

  13. CLOUDCLOUD: general-purpose instrument monitoring and data managing software

    Science.gov (United States)

    Dias, António; Amorim, António; Tomé, António

    2016-04-01

    An effective experiment is dependent on the ability to store and deliver data and information to all participant parties regardless of their degree of involvement in the specific parts that make the experiment a whole. Having fast, efficient and ubiquitous access to data will increase visibility and discussion, such that the outcome will have already been reviewed several times, strengthening the conclusions. The CLOUD project aims at providing users with a general-purpose data acquisition, management and instrument monitoring platform that is fast, easy to use, lightweight and accessible to all participants of an experiment. This work is now implemented in the CLOUD experiment at CERN and will be fully integrated with the experiment as of 2016. Despite being used in an experiment of the scale of CLOUD, this software can also be used in any size of experiment or monitoring station, from single computers to large networks of computers, to monitor any sort of instrument output without influencing the individual instrument's DAQ. Instrument data and metadata are stored and accessed via a specially designed database architecture, and any type of instrument output is accepted using our continuously growing parsing application. Multiple databases can be used to separate different data-taking periods, or a single database can be used if, for instance, an experiment is continuous. A simple web-based application gives the user total control over the monitored instruments and their data, allowing data visualization and download, upload of processed data and the ability to edit existing instruments or add new instruments to the experiment. When in a network, new computers are immediately recognized and added to the system and are able to monitor instruments connected to them. Automatic computer integration is achieved by a locally running Python-based parsing agent that communicates with a main server application guaranteeing that all instruments assigned to that computer are

  14. Eprints Institutional Repository Software: A Review

    Directory of Open Access Journals (Sweden)

    Mike R. Beazley

    2011-01-01

    Setting up an institutional repository (IR) can be a daunting task. There are many software packages out there, some commercial, some open source, all of which offer different features and functionality. This article will provide some thoughts about one of these software packages: Eprints. Eprints was one of the first IR software packages to appear and has been available for 10 years. It is under continual development by its creators at the University of Southampton and the current version is v3.2.3. Eprints is open-source, meaning that anyone can download and make use of the software for free and the software can be modified however the user likes. This presents clear advantages for institutions with smaller budgets and also for institutions that have programmers on staff. Eprints requires some additional software to run: Linux, Apache, MySQL, and Perl. This software is all open-source and already present on the servers of many institutions. There is now a version of Eprints that will run on Windows servers as well, which will make the adoption of Eprints even easier for some. In brief, Eprints is an excellent choice for any institution looking to get an IR up and running quickly and easily. Installation is straightforward, as is the initial configuration. Once the IR is up and running, users may upload documents and provide the necessary metadata for the records by filling out a simple web form. Embargoes on published documents are handled elegantly by the software, and the software links to the SHERPA/RoMEO database so authors can easily verify their rights regarding IR submissions. Eprints has some drawbacks, which will be discussed later in the review, but on the whole it is easy to recommend to anyone looking to start an IR. However, it is less clear that an institution with an existing IR based on another software package should migrate to Eprints.

  15. On the Problem of Attribute Selection for Software Cost Estimation: Input Backward Elimination Using Artificial Neural Networks

    OpenAIRE

    Papatheocharous , Efi; Andreou , Andreas S.

    2010-01-01

    Many parameters affect the cost evolution of software projects. In the area of software cost estimation and project management the main challenge is to understand and quantify the effect of these parameters, or 'cost drivers', on the effort expended to develop software systems. This paper aims at investigating the effect of cost attributes on software development effort using empirical databases of completed projects and building Artificial Neural Network (ANN) models ...
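
    Input backward elimination of the kind named in the title can be sketched as a wrapper loop around any regressor; the synthetic data and the scikit-learn MLPRegressor below are assumptions for illustration, not the authors' exact ANN setup or project databases:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Sketch of input backward elimination: repeatedly drop the cost driver
# whose removal hurts cross-validated effort prediction the least.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                       # 5 candidate cost drivers
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=60)

def score(features):
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    return cross_val_score(model, X[:, features], y, cv=3).mean()

features = list(range(X.shape[1]))
while len(features) > 2:
    best_score, drop = max((score([f for f in features if f != d]), d)
                           for d in features)
    features.remove(drop)
    print(f"dropped driver {drop}, remaining {features}, CV R^2 {best_score:.3f}")
```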

  16. Gas Hydrate Research Database and Web Dissemination Channel

    Energy Technology Data Exchange (ETDEWEB)

    Micheal Frenkel; Kenneth Kroenlein; V Diky; R.D. Chirico; A. Kazakow; C.D. Muzny; M. Frenkel

    2009-09-30

    To facilitate advances in application of technologies pertaining to gas hydrates, a United States database containing experimentally-derived information about those materials was developed. The Clathrate Hydrate Physical Property Database (NIST Standard Reference Database #156) was developed by the TRC Group at NIST in Boulder, Colorado paralleling a highly-successful database of thermodynamic properties of molecular pure compounds and their mixtures and in association with an international effort on the part of CODATA to aid in international data sharing. Development and population of this database relied on the development of three components of information-processing infrastructure: (1) guided data capture (GDC) software designed to convert data and metadata into a well-organized, electronic format, (2) a relational data storage facility to accommodate all types of numerical and metadata within the scope of the project, and (3) a gas hydrate markup language (GHML) developed to standardize data communications between 'data producers' and 'data users'. Having developed the appropriate data storage and communication technologies, a web-based interface for both the new Clathrate Hydrate Physical Property Database, as well as Scientific Results from the Mallik 2002 Gas Hydrate Production Research Well Program was developed and deployed at http://gashydrates.nist.gov.

  17. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large-scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under Windows 3.2 were also described. One-, two- and three-dimensional spectra measured by this system were demonstrated.

  19. A dependability modeling of software under memory faults for digital system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, J. G.; Seong, P. H.

    1997-01-01

    In this work, an analytic approach to the dependability of software in the operational phase is suggested, with special attention to the effects of hardware faults on software behavior: the hardware faults considered are memory faults, and the dependability measure in question is reliability. The model is based on simple reliability theory and on graph theory, which represents the software as a graph composed of nodes and arcs. Through proper transformation, the graph can be reduced to a simple two-node graph, and the software reliability is derived from this graph. Using this model, we predict the reliability of an application software in the digital system (ILS) in a nuclear power plant and show the sensitivity of the software reliability to the major physical parameters which affect software failure in the normal operation phase. We also found that the effects of hardware faults on software failure should be considered for predicting software dependability accurately in the operational phase, especially for software which is executed frequently. This modeling method is particularly attractive for medium-size programs such as microprocessor-based nuclear safety logic programs. (author)

  20. An Area Efficient Composed CORDIC Architecture

    Directory of Open Access Journals (Sweden)

    AGUIRRE-RAMOS, F.

    2014-05-01

    This article presents a composed architecture for the CORDIC algorithm. CORDIC is a widely used technique for calculating basic trigonometric functions using only additions and shifts. The composed architecture combines an initial coarse stage that approximates the sine and cosine functions with a second stage that finely tunes those values while CORDIC operates in rotation mode. Both stages shorten the number of algorithmic steps required to fully execute the CORDIC algorithm. For comparison purposes, the Xilinx CORDIC logiCORE IP and previously reported research are used. The key objective of the proposed architecture is to reduce hardware resource usage.
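
    For reference, the rotation-mode iteration that the fine-tuning stage executes can be modeled in software as follows (a textbook CORDIC, not the paper's coarse/fine hardware architecture):

```python
import math

# Textbook rotation-mode CORDIC: computes (cos a, sin a) with shifts and adds.
# This models the algorithm itself, not the paper's hardware split.
def cordic(angle, iterations=16):
    atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = math.prod(math.cos(a) for a in atan_table)  # scaling constant K
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]
    return x * gain, y * gain                # (cos(angle), sin(angle))

print(cordic(math.pi / 5))                   # approx (0.8090, 0.5878)
```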

  1. A RESTful Web service interface to the ATLAS COOL database

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags, which are uploaded summaries of information indicating detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and retrieving from the COOL database which has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by means of interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, XML, etc.) is also determined by the HTTP protocol. The details of this layer are described along with a popular application demonstrating its use, the ATLAS run list web page.
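
    A minimal sketch of such a RESTful layer is shown below, with an in-memory dict standing in for COOL and invented resource names; only the GET/PUT/DELETE subset of the CRUD mapping is shown:

```python
import cherrypy

# Sketch of a RESTful CherryPy layer: the HTTP method selects the database
# operation. The dict stands in for COOL; names are invented for illustration.
class FlagResource:
    exposed = True          # required by MethodDispatcher

    def __init__(self):
        self.flags = {}     # run_number -> status flag (stand-in for COOL)

    @cherrypy.tools.json_out()
    def GET(self, run=None):                  # Read
        return self.flags if run is None else {run: self.flags.get(run)}

    def PUT(self, run, status):               # Update (or Create)
        self.flags[run] = status
        return f"run {run} set to {status}"

    def DELETE(self, run):                    # Delete
        self.flags.pop(run, None)
        return f"run {run} removed"

config = {"/": {"request.dispatch": cherrypy.dispatch.MethodDispatcher()}}
if __name__ == "__main__":
    cherrypy.quickstart(FlagResource(), "/flags", config)
```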

  2. EMEN2: an object oriented database and electronic lab notebook.

    Science.gov (United States)

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J

    2013-02-01

    Transmission electron microscopy and associated methods, such as single particle analysis, two-dimensional crystallography, helical reconstruction, and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source.

  3. Knowledge Uncertainty and Composed Classifier

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Vol. 1, No. 2 (2007), pp. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords: Boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science

  4. A database for operation logging of the KEK photon factory

    International Nuclear Information System (INIS)

    Pak, C.O.

    1990-01-01

    A prototype database for operation logging of the KEK Photon Factory storage ring has been constructed and tested. This paper describes the basic design of the operation logging system and its performance. Front-end control computers gather various data concerning the operation of the storage ring, and transfer them to a large general-purpose computer through a token-ring network. We have adopted a relational database system so as to save large amounts of data under daily operation. An interactive software tool was developed to retrieve data and to make graphic representations easily. (orig.)

  5. The IVTANTHERMO-Online database for thermodynamic properties of individual substances with web interface

    Science.gov (United States)

    Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.

    2018-01-01

    The database structure, main features and user interface of the IVTANTHERMO-Online system are reviewed. This system continues the series of IVTANTHERMO packages developed at JIHT RAS. It includes a database of thermodynamic properties of individual substances and related software for the analysis of experimental results, data fitting, and the calculation and estimation of thermodynamic functions and thermochemical quantities. In contrast to previous IVTANTHERMO versions, it has a new extensible database design, a client-server architecture, and a user-friendly web interface with a number of new features for online and offline data processing.
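
    As an illustration of the kind of computation such a system performs, the sketch below evaluates dimensionless heat capacity and enthalpy from fitted polynomial coefficients in the widely used NASA seven-coefficient form. IVTANTHERMO uses its own fitting format, and the coefficients here are placeholders, not database values.

      def cp_over_R(a, T):
          """Dimensionless heat capacity Cp/R from coefficients a1..a5."""
          return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

      def h_over_RT(a, T):
          """Dimensionless enthalpy H/(R*T); a6 is the integration constant."""
          return (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
                  + a[4]*T**4/5 + a[5]/T)

      # Placeholder coefficients for a diatomic-like gas (illustrative only):
      a = [3.5, 0.0, 0.0, 0.0, 0.0, -1043.0, 0.0]
      print(cp_over_R(a, 1000.0), h_over_RT(a, 1000.0))   # 3.5, ~2.46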

  6. Review of the Educational Software Evaluation Forms and Scales

    Directory of Open Access Journals (Sweden)

    Ahmet ARSLAN

    2016-12-01

    The main purpose of this study is to review existing evaluation forms and scales that have been prepared for educational software evaluation. In addition, the study aims to provide insight and guidance for future work in this context. In total, forty-two studies including evaluation forms and scales were taken into consideration. “Educational software evaluation”, “Software evaluation” and “Educational software evaluation forms/scales” were searched as keywords in the “Education Resources Information Centre (ERIC)”, “Marmara University e-Library”, “National Thesis Center” and “Science Direct” databases. Twenty-nine of the studies met the review selection criteria and were evaluated. There is an increase in the number of evaluation tools between 2006 and 2010; however, there are not enough evaluation tools targeting “educational games”. It was concluded that reliability and validity studies are a very important part of developing educational software evaluation tools, and this is a matter that should be considered in future studies.

  7. TOPDOM: database of conservatively located domains and motifs in proteins.

    Science.gov (United States)

    Varga, Julia; Dobson, László; Tusnády, Gábor E

    2016-09-01

    The TOPDOM database, originally created as a collection of domains and motifs located consistently on the same side of the membrane in α-helical transmembrane proteins, has been updated and extended to also cover consistently localized domains and motifs in globular proteins. By taking advantage of the recently developed CCTOP algorithm to determine the type of a protein and to predict topology in the case of transmembrane proteins, and by applying a thorough search for domains and motifs using the most up-to-date versions of all source databases, we achieved a 6-fold increase in the size of the whole database and a 2-fold increase in the number of transmembrane proteins. The TOPDOM database is available at http://topdom.enzim.hu. The webpage uses the common Apache, PHP5 and MySQL software stack to provide the user interface for accessing and searching the database. The database itself is generated on a high-performance computer. Contact: tusnady.gabor@ttk.mta.hu. Supplementary data are available at Bioinformatics online.

  8. Development of Web-based Software for Sorption Database

    International Nuclear Information System (INIS)

    Han, Byoung Sub; Lee, Jae Min; Seo, Min Seok; Kim, Dong Keon

    2009-08-01

    Sorption studies of radionuclides are an important part of research on radioactive waste disposal, which most countries with nuclear programs (power production, a variety of peaceful applications, and research) must address. The sorption database (DB) plays a very important role in the safety assessment of radioactive waste disposal. Opening the sorption DB to the public, by converting the previously developed program into a web-based system, provides reference material for establishing national policy. From the industrial point of view, public access to the sorption DB builds safety-related confidence in the nuclear industry: it offers credibility to the administration, local governments and nearby residents, and it also allows collected information to be entered online. In addition, it provides reference material internationally and raises international awareness of, and confidence in, the domestic sorption DB management system and its current state. To provide the information in the sorption DB to users more efficiently, the management and search capabilities of the existing sorption DB program were analyzed and improved, and a web-based management system was built to serve users. Furthermore, statistical techniques were applied so that the accuracy and error of the information can be displayed.

  9. The impact of new accelerator control software on LEP performance

    International Nuclear Information System (INIS)

    Bailey, R.; Belk, A.; Collier, P.; Lamont, M.; Rigk, G. de; Tarrant, M.

    1993-01-01

    After the first year of running LEP, it became apparent that a new generation of application software would be required for efficient long-term exploitation of the accelerator. In response to this need, a suite of accelerator control software has been developed, which is new both in style and in functionality. During 1992 this software was extensively used for driving LEP in many different operational modes, including several different optics, polarisation runs at different energies, and 8-bunch operation with Pretzels. The software has performed well and has undoubtedly enhanced the efficiency of accelerator operations. In particular, the turnaround time has been significantly reduced, giving an increase of around 20% in the integrated luminosity for the year. Furthermore, the software has made the accelerator accessible to less experienced operators. After outlining the development strategy, the overall functionality and performance of the software are discussed, with particular emphasis on improvements in operating efficiency. Some evaluation of the performance and reliability of ORACLE as an on-line database is also given

  10. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and in developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provides real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out of date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community.
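
    Programmatic access of the kind described might look like the following. The endpoint path and query parameter follow the general style of the public Neotoma web services, but both should be treated as assumptions to be checked against the current API documentation.

      import requests

      # Query the (assumed) sites service for records matching a site name.
      resp = requests.get("https://api.neotomadb.org/v2.0/data/sites",
                          params={"sitename": "Marion Lake"}, timeout=30)
      resp.raise_for_status()

      # Responses wrap the payload in a "data" list (assumed envelope format).
      for site in resp.json().get("data", []):
          print(site)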

  11. NIST/Sandia/ICDD Electron Diffraction Database: A Database for Phase Identification by Electron Diffraction.

    Science.gov (United States)

    Carr, M J; Chambers, W F; Melgaard, D; Himes, V L; Stalick, J K; Mighell, A D

    1989-01-01

    A new database containing crystallographic and chemical information designed especially for application to electron diffraction search/match and related problems has been developed. The new database was derived from two well-established x-ray diffraction databases, the JCPDS Powder Diffraction File and NBS CRYSTAL DATA, and incorporates 2 years of experience with an earlier version. It contains 71,142 entries, with space group and unit cell data for 59,612 of those. Unit cell and space group information were used, where available, to calculate patterns consisting of all allowed reflections with d-spacings greater than 0.8 Å for ~59,000 of the entries. Calculated patterns are used in the database in preference to experimental x-ray data when both are available, since experimental x-ray data sometimes omit high d-spacing data which fall at low diffraction angles. Intensity data are not given when calculated spacings are used. A search scheme using chemistry and r-spacing (reciprocal d-spacing) has been developed. Other potentially searchable data in this new database include space group, Pearson symbol, unit cell edge lengths, reduced cell edge length, and reduced cell volume. Compound and/or mineral names, formulas, and journal references are included in the output, as well as pointers to corresponding entries in NBS CRYSTAL DATA and the Powder Diffraction File where more complete information may be obtained. Atom positions are not given. Rudimentary search software has been written to implement a chemistry and r-spacing bit map search. With typical data, a full search through ~71,000 compounds takes 10-20 seconds on a PDP 11/23-RL02 system.
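
    The chemistry bit-map search can be illustrated in a few lines: each entry's element set is packed into an integer mask, and "contains all queried elements" becomes a single bitwise AND. The element table and demo entries below are toy assumptions, not the database's actual encoding.

      ELEMENTS = {"H": 0, "C": 1, "N": 2, "O": 3, "Si": 4, "Fe": 5}  # toy subset

      def mask(elements):
          """Pack a set of element symbols into one integer bit map."""
          m = 0
          for el in elements:
              m |= 1 << ELEMENTS[el]
          return m

      database = [("quartz",   mask({"Si", "O"})),
                  ("hematite", mask({"Fe", "O"})),
                  ("urea",     mask({"C", "N", "O", "H"}))]

      query = mask({"Fe"})
      # Keep entries whose chemistry includes every queried element:
      hits = [name for name, m in database if m & query == query]
      print(hits)   # ['hematite']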

  12. QuantWorm: a comprehensive software package for Caenorhabditis elegans phenotypic assays.

    Directory of Open Access Journals (Sweden)

    Sang-Kyu Jung

    Phenotypic assays are crucial in genetics; however, traditional methods that rely on human observation are unsuitable for quantitative, large-scale experiments. Furthermore, there is an increasing need for comprehensive analyses of multiple phenotypes to provide multidimensional information. Here we developed an automated, high-throughput computer imaging system for quantifying multiple Caenorhabditis elegans phenotypes. Our imaging system is composed of a microscope equipped with a digital camera and a motorized stage connected to a computer running the QuantWorm software package. Currently, the software package contains one data acquisition module and four image analysis programs: WormLifespan, WormLocomotion, WormLength, and WormEgg. The data acquisition module collects images and videos. The WormLifespan software counts the number of moving worms by using two time-lapse images; the WormLocomotion software computes the velocity of moving worms; the WormLength software measures worm body size; and the WormEgg software counts the number of eggs. To evaluate the performance of our software, we compared its results with manual measurements. We then demonstrated the application of the QuantWorm software in a drug assay and a genetic assay. Overall, the QuantWorm software provided accurate measurements at high speed. Software source code, executable programs, and sample images are available at www.quantworm.org. Our software package has several advantages over current imaging systems for C. elegans. It is an all-in-one package for quantifying multiple phenotypes. The QuantWorm software is written in Java and its source code is freely available, so it does not require commercial software or libraries. It can be run on multiple platforms and easily customized to cope with new methods and requirements.
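
    The two-image lifespan measurement can be illustrated with simple frame differencing: pixels that change between the time-lapse frames are thresholded and counted as connected regions. This is a generic sketch of the technique, with assumed threshold values; QuantWorm's published implementation (in Java) is more elaborate.

      import numpy as np
      from scipy import ndimage

      def count_moving(frame1: np.ndarray, frame2: np.ndarray,
                       threshold: int = 30, min_pixels: int = 20) -> int:
          """Count connected regions that changed between two grayscale frames."""
          # Pixels whose intensity changed by more than the threshold
          diff = np.abs(frame1.astype(int) - frame2.astype(int)) > threshold
          # Group changed pixels into connected regions (candidate worms)
          labels, n = ndimage.label(diff)
          sizes = np.asarray(ndimage.sum(diff, labels, range(1, n + 1)))
          # Ignore regions too small to be a worm (pixel-level noise)
          return int((sizes >= min_pixels).sum())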

  13. A REVIEW OF ESTIMATION OF SOFTWARE PRODUCTS DEVELOPMENT COSTS

    Directory of Open Access Journals (Sweden)

    Edin Osmanbegović

    2017-01-01

    In modern business and business-process management, the standardization of procedures creates added value and increases an organization's competitiveness and success. Estimating the budget for software development is crucial to the success of an IT project, because an unrealistic estimate leads to inadequate project plans, customer dissatisfaction, poor software quality, and reduced profits. To minimize such situations, accurate and reliable software cost estimation should be carried out at all stages of the project life cycle. Although hundreds of research articles on different methods for estimating software budgets have been published so far, there is no comprehensive review of the current situation or of research trends in this area. This paper aims to create a framework for estimating the costs of software product development by providing an overview of the most influential researchers, the most influential articles published in the WoS database, the keywords most used in searching for articles, and a review of the estimation techniques used in software budget estimates.
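
    As a concrete instance of the parametric techniques such reviews cover, here is the classic basic COCOMO model (Boehm). It is one well-known estimation method of this family, not a method proposed by the article itself.

      def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
          """Effort in person-months from delivered KLOC, basic COCOMO."""
          a, b = {"organic":       (2.4, 1.05),
                  "semi-detached": (3.0, 1.12),
                  "embedded":      (3.6, 1.20)}[mode]
          return a * kloc ** b

      print(round(basic_cocomo_effort(32), 1))   # ~91 person-months for 32 KLOC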

  14. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

    Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, making it easier to share knowledge for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and in internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered into the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collecting data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received, and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, to integrate the database into the SINPE© system, thereby providing a standardized method for collecting data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.
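
    The Master/Specific protocol relationship described above amounts to selecting subtrees from one hierarchy of data-collection items. The sketch below shows that idea with invented item names; SINPE© is a proprietary system and its real schema is not reproduced here.

      from dataclasses import dataclass, field

      @dataclass
      class Item:
          name: str
          children: list["Item"] = field(default_factory=list)

      # Invented fragment of a master protocol hierarchy:
      master = Item("aneurysm master protocol", [
          Item("patient history", [Item("smoking"), Item("hypertension")]),
          Item("physical examination", [Item("pulsatile mass")]),
          Item("treatment", [Item("open repair"), Item("endovascular repair")]),
      ])

      def select(item, wanted):
          """Build a specific protocol keeping only the wanted items."""
          kids = [s for c in item.children if (s := select(c, wanted))]
          return Item(item.name, kids) if item.name in wanted or kids else None

      # A site-specific protocol selects just the relevant items:
      specific = select(master, {"patient history", "endovascular repair"})
      print(specific)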

  15. Clinical records anonymisation and text extraction (CRATE): an open-source software system.

    Science.gov (United States)

    Cardinal, Rudolf N

    2017-04-26

    Electronic medical records contain information of value for research, but also contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research use of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records, with secure pseudonym generation, full-text indexing, and a consent-to-contact process, is possible and practical using entirely free and open-source software.
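
    Function (2), cryptographically mapping patient identifiers to research pseudonyms, is commonly done with a keyed HMAC: repeatable with the key, one-way without it. The sketch below shows that general approach; the key handling, identifier format, and pseudonym prefix are illustrative assumptions, not CRATE's exact scheme.

      import hmac, hashlib

      SECRET_KEY = b"institution-held secret"   # kept inside the trusted boundary

      def pseudonym(patient_id: str) -> str:
          """Derive a stable research identifier from a patient identifier."""
          digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
          return "RID_" + digest.hexdigest()[:16]

      # Same input and key always yield the same pseudonym, so records for one
      # patient still link together in the de-identified research database.
      print(pseudonym("NHS1234567"))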

  16. OCL2Trigger: Deriving active mechanisms for relational databases using Model-Driven Architecture

    OpenAIRE

    Al-Jumaily, Harith T.; Cuadra, Dolores; Martínez, Paloma

    2008-01-01

    16 pages, 10 figures. Issue title: "Best papers from the 2007 Australian Software Engineering Conference (ASWEC 2007), Melbourne, Australia, April 10-13, 2007". Transforming integrity constraints into active rules or triggers for verifying database consistency poses a serious and complex problem related to real-time behaviour that must be considered in any implementation. Our main contribution in this work is to provide a complete appr...
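
    The core transformation, from a declarative constraint to an active rule, can be sketched as a tiny generator. The OCL fragment, table, and SQLite-flavoured trigger below are illustrative assumptions; the paper's Model-Driven Architecture pipeline handles far more of OCL than this.

      def check_constraint_to_trigger(table: str, column: str, op: str, bound) -> str:
          """Emit a trigger that rejects inserted rows violating `column op bound`."""
          return f"""
      CREATE TRIGGER {table}_{column}_check
      BEFORE INSERT ON {table}
      FOR EACH ROW
      WHEN NOT (NEW.{column} {op} {bound})
      BEGIN
          SELECT RAISE(ABORT, 'constraint violated: {column} {op} {bound}');
      END;"""

      # e.g. the OCL invariant "context Employee inv: self.salary > 0" becomes:
      print(check_constraint_to_trigger("Employee", "salary", ">", 0))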

  17. A META-COMPOSITE SOFTWARE DEVELOPMENT APPROACH FOR TRANSLATIONAL RESEARCH

    Science.gov (United States)

    Sadasivam, Rajani S.; Tanik, Murat M.

    2013-01-01

    Translational researchers conduct research in a highly data-intensive and continuously changing environment and need to use multiple, disparate tools to achieve their goals. These researchers would greatly benefit from meta-composite software development or the ability to continuously compose and recompose tools together in response to their ever-changing needs. However, the available tools are largely disconnected, and current software approaches are inefficient and ineffective in their support for meta-composite software development. Building on the composite services development approach, the de facto standard for developing integrated software systems, we propose a concept-map and agent-based meta-composite software development approach. A crucial step in composite services development is the modeling of users’ needs as processes, which can then be specified in an executable format for system composition. We have two key innovations. First, our approach allows researchers (who understand their needs best) instead of technicians to take a leadership role in the development of process models, reducing inefficiencies and errors. A second innovation is that our approach also allows for modeling of complex user interactions as part of the process, overcoming the technical limitations of current tools. We demonstrate the feasibility of our approach using a real-world translational research use case. We also present results of usability studies evaluating our approach for future refinements. PMID:23504436

  18. A meta-composite software development approach for translational research.

    Science.gov (United States)

    Sadasivam, Rajani S; Tanik, Murat M

    2013-06-01

    Translational researchers conduct research in a highly data-intensive and continuously changing environment and need to use multiple, disparate tools to achieve their goals. These researchers would greatly benefit from meta-composite software development or the ability to continuously compose and recompose tools together in response to their ever-changing needs. However, the available tools are largely disconnected, and current software approaches are inefficient and ineffective in their support for meta-composite software development. Building on the composite services development approach, the de facto standard for developing integrated software systems, we propose a concept-map and agent-based meta-composite software development approach. A crucial step in composite services development is the modeling of users' needs as processes, which can then be specified in an executable format for system composition. We have two key innovations. First, our approach allows researchers (who understand their needs best) instead of technicians to take a leadership role in the development of process models, reducing inefficiencies and errors. A second innovation is that our approach also allows for modeling of complex user interactions as part of the process, overcoming the technical limitations of current tools. We demonstrate the feasibility of our approach using a real-world translational research use case. We also present results of usability studies evaluating our approach for future refinements.

  19. Proposal of a teaching-learning process for the course Diseño de Software as a software process

    Directory of Open Access Journals (Sweden)

    Lund, María Inés

    2014-04-01

    The course Diseño de Software (Software Design) is currently taught in the fourth year of the degree programs of the Department of Informatics of the Facultad de Ciencias Exactas, Físicas y Naturales (FCEFN) of the Universidad Nacional de San Juan (UNSJ). The course focuses mainly on object-oriented design (OOD), providing concepts and knowledge, developed theoretically and with a strong practical component, of all the software modeling diagrams provided by the Unified Modeling Language (UML), so that students thoroughly understand the goal pursued with each diagram and the cases in which it is convenient or useful to apply it. This work builds on the experience acquired in the applied teaching of OOD, using UML for modeling, with practical activities ranging from analysis through to a proposed implementation design. A model of the teaching-learning process is presented as a software process, together with the artifacts used to guide students in solving a specific software development problem; the process is specified with the SPEM 2.0 process metamodeling language, and the software process model is generated with the Eclipse Process Framework Composer (EPFC) tool.

  20. The Research Potential of the Electronic OED Database at the University of Waterloo: A Case Study.

    Science.gov (United States)

    Berg, Donna Lee

    1991-01-01

    Discusses the history and structure of the online database of the second edition of the Oxford English Dictionary (OED) and the software tools developed at the University of Waterloo to manipulate the unusually complex database. Four sample searches that indicate some types of problems that might be encountered are appended. (DB)