WorldWideScience

Sample records for nnw database file

  1. Study of neoclassical transport in LHD plasmas by applying the DCOM/NNW neoclassical transport database

    International Nuclear Information System (INIS)

    Wakasa, Arimitsu; Oikawa, Shun-ichi; Murakami, Sadayoshi

    2008-01-01

    In helical systems, neoclassical transport is an important issue in addition to anomalous transport, because of the strong temperature dependence of the heat conductivity and its important role in determining the radial electric field. A reliable tool for neoclassical transport analysis is therefore necessary for transport studies in the Large Helical Device (LHD). We have developed a neoclassical transport database for LHD plasmas, DCOM/NNW, in which mono-energetic diffusion coefficients are evaluated by the Monte Carlo method and the diffusion coefficient database is constructed by a neural network technique. The input parameters of the database are the collision frequency, the radial electric field, the minor radius, and configuration parameters (R_axis, beta value, etc.). In this paper, construction of the database including the plasma beta is investigated. A relatively large Shafranov shift occurs in finite-beta LHD plasmas, and the magnetic field configuration becomes complex, leading to a rapid increase in the number of Fourier modes in Boozer coordinates. DCOM/NNW can evaluate neoclassical transport accurately even in such a configuration with a large number of Fourier modes. The developed DCOM/NNW database is applied to a finite-beta LHD plasma, and the plasma parameter dependences of the neoclassical transport coefficients and the ambipolar radial electric field are investigated. (author)
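
    The pipeline this abstract describes — tabulate mono-energetic diffusion coefficients with Monte Carlo runs, then fit a neural network so the database can be queried cheaply — can be illustrated in miniature. The sketch below is an invented stand-in, not the authors' code: the input scalings, network size, and the fake "Monte Carlo" function are all assumptions.

```python
# Hypothetical sketch of the DCOM/NNW idea: fit a neural-network surrogate to
# Monte Carlo estimates of mono-energetic diffusion coefficients, then query
# it as a fast database. Inputs, outputs, and scalings are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Fake "Monte Carlo" training set: log collision frequency, radial electric
# field, normalized minor radius -> log mono-energetic diffusion coefficient.
X = rng.uniform([-4.0, -1.0, 0.1], [1.0, 1.0, 0.9], size=(2000, 3))

def fake_dcom(x):
    """Stand-in for the expensive Monte Carlo evaluation (invented formula)."""
    nu, er, r = x[:, 0], x[:, 1], x[:, 2]
    return nu - np.log10(1.0 + (er / (0.1 + r)) ** 2)

y = fake_dcom(X)

# The "NNW" step: train a multilayer perceptron on the tabulated results.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)

# A database lookup is now a cheap forward pass instead of a new MC run.
query = np.array([[-2.0, 0.3, 0.5]])
print("log10 D* ~", net.predict(query)[0])
```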

  2. Flat Files - JSNP | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data file name: jsnp_flat_files. File URL: ftp://ftp.biosciencedbc.jp/archiv...

  3. Data vaults: a database welcome to scientific file repositories

    NARCIS (Netherlands)

    Ivanova, M.; Kargın, Y.; Kersten, M.; Manegold, S.; Zhang, Y.; Datcu, M.; Espinoza Molina, D.

    2013-01-01

    Efficient management and exploration of high-volume scientific file repositories have become pivotal for advancement in science. We propose to demonstrate the Data Vault, an extension of the database system architecture that transparently opens scientific file repositories for efficient in-database

  4. Data Vaults: a Database Welcome to Scientific File Repositories

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); Y. Kargin (Yagiz); M.L. Kersten (Martin); S. Manegold (Stefan); Y. Zhang (Ying); M. Datcu (Mihai); D. Espinoza Molina

    2013-01-01

    Efficient management and exploration of high-volume scientific file repositories have become pivotal for advancement in science. We propose to demonstrate the Data Vault, an extension of the database system architecture that transparently opens scientific file repositories for efficient

  5. Image files - RPD | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File URL: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_gel_image.zip. File size: 38.5 MB.

  6. HCUP State Emergency Department Databases (SEDD) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Emergency Department Databases (SEDD) contain the universe of emergency department visits in participating States. Restricted access data files are...

  7. Quality Assurance Procedures for ModCat Database Code Files

    Energy Technology Data Exchange (ETDEWEB)

    Siciliano, Edward R.; Devanathan, Ram; Guillen, Zoe C.; Kouzes, Richard T.; Schweppe, John E.

    2014-04-01

    The Quality Assurance procedures used for the initial phase of the Model Catalog Project were developed to attain two objectives, referred to as “basic functionality” and “visualization.” To ensure the Monte Carlo N-Particle model input files posted into the ModCat database meet those goals, all models considered as candidates for the database are tested, revised, and re-tested.

  8. The image database management system of teaching file using personal computer

    International Nuclear Information System (INIS)

    Shin, M. J.; Kim, G. W.; Chun, T. J.; Ahn, W. H.; Baik, S. K.; Choi, H. Y.; Kim, B. G.

    1995-01-01

    For the systematic management and easy use of teaching files in a radiology department, the authors set up a database management system for teaching files using a personal computer. We used a personal computer (IBM PC compatible, 486DX2) with an image capture card (Window Vision, Dooin Elect, Seoul, Korea) and a video camera recorder (8 mm, CCD-TR105, Sony, Tokyo, Japan) for the acquisition and storage of images. We developed the database program using FoxPro for Windows 2.6 (Microsoft, Seattle, USA) running under Windows 3.1 (Microsoft, Seattle, USA). Each record consisted of hospital number, name, sex, age, examination date, keyword, radiologic examination modalities, final diagnosis, radiologic findings, references and representative images. The images were acquired and stored in bitmap format (8-bit, 540 X 390 to 545 X 414, 256 gray scale) and displayed on a 17-inch flat monitor (1024 X 768, Samtron, Seoul, Korea). Image acquisition and storage could be done simply on the reading viewbox, without special devices. The image quality on the computer monitor was lower than that of the original film on the viewbox, but the characteristics of each lesion could generally be differentiated. Easy retrieval of data was possible for the purposes of a teaching file system. Without high-cost appliances, we were able to complete an image database system for teaching files on a personal computer by a relatively inexpensive method.

  9. Database structure and file layout of Nuclear Power Plant Database. Database for design information on Light Water Reactors in Japan

    International Nuclear Information System (INIS)

    Yamamoto, Nobuo; Izumi, Fumio.

    1995-12-01

    The Nuclear Power Plant Database (PPD) has been developed at the Japan Atomic Energy Research Institute (JAERI) to provide plant design information on domestic Light Water Reactors (LWRs) for use in nuclear safety research and related work. This database runs on the mainframe computer at the JAERI Tokai Establishment. The PPD contains information on plant design concepts and on the numbers, capacities, materials, structures and types of equipment and components, etc., based on the safety analysis reports of the domestic LWRs. This report describes the details of the PPD, focusing on the database structure and the layout of data files, so that users can utilize it efficiently. (author)

  10. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data name: Database Dump. DOI: 10.18908/lsdba.nbdc00452-002. Data format: tab-separated text. File name: Database_Dump. File URL: ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump. File size: 673 MB. Number of data entries: 4 files.

  11. Characterisation of bio-aerosols during dust storm period in N-NW India

    Science.gov (United States)

    Yadav, Sudesh; Chauhan, M. S.; Sharma, Anupam

    Bio-investigations for pollen and spores were performed on dry free-fall dust and PM10 aerosol samples collected from three locations, separated by distances of up to 600 km, in the dust-storm-hit region of N-NW India. Pollen of trees, namely Prosopis (Prosopis juliflora and Prosopis cinearia), Acacia, Syzygium, Pinus, Cedrus and Holoptelea, and of shrubs, namely Ziziphus, Ricinus, Ephedra and members of the Fabaceae and Oleaceae families, was recorded in varying proportions in the samples from the different locations. Poaceae, Chenopodiaceae/Amaranthaceae, Caryophyllaceae, Brassicaceae and Cyperaceae (sedges) were some of the herb pollen identified in the samples. Among the fungal spores, Nigrospora was seen in almost all samples; it is a well-known allergen and causes health problems. The concentration of tree and shrub pollen increases in the windward direction as the climate changes from hot arid to semiarid. The higher frequency of grasses (Poaceae) or herbs could be a result either of the presence of these herbs in the sampling area, and hence higher local production of pollen/spores, or of resuspension from the exposed surface by high-intensity winds; the exact process cannot be ascertained at this stage. The overall similarity of the pollen and spore assemblages in the dust samples indicates a common connection or source(s) for the dust in this region. The presence of pollen of species of Himalayan origin in all the samples points strongly towards a Himalayan connection, direct or indirect, for the bioaerosols and hence the dust in N-NW India. To understand the transport path and the processes involved, the present study needs to be extended with a larger number of samples and with reference to meteorological parameters.

  12. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    Science.gov (United States)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available.
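
    A minimal sketch of the two strategies weighed above, using SQLite and the local filesystem as stand-ins; the table names, tile size, and layout are invented, and a real deployment would use a shared-nothing DBMS or HDFS as the talk describes.

```python
# Illustrative comparison of the two storage strategies: (a) partitioned
# sub-files stored as BLOBs inside the database, (b) database rows holding
# only pointers into an external filesystem.
import os
import sqlite3
import tempfile

tile = os.urandom(1 << 16)          # one 64 KiB partition of a larger dataset
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE blobs(tile_id INTEGER PRIMARY KEY, data BLOB)")
db.execute("CREATE TABLE files(tile_id INTEGER PRIMARY KEY, path TEXT)")

# (a) BLOB strategy: the DBMS manages placement; subsetting could run as UDFs.
db.execute("INSERT INTO blobs VALUES (?, ?)", (1, tile))

# (b) pointer strategy: the filesystem (UNIX, parallel FS, or HDFS) holds the
# bytes; the table keeps only the location, as in the external-file approach.
path = os.path.join(tempfile.mkdtemp(), "tile_0001.bin")
with open(path, "wb") as f:
    f.write(tile)
db.execute("INSERT INTO files VALUES (?, ?)", (1, path))

# A subset request resolves tile_id -> bytes either way, but ownership of
# I/O, declustering, and replication differs between the two designs.
blob = db.execute("SELECT data FROM blobs WHERE tile_id=1").fetchone()[0]
ptr = db.execute("SELECT path FROM files WHERE tile_id=1").fetchone()[0]
assert blob == open(ptr, "rb").read()
```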

  13. GRIP Database original data - GRIPDB | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data name: GRIP Database original data. DOI: 10.18908/lsdba.nbdc01665-006. Description of data contents: consists of data tables and sequences. File name: gripdb_original_data.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/gripdb/LATEST/gri...

  14. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.

  15. Compression of Index Term Dictionary in an Inverted-File-Oriented Database: Some Effective Algorithms.

    Science.gov (United States)

    Wisniewski, Janusz L.

    1986-01-01

    Discussion of a new method of index term dictionary compression in an inverted-file-oriented database highlights a technique of word coding, which generates short fixed-length codes obtained from the index terms themselves by analysis of monogram and bigram statistical distributions. Substantial savings in communication channel utilization are…
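
    The coding technique is only summarized above, so the following is a loose, assumption-laden sketch of the general idea — deriving short fixed-length codes for index terms from their own bigram statistics — and not Wisniewski's actual algorithm.

```python
# Loose sketch: rank bigrams by rarity across the index-term dictionary and
# build a short fixed-length code for each term from its rarest bigrams.
# The code-construction rule here is invented for illustration.
from collections import Counter

terms = ["database", "dataset", "index", "indexing", "inverted", "file"]
bigrams = Counter(t[i:i + 2] for t in terms for i in range(len(t) - 1))

def code(term, width=3):
    """Fixed-length code: the term's `width` rarest bigrams, concatenated."""
    bg = sorted({term[i:i + 2] for i in range(len(term) - 1)},
                key=lambda b: (bigrams[b], b))
    return "".join(bg[:width]).ljust(2 * width, "_")

for t in terms:
    print(t, "->", code(t))
```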

  16. Neutron metrology file NMF-90. An integrated database for performing neutron spectrum adjustment calculations

    International Nuclear Information System (INIS)

    Kocherov, N.P.

    1996-01-01

    The Neutron Metrology File NMF-90 is an integrated database for performing neutron spectrum adjustment (unfolding) calculations. It contains 4 different adjustment codes, the dosimetry reaction cross-section library IRDF-90/NMF-G with covariance files, 6 input data sets for reactor benchmark neutron fields, and a number of utility codes for processing and plotting the input and output data. The package consists of 9 PC HD diskettes and manuals for the codes. It is distributed by the Nuclear Data Section of the IAEA on request, free of charge. About 10 MB of disk space is needed to install and run a typical reactor neutron dosimetry unfolding problem. (author). 8 refs
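
    The adjustment (unfolding) problem such a package solves can be reduced to a toy linear example: measured dosimetry reaction rates constrain a group-wise neutron spectrum through known cross sections. The numbers and the plain least-squares approach below are illustrative assumptions; NMF-90's codes use more sophisticated, covariance-weighted methods.

```python
# Toy spectrum adjustment: given measured reaction rates
# R_i = sum_g sigma_ig * phi_g, correct a guessed group spectrum phi so the
# computed rates match the measurements. All values are invented.
import numpy as np

sigma = np.array([[1.0, 0.2, 0.0],     # 2 reactions x 3 energy groups
                  [0.1, 0.8, 0.3]])
phi_guess = np.array([1.0, 1.0, 1.0])  # prior group fluxes
R_meas = np.array([1.35, 1.10])        # measured reaction rates

# Minimum-norm correction dphi with sigma @ (phi + dphi) = R_meas.
dphi, *_ = np.linalg.lstsq(sigma, R_meas - sigma @ phi_guess, rcond=None)
phi_adj = phi_guess + dphi
print("adjusted spectrum:", phi_adj)
print("computed rates:", sigma @ phi_adj)
```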

  17. Experience with a run file archive using database technology

    International Nuclear Information System (INIS)

    Nixdorf, U.

    1993-12-01

    High Energy Physics experiments are known for producing large amounts of data; even small projects may have to manage several gigabytes of event information. One possible solution for managing this data is to use today's technology to archive the raw data files in tertiary storage and build on-line catalogs which reference interesting data. This approach has been taken by the Gammas, Electrons and Muons (GEM) Collaboration for their evaluation of muon chamber technologies at the Superconducting Super Collider Laboratory (SSCL). Several technologies were installed and tested during a 6-month period. Events produced were first recorded in the UNIX filesystem of the data acquisition system and then migrated to the Physics Detector Simulation Facility (PDSF) for long-term storage. The software system makes use of a commercial relational database management system (SYBASE) and the Data Management System (DMS), a tape archival system developed at the SSCL. The components are distributed among several machines inside and outside PDSF. A Motif-based graphical user interface (GUI) enables physicists to retrieve interesting runs from the archive using the on-line database catalog.

  18. Creating databases for biological information: an introduction.

    Science.gov (United States)

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat files, indexed files, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.

  19. CD-ROM for the PGAA-IAEA database

    International Nuclear Information System (INIS)

    Firestone, R.B.; Zerkin, V.

    2007-01-01

    Both the database of prompt gamma rays from slow neutron capture for elemental analysis and the results of this CRP are available on the accompanying CD-ROM. The file index.html is the home page for the CD-ROM, and provides links to the following information: (a) The CRP - General information, papers and reports relevant to this CRP. (b) The PGAA-IAEA database viewer - An interactive program to display and search the PGAA database by isotope, energy or capture cross-section. (c) The Database of Prompt Gamma Rays from Slow Neutron Capture for Elemental Analysis - This report. (d) The PGAA database files - Adopted PGAA database and associated files in EXCEL, PDF and Text formats. The archival databases by Lone et al. and by Reedy and Frankle are also available. (e) The Evaluated Gamma-Ray Activation File (EGAF) - The adopted PGAA database in ENSDF format. Data can be viewed with the Isotope Explorer 2.2 ENSDF Viewer. (f) The PGAA database evaluation - ENSDF format versions of the adopted PGAA database, and the Budapest and ENSDF isotopic input files. Decay scheme balance and statistical analysis summaries are provided. (g) The Isotope Explorer 2.2 ENSDF viewer - Windows software for viewing the level scheme drawings and tables provided in ENSDF format. The complete ENSDF database is included, as of December 2002. The databases and viewers are discussed in greater detail in the following sections

  20. Database for waste glass composition and properties

    International Nuclear Information System (INIS)

    Peters, R.D.; Chapman, C.C.; Mendel, J.E.; Williams, C.G.

    1993-09-01

    A database of waste glass composition and properties, called PNL Waste Glass Database, has been developed. The source of data is published literature and files from projects funded by the US Department of Energy. The glass data have been organized into categories and corresponding data files have been prepared. These categories are glass chemical composition, thermal properties, leaching data, waste composition, glass radionuclide composition and crystallinity data. The data files are compatible with commercial database software. Glass compositions are linked to properties across the various files using a unique glass code. Programs have been written in database software language to permit searches and retrievals of data. The database provides easy access to the vast quantities of glass compositions and properties that have been studied. It will be a tool for researchers and others investigating vitrification and glass waste forms

  1. Web-mediated database for internet-based dental radiology teaching files constructed by 5th-year undergraduate students

    International Nuclear Information System (INIS)

    Kito, Shinji; Wakasugi-Sato, Nao; Matsumoto-Takeda, Shinobu; Oda, Masafumi; Tanaka, Tatsurou; Fukai, Yasuhiro; Tokitsu, Takatoshi; Morimoto, Yasuhiro

    2009-01-01

    To provide oral healthcare for patients of all ages, dental welfare environments and the technical aspects of dentistry have evolved and developed, and dental education must also diversify. Student-centered voluntary education and the establishment of a life-long self-learning environment are becoming increasingly important in the changing world of dental education. In this article, we introduce a new process for the construction of a web-mediated database containing internet-based teaching files on the normal radiological anatomy of panoramic radiographs and CT images of the oral and maxillofacial regions, as well as a system for the delivery of visual learning materials through an intra-faculty local network. This process was developed by our 5th-year undergraduate students. Animated CT scan images were produced using Macintosh iPhoto and iMovie software. Normal anatomical images of panoramic radiographs and CT scans were produced using Adobe Illustrator CS and Adobe Photoshop CS. The web database was constructed using Macromedia Dreamweaver MX and Microsoft Internet Explorer. This project was the basis of our participation in the Student Clinician Research Program (SCRP). At Kyushu Dental College, we developed a new series of teaching files on the web. Uploading these teaching files to the internet allowed many individuals to access the information, and viewers can easily select the area of study that they wish to examine. These results suggest that our laboratory practice is a useful tool for promoting students' motivation and improving life-long self-learning in dental radiology. We expect that many medical and dental students, practitioners and patients will be able to use our teaching files to learn about the normal radiological anatomy of the oral and maxillofacial regions. (author)

  2. Reference - PLACE | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File name: place_reference.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/place/LATEST/...

  3. First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS

    CERN Document Server

    The ATLAS collaboration; Barberis, Dario; Gallas, Elizabeth; Rybkin, Grigori; Rinaldi, Lorenzo; Aperio Bella, Ludovica; Buttinger, William

    2017-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has thus far, for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, and because ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure, with the potential for deployment during Run 2.

  4. First Use of LHC Run 3 Conditions Database Infrastructure for Auxiliary Data Files in ATLAS

    CERN Document Server

    Aperio Bella, Ludovica; The ATLAS collaboration

    2016-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has thus far, for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. This, along with the fact that ADF data is effectively read by the software as binary objects, makes this class of data ideal for testing the proposed Run 3 Conditions data infrastructure now in development. This paper will describe this implementation as well as the lessons learned in exploring and refining the new infrastructure, with the potential for deployment during Run 2.

  5. Exon - ASTRA | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data contents: exons in variants. File name: astra_exon.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/a...

  6. About Libraries - AcEST | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data format: text file. File name: acest_library.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/acest/LATEST/acest_library.zip. File size: 2 KB. Simple search URL: http://togodb.biosciencedbc.jp/togodb/view/archiv...

  7. Movie collection - TogoTV | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File name: movie. File URL: ftp://ftp.biosciencedbc.jp/archive/togotv/movie/. File size: 200 GB. Number of data entries: 1169.

  8. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service - Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  9. Mapping data - KOME | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Mapping against data of the International Rice Genome Sequencing Project (IRGSP). File name: kome_mapping_data.zip. File URL: ftp://ftp.biosciencedbc.jp/archiv...

  10. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
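
    A toy rendering of the graph data model described above, in which files and their relationships are both first-class records; Quasar itself is an XPath-extended language, which this plain-Python traversal only gestures at. All file names and relationship labels are invented.

```python
# Files and their relationships as first-class records: queries traverse
# links instead of walking a directory hierarchy.
files = {
    "run042.dat":  {"type": "raw", "detector": "muon"},
    "run042.cal":  {"type": "calib"},
    "plot042.png": {"type": "derived"},
}
edges = [  # (source, relationship, target)
    ("run042.cal", "calibrates", "run042.dat"),
    ("plot042.png", "derived_from", "run042.dat"),
]

def related(target, rel):
    """All files linked to `target` by relationship `rel`."""
    return [s for s, r, t in edges if t == target and r == rel]

# "Which files were derived from the raw muon run?"
print(related("run042.dat", "derived_from"))   # ['plot042.png']
```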

  11. CAGE peaks - FANTOM5 | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File name: CAGE_peaks. File URL: ftp://ftp.biosciencedbc.jp/archive/fantom...

  12. Protocol - RPD | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File name: rpd_protocol_jp.zip (Japanese version). File URL: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_protocol_jp.zip. File size: 535 KB. File name: rpd_protocol_en.zip (English version). File URL: ftp://ftp.biosciencedbc.jp/archiv...

  13. Main - KOME | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data contents: list of datasets. File name: kome_main.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/kome...

  14. BRC - MicrobeDB.jp | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File name: brc.tar.gz. File URL: ftp://ftp.biosciencedbc.jp/archive/microbedb/LATEST/brc.ta... Covers strains in JCM.

  15. SRA - MicrobeDB.jp | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File name: sra.tar.gz. File URL: ftp://ftp.biosciencedbc.jp/archive/microbedb/L...

  16. Merging Multiple Files in SPSS/PC (Menggabungkan Beberapa File Dalam SPSS/PC)

    Directory of Open Access Journals (Sweden)

    Syahrudji Naseh

    2012-09-01

    Full Text Available. Computer software can basically be divided into five broad groups: word processing, spreadsheets, databases, statistics, and animation/desktop publishing. Each has its strengths and weaknesses. dBase III+, the most popular database package, can hold only 128 variables per file. Consequently, for a large questionnaire such as Susenas (the National Socio-Economic Survey) or SKRT (the Household Health Survey), the data cannot be kept in a single file; it is usually split into many files, for example file1.dbf, file2.dbf, and so on. The problem is how to merge selected variables in file1.dbf with selected variables in file5.dbf. This paper discusses that problem, as sketched below.
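
    The merge the paper asks about can be expressed directly in a modern data tool. A hypothetical sketch with pandas, using an invented household-ID key; in SPSS itself the analogous operation is a MATCH FILES join on a shared BY variable.

```python
# Join two survey files column-wise on a shared household identifier,
# sidestepping the 128-variable limit of a single dBase III+ file.
# Column names and values are invented for illustration.
import pandas as pd

file1 = pd.DataFrame({"hh_id": [1, 2, 3], "province": ["31", "32", "33"]})
file5 = pd.DataFrame({"hh_id": [1, 2, 3], "income": [1200, 950, 1710]})

merged = file1.merge(file5, on="hh_id", how="inner")
print(merged)
```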

  17. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available. Abstract. Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
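
    The gzip baseline the authors compare against is easy to reproduce. The sketch below compresses a synthetic FASTA-style flat file and reports the ratio; it does not implement coil's edit-tree coding, and the data is randomly generated.

```python
# Establish the standard Lempel-Ziv (gzip) baseline on a FASTA-style flat
# file of EST-like records. Record counts and lengths are invented.
import gzip
import random

random.seed(1)
records = [
    f">EST{i:06d}\n" + "".join(random.choice("ACGT") for _ in range(400)) + "\n"
    for i in range(500)
]
flat = "".join(records).encode("ascii")

compressed = gzip.compress(flat, compresslevel=9)
print(f"raw: {len(flat)} B, gzip: {len(compressed)} B, "
      f"ratio: {len(flat) / len(compressed):.2f}x")
```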

  18. FileMaker Pro 11 The Missing Manual

    CERN Document Server

    Prosser, Susan

    2010-01-01

    This hands-on, friendly guide shows you how to harness FileMaker's power to make your information work for you. With a few mouse clicks, the FileMaker Pro 11 database helps you create and print corporate reports, manage a mailing list, or run your entire business. FileMaker Pro 11: The Missing Manual helps you get started, build your database, and produce results, whether you're running a business, pursuing a hobby, or planning your retirement. It's a thorough, accessible guide for new, non-technical users, as well as those with more experience. Start up: Get your first database up and running…

  19. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
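
    A minimal flavor of the integrity model described above: content fetched through untrusted HTTP proxy caches is checked against a secure hash published by the trusted source. CVMFS and Frontier actually use signed catalogs and X.509 certificates; only the hash-verification step is sketched here, with invented payloads.

```python
# Verify data served through an untrusted cache against a digest published
# by the trusted source. The payload and fetch function are stand-ins.
import hashlib

published_digest = hashlib.sha256(b"conditions payload v7").hexdigest()

def fetch_via_proxy():
    # Stand-in for an HTTP proxy cache that might serve stale/corrupt bytes.
    return b"conditions payload v7"

blob = fetch_via_proxy()
ok = hashlib.sha256(blob).hexdigest() == published_digest
print("integrity verified" if ok else "reject: hash mismatch")
```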

  20. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  1. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  2. PREIMS - AT Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data from the Targeted Proteins Research Program (TPRP). File name: at_atlas_preims.zip. File URL: ftp://ftp.biosciencedbc.jp/archiv...

  3. 75 FR 4689 - Electronic Tariff Filings

    Science.gov (United States)

    2010-01-29

    ... elements ``are required to properly identify the nature of the tariff filing, organize the tariff database... (or other pleading) and the Type of Filing code chosen will be resolved in favor of the Type of Filing...'s wish expressed in its transmittal letter or in other pleadings, the Commission may not review a...

  4. IAEA/NDS requirements related to database software

    International Nuclear Information System (INIS)

    Pronyaev, V.; Zerkin, V.

    2001-01-01

    Full text: The Nuclear Data Section (NDS) of the IAEA disseminates data to its users through the Internet or on CD-ROMs and diskettes. The OSU Web server on a DEC Alpha with OpenVMS and the Oracle/DEC DBMS provides, via CGI scripts and FORTRAN retrieval programs, access to the main nuclear databases supported by the networks of Nuclear Reaction Data Centres and Nuclear Structure and Decay Data Centres (CINDA, EXFOR, ENDF, NSR, ENSDF). For Web access to data from other libraries and files, hyperlinks to the files stored in ASCII text or other formats are used. Databases on CD-ROM are usually provided with some retrieval system. They are distributed in run-time mode and comply with all license requirements for software used in their development. Although major development work is now done on PCs with MS Windows and Linux, NDS may not at present, due to some institutional conditions, use these platforms to organize Web access to the data. Starting at the end of 1999, the NDS, in co-operation with other data centers, began to work out a strategy for migrating the main network nuclear databases onto platforms other than DEC Alpha/OpenVMS/DBMS. Because the different co-operating centers have their own preferences for hardware and software, the requirement to provide maximum platform independence for nuclear databases is the most important and desirable feature. This requirement determined some standards for nuclear database software development. Taking into account the present state and future development, these standards can be formulated as follows: 1. All numerical data (experimental, evaluated, recommended values and their uncertainties) prepared for inclusion in the IAEA/NDS nuclear database should be submitted in the form of ASCII text files and will be kept at NDS as a master file. 2. Databases with complex structure should be submitted in the form of files with standard SQL statements describing all their components. All extensions of standard SQL
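
    Requirement 2 above — shipping a database as plain files of standard SQL statements — is exactly what a dump facility produces. A small illustration with sqlite3 follows; the table and values are invented, and the NDS libraries are of course not SQLite.

```python
# A database serialized as plain ASCII SQL statements can be re-created on
# any platform with a conforming SQL engine.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE xs(isotope TEXT, energy REAL, value REAL)")
src.execute("INSERT INTO xs VALUES ('U-235', 0.0253, 584.4)")  # invented row

dump = "\n".join(src.iterdump())      # standard SQL, pure ASCII text
print(dump)

dst = sqlite3.connect(":memory:")     # "another platform"
dst.executescript(dump)
print(dst.execute("SELECT * FROM xs").fetchall())
```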

  5. N-MORB petrology and geochemistry of the metagabbro of Rio Olivares, NNW sector of Manizales (Caldas)

    International Nuclear Information System (INIS)

    Toro Toro, Luz Mary; Hincapie Jaramillo, Gustavo; Ossa Meza, Cesar Augusto

    2010-01-01

    The Rio Olivares metagabbro is a body of intrusive igneous rocks that outcrops along the Rio Olivares, NNW of the city of Manizales (Department of Caldas, Colombia). The body occurs as a series of centimetre- to metre-sized faulted slivers within the western sector of the Quebradagrande Complex. Petrographic analyses show rocks with cumulate and isotropic gabbroic textures. The primary minerals are calcium plagioclase and clinopyroxene; the secondary minerals are amphibole, chlorite, epidote, plagioclase and lesser quartz, carbonate and occasionally opaque minerals. According to the distribution of major elements, these rocks were generated by fractional crystallization of a single magma showing the typical trend of the tholeiitic series. The behavior of trace elements in geotectonic discrimination diagrams indicates that they were generated in an ocean-floor setting, with sources derived from an N-MORB segment of the upper mantle. REE patterns normalized to chondrite are relatively homogeneous and flat, enriched up to 10 times relative to typical N-MORB. These rocks form part of the oceanic basement of the Early Cretaceous Quebradagrande Complex and are affected by mylonitization and ocean-floor metamorphism.

  6. ORF information - KOME | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_orf_infomation.zip. File size: 526 KB.

  7. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  8. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1983-01-01

    Database management techniques are applied to automating the setup of operating parameters of a heavy-ion accelerator used in nuclear physics experiments. Data files consist of ion-beam attributes, the interconnection assignments of the numerous power supplies and magnetic elements that steer the ions' path through the system, the data values that represent the electrical currents supplied by the power supplies, as well as the positions of motors and status of mechanical actuators. The database is relational and permits searching on ranges of any subset of the ion-beam attributes. A file selected from the database is used by the control software to replicate the ion beam conditions by adjusting the physical elements in a continuous manner

  9. Information of the markers in each chromosome - RGP caps | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data items include: Comment; Image file names 1-4 (gel electrophoresis image files).

  10. Image File - TP Atlas | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data name: Image File. DOI: 10.18908/lsdba.nbdc01161-004. Description of data contents: network diagrams (in PNG format) for each project; one project has one pathway file...

  11. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  12. PSCID List - PSCDB | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. File name: pscdb_pscid_list.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/pscdb/LATEST/pscdb_pscid_list.zip. File size: 24.4 KB.

  13. Phenome data - High-sugar stress - DGBY | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. Data name: Phenome data - High-sugar stress. Original file: High-sugar_stress.xls. File size: 90 KB. File URL: ftp://ftp.biosciencedbc.jp/archive/dgby/LATEST/Hi...

  14. High School and Beyond: Twins and Siblings' File Users' Manual, User's Manual for Teacher Comment File, Friends File Users' Manual.

    Science.gov (United States)

    National Center for Education Statistics (ED), Washington, DC.

    These three users' manuals are for specific files of the High School and Beyond Study, a national longitudinal study of high school sophomores and seniors in 1980. The three files are computerized databases that are available on magnetic tape. As one component of base year data collection, information identifying twins, triplets, and some non-twin…

  15. YAC clone information - RGP physicalmap | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available. DOI: 10.18908/lsdba.nbdc00318-06-002. Description of data contents: YAC clones selected with DNA markers. File name: rgp_physicalmap_yac_clones.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/rgp-physicalmap/LATEST/rgp_physical... Simple search URL: http://togodb.biosciencedbc.jp/togodb/view/rgp_physicalmap_yac_clones#en. Data items include chromosome number, region number, the rice physical map image file name, and order.

  16. TabSQL: a MySQL tool to facilitate mapping user data to public databases.

    Science.gov (United States)

    Xia, Xiao-Qin; McClelland, Michael; Wang, Yipeng

    2010-06-23

    With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data.
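
    The workflow TabSQL supports — import a tab-delimited annotation file into a SQL engine, then join it against the user's own table — can be sketched as follows. This uses sqlite3 rather than TabSQL's MySQL backend, and the table and column names are invented.

```python
# Load a tab-delimited annotation table and join it against user data,
# without writing a custom parser for each flat file.
import csv
import io
import sqlite3

annotation_tsv = "gene_id\tsymbol\nENSG01\tTP53\nENSG02\tBRCA1\n"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE anno(gene_id TEXT, symbol TEXT)")
rows = csv.reader(io.StringIO(annotation_tsv), delimiter="\t")
next(rows)                                    # skip the header line
db.executemany("INSERT INTO anno VALUES (?, ?)", rows)

db.execute("CREATE TABLE user_data(gene_id TEXT, fold_change REAL)")
db.execute("INSERT INTO user_data VALUES ('ENSG01', 2.4)")

print(db.execute("""SELECT u.gene_id, a.symbol, u.fold_change
                    FROM user_data u JOIN anno a USING (gene_id)""").fetchall())
```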

  17. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  18. Hanford Site technical baseline database. Revision 1

    International Nuclear Information System (INIS)

    Porter, P.E.

    1995-01-01

    This report lists the Hanford specific files (Table 1) that make up the Hanford Site Technical Baseline Database. Table 2 includes the delta files that delineate the differences between this revision and revision 0 of the Hanford Site Technical Baseline Database. This information is being managed and maintained on the Hanford RDD-100 System, which uses the capabilities of RDD-100, a systems engineering software system of Ascent Logic Corporation (ALC). This revision of the Hanford Site Technical Baseline Database uses RDD-100 version 3.0.2.2 (see Table 3). Directories reflect those controlled by the Hanford RDD-100 System Administrator. Table 4 provides information regarding the platform. A cassette tape containing the Hanford Site Technical Baseline Database is available

  19. Application Program Interface for the Orion Aerodynamics Database

    Science.gov (United States)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide the developers of software an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have only included data tables and a document of the algorithm and equations to combine them for the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities for built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to and verification of the table lookup routines. This may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems.
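
    The core operation such an API wraps is interpolation over tabulated aerodynamic coefficients. A one-dimensional sketch follows; the real CEV API is ANSI C with its own multi-dimensional tables and file format, and the breakpoints and values below are invented.

```python
# Table lookup with linear interpolation over tabulated aero coefficients.
# The breakpoints and drag values are illustrative, not CEV data.
import numpy as np

mach_breakpoints = np.array([0.3, 0.6, 0.9, 1.2, 2.0])
cd_table = np.array([0.95, 0.98, 1.10, 1.35, 1.20])   # drag coefficient

def lookup_cd(mach):
    """1-D table lookup with linear interpolation, clamped at the ends."""
    return float(np.interp(mach, mach_breakpoints, cd_table))

print(lookup_cd(1.05))   # between the 0.9 and 1.2 breakpoints
```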

  20. Virtual file system for PSDS

    Science.gov (United States)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and able to accumulate to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower-performance optical media based on a least-frequently-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
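
    The migration policy named above can be shown in miniature: files are ranked by access frequency, the least-frequently-used ones are demoted to optical media, and the database records where each file lives. Tier names, capacities, and counts below are invented.

```python
# Least-frequently-used demotion from a fast tier to optical storage,
# with a catalog tracking each file's current location.
from collections import Counter

access_counts = Counter({"specA.pdf": 42, "specB.pdf": 3, "specC.pdf": 1})
fast_tier_capacity = 2   # how many files the high-performance tier may hold

ranked = [f for f, _ in access_counts.most_common()]
fast_tier = ranked[:fast_tier_capacity]
optical_tier = ranked[fast_tier_capacity:]      # migrated, still cataloged

catalog = {f: ("magnetic" if f in fast_tier else "optical") for f in ranked}
print(catalog)   # the database records where each file currently lives
```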

  1. FileMaker Pro 9

    CERN Document Server

    Coffey, Geoff

    2007-01-01

    FileMaker Pro 9: The Missing Manual is the clear, thorough and accessible guide to the latest version of this popular desktop database program. FileMaker Pro lets you do almost anything with the information you give it. You can print corporate reports, plan your retirement, or run a small country -- if you know what you're doing. This book helps non-technical folks like you get in, get your database built, and get the results you need. Pronto.The new edition gives novices and experienced users the scoop on versions 8.5 and 9. It offers complete coverage of timesaving new features such as the Q

  2. EST data - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: red_est.zip File URL: ftp://ftp.biosciencedbc.jp/archive/red/LATEST/red_est.zip File size: 629 KB

  3. Images - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: rpsd_images.zip File URL: ftp://ftp.biosciencedbc.jp/archive/rpsd/LATEST/rpsd_images.zip

  4. Main - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: place_main.zip File URL: ftp://ftp.biosciencedbc.jp/archive/place/LATEST/place_main.zip

  5. GPCR Interaction - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...related information (disease etc.). Data file File name: gripdb_main.zip File URL: ftp://ftp.biosciencedbc.jp/archive/gripdb/LATEST/gripdb_main.zip Simple search URL http://togodb.biosciencedbc.jp/togodb/view/gripdb_main#en Data acquisition method: PDB, RefSeq Number of data entries: 409 entries Data items: GRIP ID (interaction ID); Main Title (interaction title)

  6. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Science.gov (United States)

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of age of big data and advances in high throughput technology accessing data has become one of the most important step in the entire knowledge discovery process. Most users are not able to decipher the query result that is obtained when non specific keywords or a combination of keywords are used. Intelligent access to sequence and structure databases (IASSD) is a desktop application for windows operating system. It is written in Java and utilizes the web service description language (wsdl) files and Jar files of E-utilities of various databases such as National Centre for Biotechnology Information (NCBI) and Protein Data Bank (PDB). Apart from that IASSD allows the user to view protein structure using a JMOL application which supports conditional editing. The Jar file is freely available through e-mail from the corresponding author.

  7. Patient Assessment File (PAF)

    Data.gov (United States)

    Department of Veterans Affairs — The Patient Assessment File (PAF) database compiles the results of the Patient Assessment Instrument (PAI) questionnaire filled out for intermediate care Veterans...

  8. Global Mammal Parasite Database version 2.0.

    Science.gov (United States)

    Stephens, Patrick R; Pappalardo, Paula; Huang, Shan; Byers, James E; Farrell, Maxwell J; Gehman, Alyssa; Ghai, Ria R; Haas, Sarah E; Han, Barbara; Park, Andrew W; Schmidt, John P; Altizer, Sonia; Ezenwa, Vanessa O; Nunn, Charles L

    2017-05-01

    Illuminating the ecological and evolutionary dynamics of parasites is one of the most pressing issues facing modern science, and is critical for basic science, the global economy, and human health. Extremely important to this effort are data on the disease-causing organisms of wild animal hosts (including viruses, bacteria, protozoa, helminths, arthropods, and fungi). Here we present an updated version of the Global Mammal Parasite Database, a database of the parasites of wild ungulates (artiodactyls and perissodactyls), carnivores, and primates, and make it available for download as complete flat files. The updated database has more than 24,000 entries in the main data file alone, representing data from over 2700 literature sources. We include data on sampling method and sample sizes when reported, as well as both "reported" and "corrected" (i.e., standardized) binomials for each host and parasite species. Also included are current higher taxonomies and data on transmission modes used by the majority of species of parasites in the database. In the associated metadata we describe the methods used to identify sources and extract data from the primary literature, how entries were checked for errors, methods used to georeference entries, and how host and parasite taxonomies were standardized across the database. We also provide definitions of the data fields in each of the four files that users can download. © 2017 by the Ecological Society of America.

  9. Solubility - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: esol.zip File URL: ftp://ftp.biosciencedbc.jp/archive/esol/LATEST/esol.zip

  10. Microcomputer Database Management Systems for Bibliographic Data.

    Science.gov (United States)

    Pollard, Richard

    1986-01-01

    Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)

  11. The flux database concerted action

    International Nuclear Information System (INIS)

    Mitchell, N.G.; Donnelly, C.E.

    1999-01-01

    This paper summarizes the background to the UIR action on the development of a flux database for radionuclide transfer in soil-plant systems. The action is discussed in terms of the objectives, the deliverables and the progress achieved so far by the flux database working group. The paper describes the background to the current initiative and outlines specific features of the database and supporting documentation. Particular emphasis is placed on the proforma used for data entry, on the database help file and on the approach adopted to indicate data quality. Refs. 3 (author)

  12. Reliability analysis of a replication with limited number of journaling files

    International Nuclear Information System (INIS)

    Kimura, Mitsutaka; Imaizumi, Mitsuhiro; Nakagawa, Toshio

    2013-01-01

    Recently, replication mechanisms using journaling files have been widely used in server systems. We have already discussed a model of an asynchronous replication system using journaling files [8]. This paper formulates a stochastic model of a server system with replication, considering the number of transmitted journaling files. The server updates the storage database and transmits the journaling file when a client requests a data update. The server transmits the database content to a backup site either at a constant time or after a constant number of transmitted journaling files. We derive the expected numbers of replications and of transmitted journaling files. Further, we calculate the expected cost and discuss the optimal replication interval to minimize it. Finally, numerical examples are given.

  13. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  14. Recent developments and object-oriented approach in FTU database

    International Nuclear Information System (INIS)

    Bertocchi, A.; Bracco, G.; Buceti, G.; Centioli, C.; Iannone, F.; Manduchi, G.; Nanni, U.; Panella, M.; Stracuzzi, C.; Vitale, V.

    2001-01-01

    During the last two years, the experimental database of the Frascati Tokamak Upgrade (FTU) has been changed from several points of view, in particular: (i) the data and the analysis codes have been moved from the IBM mainframe to Unix platforms, enabling the users to take advantage of the large quantities of commercial and free software available under Unix (Matlab, IDL, etc.); (ii) AFS (Andrew File System) has been chosen as the distributed file system, making the data available on all the nodes and distributing the workload; (iii) a 'one measure/one file' philosophy (vs. the previous 'one pulse/one file') has been adopted, increasing the number of files in the database but, at the same time, allowing the most important data to be available just after the plasma discharge. The client-server architecture has been tested using the signal viewer client jScope. Moreover, an object-oriented data model (OODM) of FTU experimental data has been tried: a generalized model of tokamak experimental data has been developed with typical concepts such as abstraction, encapsulation, inheritance, and polymorphism. The model has been integrated with data coming from different databases, building an Object Warehouse to extract, with data mining techniques, meaningful trends and patterns from huge amounts of data.

  15. Main - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data ... as rice. Data file File name: rpsd_main_sjis.zip File URL: ftp://ftp.biosciencedbc.jp/archive/rpsd/LATEST/r.../ftp.biosciencedbc.jp/archive/rpsd/LATEST/rpsd_main_utf8.zip File size: 120 KB Simple search URL http://togo...nse Update History of This Database Site Policy | Contact Us Main - RPSD | LSDB Archive ...

  16. Patient Treatment File (PTF)

    Data.gov (United States)

    Department of Veterans Affairs — This database is part of the National Medical Information System (NMIS). The Patient Treatment File (PTF) contains a record for each inpatient care episode provided...

  17. NVST Data Archiving System Based On FastBit NoSQL Database

    Science.gov (United States)

    Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo

    2014-06-01

    The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand observational files in a day. Given the large number of files, effective archiving and retrieval of files becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared to a relational database (i.e., MySQL; My Structured Query Language), the FastBit database shows distinct advantages in indexing and querying performance. In a large scale database of 40 million records, the multi-field combined query response time of the FastBit database is about 15 times faster and fully meets the requirements of the NVST. Our study offers a new approach to massive astronomical data archiving and would contribute to the design of data management systems for other astronomical telescopes.

  18. OPS index - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: kome_ops_index.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_ops_index.zip

  19. Network File - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file. For CSML (Cell System Markup Language), see also the CSML website. CSML files may be graphically viewed and simulated at Cell Illustrator Online. For these pieces of software, see also the Cell Illustrator website or the Cell Illustrator Online website. Legend in Fundamental Biology; Legend in Medicine/Pharmacology

  20. Construction of image database for newspaper articles using CTS

    Science.gov (United States)

    Kamio, Tatsuo

    Nihon Keizai Shimbun, Inc. developed a system that automatically builds an image database of newspaper articles using CTS (Computer Typesetting System). Besides the articles and headlines input into CTS, it reproduces the images of elements such as photographs and graphs for each article according to position information on the page. In effect, the computer itself clips the articles out of the newspaper. The image database is accumulated in magnetic and optical files and is output to users' facsimile machines. With the diffusion of CTS, the number of newspaper companies building article databases is increasing rapidly; this system is the first attempt to construct such a database automatically. This paper describes the CTS equipment that supports this system and gives an outline of it.

  1. EST Table - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Sequences registered to the public database as of September 2011. Data file File name: kaiko_cdna_main.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kaiko-cdna/LATEST/kaiko_cdna_main.zip File size: 157 MB Simple search URL http://togodb.biosciencedbc.jp/togodb/view/kaiko_cdna_main#en

  2. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    Science.gov (United States)

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  3. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX

    International Nuclear Information System (INIS)

    Sanchez, E.; Milligen, B.Ph. van

    1997-01-01

    Several tools have been developed to access the TJ-I and TJ-IU databases, which currently reside on VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC and those FORTRAN unformatted files defined herein from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author) 5 refs

  4. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Data, Databases, and the Software Engineering Process; Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle; Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database; Data and Data Models; Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models; The Hierarchical Model; The Network Model; The Relational Model; The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional...

  5. Automated testing of arrhythmia monitors using annotated databases.

    Science.gov (United States)

    Elghazzawi, Z; Murray, W; Porter, M; Ezekiel, E; Goodall, M; Staats, S; Geheb, F

    1992-01-01

    Arrhythmia-algorithm performance is typically tested using the AHA and MIT/BIH databases. The tools for this test are simulation software programs. While these simulations provide rapid results, they neglect hardware and software effects in the monitor. To provide a more accurate measure of performance in the actual monitor, a system has been developed for automated arrhythmia testing. The testing system incorporates an IBM-compatible personal computer, a digital-to-analog converter, an RS232 board, a patient-simulator interface to the monitor, and a multi-tasking software package for data conversion and communication with the monitor. This system "plays" patient data files into the monitor and saves beat classifications in detection files. Tests were performed using the MIT/BIH and AHA databases. Statistics were generated by comparing the detection files with the annotation files. These statistics were marginally different from those that resulted from the simulation. Differences were then examined. As expected, the differences were related to monitor hardware effects.
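
    The scoring step (comparing detection files against annotation files to generate statistics) typically matches each detected beat to a reference beat within a tolerance window. The sketch below shows one common way to do this; the 150 ms window and the two statistics chosen (sensitivity and positive predictivity) are conventional assumptions, not values taken from the paper.

```python
def score(reference_ms, detected_ms, window_ms=150):
    """Match detections to annotations within a time window; return (Se, +P)."""
    ref, det = sorted(reference_ms), sorted(detected_ms)
    i = j = tp = 0
    while i < len(ref) and j < len(det):
        if abs(ref[i] - det[j]) <= window_ms:
            tp += 1; i += 1; j += 1      # matched beat pair
        elif det[j] < ref[i]:
            j += 1                       # unmatched detection -> false positive
        else:
            i += 1                       # unmatched annotation -> missed beat
    fn = len(ref) - tp
    fp = len(det) - tp
    sensitivity = tp / (tp + fn) if ref else 1.0
    ppv = tp / (tp + fp) if det else 1.0
    return sensitivity, ppv

print(score([100, 900, 1700], [110, 1690, 2500]))   # -> (0.666..., 0.666...)
```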

  6. Enhancers - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data List Contact us FANTOM...d phase2.0 Data file File name: Enhancers File URL: ftp://ftp.biosciencedbc.jp/archive/fantom...load License Update History of This Database Site Policy | Contact Us Enhancers - FANTOM5 | LSDB Archive ...

  7. VariVis: a visualisation toolkit for variation databases

    Directory of Open Access Journals (Sweden)

    Smith Timothy D

    2008-04-01

    Full Text Available Abstract Background With the completion of the Human Genome Project and recent advancements in mutation detection technologies, the volume of data available on genetic variations has risen considerably. These data are stored in online variation databases and provide important clues to the cause of diseases and potential side effects or resistance to drugs. However, the data presentation techniques employed by most of these databases make them difficult to use and understand. Results Here we present a visualisation toolkit that can be employed by online variation databases to generate graphical models of gene sequence with corresponding variations and their consequences. The VariVis software package can run on any web server capable of executing Perl CGI scripts and can interface with numerous Database Management Systems and "flat-file" data files. VariVis produces two easily understandable graphical depictions of any gene sequence and matches these with variant data. While developed with the goal of improving the utility of human variation databases, the VariVis package can be used in any variation database to enhance utilisation of, and access to, critical information.

  8. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    ...export the results to a data carrier file and then process the results stored in the file using data processing software. In this work, we propose to save the simulation results directly from a simulation tool to a computer database. We implemented a link between the discrete-event simulation tool and the database and performed a performance evaluation of 3 different open-source database systems. We show that, with the right choice of a database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space, when compared to using the simulation software's built-in...
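
    The core idea (insert each result sample into a database from within the simulation loop, then post-process with queries rather than file parsing) can be sketched as follows. The table layout and the callback name are illustrative assumptions; the paper's own schema and the specific open-source DBMSs it evaluated are not reproduced here.

```python
import sqlite3

db = sqlite3.connect("sim_results.db")
db.execute("""CREATE TABLE IF NOT EXISTS results
              (run_id INTEGER, sim_time REAL, metric TEXT, value REAL)""")

def on_sample(run_id, sim_time, metric, value):
    # Called from the simulation event loop for every recorded sample,
    # instead of appending a line to an export file.
    db.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
               (run_id, sim_time, metric, value))

for t in range(1000):                        # stand-in for the event loop
    on_sample(1, t * 0.01, "queue_len", t % 7)
db.commit()

# Post-processing becomes a query instead of a file parse:
print(db.execute(
    "SELECT AVG(value) FROM results WHERE metric = 'queue_len'").fetchone())
```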

  9. SIDS-to-ADF File Mapping Manual

    Science.gov (United States)

    McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)

    2002-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of

  10. A Centralized Control and Dynamic Dispatch Architecture for File Integrity Analysis

    Directory of Open Access Journals (Sweden)

    Ronald DeMara

    2006-02-01

    Full Text Available The ability to monitor computer file systems for unauthorized changes is a powerful administrative tool. Ideally this task could be performed remotely under the direction of the administrator to allow on-demand checking, and use of tailorable reporting and exception policies targeted to adjustable groups of network elements. This paper introduces M-FICA, a Mobile File Integrity and Consistency Analyzer, as a prototype to achieve this capability using mobile agents. The M-FICA file tampering detection approach uses MD5 message digests to identify file changes. Two agent types, Initiator and Examiner, are used to perform file integrity tasks. An Initiator travels to client systems, computes file digests, then stores those digests in a database file located on write-once media. An Examiner agent computes new digests to compare with the original digests in the database file. Changes in digest values indicate that the file contents have been modified. The design and evaluation results for a prototype developed in the Concordia agent framework are described.
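
    The digest-and-compare scheme is easy to illustrate: a baseline pass records MD5 digests, and a later verification pass recomputes them and reports any file whose digest changed. In this sketch a JSON file stands in for M-FICA's write-once database file, and the function names Initiator/Examiner merely echo the paper's agent roles; nothing here is the actual Concordia-based implementation.

```python
import hashlib, json, os

def md5_of(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def initiator(root, baseline="baseline.json"):
    # Record a digest for every file under root (the "Initiator" pass).
    digests = {p: md5_of(p) for p in
               (os.path.join(d, f) for d, _, fs in os.walk(root) for f in fs)}
    with open(baseline, "w") as out:
        json.dump(digests, out)

def examiner(baseline="baseline.json"):
    # Recompute digests and list files that changed or vanished (the "Examiner" pass).
    with open(baseline) as src:
        old = json.load(src)
    return [p for p, digest in old.items()
            if not os.path.exists(p) or md5_of(p) != digest]

initiator("./watched_tree")      # illustrative path: record digests once
print(examiner())                # later: list files whose contents changed
```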

  11. Mars Global Digital Dune Database; MC-1

    Science.gov (United States)

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~ 845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore, the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification.

  12. Some aspects of the file organization and retrieval strategy in large data-bases

    International Nuclear Information System (INIS)

    Arnaudov, D.D.; Govorun, N.N.

    1977-01-01

    Methods of organizing a big information retrieval system are described. Special attention is paid to the file organization. An adapting file structure is described in more detail. The discussed method gives one the opportunity to organize large files in such a way that the response time of the system can be minimized as the file grows. In connection with the retrieval strategy, a method is proposed which uses the frequencies of the descriptors and of descriptor pairs to forecast the expected number of relevant documents. Programmes based on these methods have been written and are used in the information retrieval systems of JINR.

  13. Development of operation management database for research reactors

    International Nuclear Information System (INIS)

    Zhang Xinjun; Chen Wei; Yang Jun

    2005-01-01

    An operation database for a pulsed reactor has been developed on the Microsoft Visual C++ 6.0 platform. This database includes four function modules: fuel element management, incident management, experiment management and file management. It is essential for reactor security and information management. (authors)

  14. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized, with one version for the IBM PC and one for the Mac. Each database is accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files, one for each of the five types of tested configurations. The contents of each provide significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  15. List of isozyme loci - RGP gmap98 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data detail Data name: List of isozyme loci DOI: 10.18908/lsdb... Description: ...the present high-density linkage map, and that were putatively identified as isozyme genes. Data file File name: rgp_gmap98_isozyme_loci.zip File URL: ftp://ftp.biosciencedbc.jp/archive/...gmap98/LATEST/rgp_gmap98_isozyme_loci.zip File size: 611 B Simple search URL http://togodb.biosciencedbc.jp/... Clones ...0001 were considered as functionally identical clones, and we have selected the ones that hit the isozyme genes.

  16. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of the operating parameters required to produce an ion beam. However, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters.

  17. The Consolidated Human Activity Database — Master Version (CHAD-Master) Technical Memorandum

    Science.gov (United States)

    This technical memorandum contains information about the Consolidated Human Activity Database -- Master version, including CHAD contents, inventory of variables: Questionnaire files and Event files, CHAD codes, and references.

  18. Users' satisfaction with the use of electronic database in university ...

    African Journals Online (AJOL)

    Users' satisfaction with the use of electronic database in university libraries in north ... file of digitized information (bibliographic records, abstracts, full-text documents, ... managed with the aid of database management system (DBMS) software.

  19. The new ENSDF search system NESSY: IBM/PC nuclear spectroscopy database

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.

    1996-01-01

    The universal relational nuclear structure and decay database NESSY (New ENSDF Search SYstem), developed for the IBM PC and compatible PCs and based on the international file ENSDF (Evaluated Nuclear Structure Data File), is described. NESSY provides high-efficiency processing (the search and retrieval of any kind of physical data) of the information from ENSDF. The principles of the database development are described and examples of applications are presented. (orig.)

  20. Alignment - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: sahg_alignment.zip File URL: ftp://ftp.biosciencedbc.jp/archive/sahg/LATEST/sahg_alignment.zip File size: 12.0 MB

  1. The ALADDIN atomic physics database system

    International Nuclear Information System (INIS)

    Hulse, R.A.

    1990-01-01

    ALADDIN is an atomic physics database system which has been developed in order to provide a broadly-based standard medium for the exchange and management of atomic data. ALADDIN consists of a data format definition together with supporting software for both interactive searches as well as for access to the data by plasma modeling and other codes. The ALADDIN system is designed to offer maximum flexibility in the choice of data representations and labeling schemes, so as to support a wide range of atomic physics data types and allow natural evolution and modification of the database as needs change. Associated dictionary files are included in the ALADDIN system for data documentation. The importance of supporting the widest possible user community was also central to the ALADDIN design, leading to the use of straightforward text files with concatenated data entries for the file structure, and the adoption of strict FORTRAN 77 code for the supporting software. This will allow ready access to the ALADDIN system on the widest range of scientific computers, and easy interfacing with FORTRAN modeling codes, user-developed atomic physics codes and databases, etc. This supporting software consists of the ALADDIN interactive searching and data display code, together with the ALPACK subroutine package which provides ALADDIN datafile searching and data retrieval capabilities to users' codes.

  2. Yucca Mountain Project bibliography, 1988--1989

    International Nuclear Information System (INIS)

    Lorenz, J.J.

    1990-11-01

    This bibliography contains information on the Yucca Mountain Project that was added to the Department of Energy's Energy Data Base from January 1988 through December 1989. This supplement also includes a new section which provides information about publications on the Energy Data Base that were not sponsored by the project but have some relevance to it. The bibliography is categorized by principal project participating organization. Participant-sponsored subcontractor reports, papers, and articles are included in the sponsoring organization's list. Indexes are provided for Corporate Author, Personal Author, Subject, Contract Number, Report Number, Order Number Correlation, and Key Word in Context. All entries in the Yucca Mountain Project bibliographies are searchable online on the NNW database file. This file can be accessed through the Integrated Technical Information System (ITIS) of the US Department of Energy (DOE). Technical reports on the Yucca Mountain Project are on display in special open files at participating Nevada Libraries and in the Public Document Room of the US Department of Energy, Nevada Operations Office, in Las Vegas

  3. Locus - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: astra_locus.zip File URL: ftp://ftp.biosciencedbc.jp/archive/astra/LATEST/astra_locus.zip File size: 887 KB Data items include splicing type (ex. cassette)

  4. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database; TOPICAL

    International Nuclear Information System (INIS)

    Brown, S

    2001-01-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam. The data sets within this database are provided in three file formats: ARC/INFO(trademark) exported integer grids, ASCII (American Standard Code for Information Interchange) files formatted for raster-based GIS software packages, and generic ASCII files with x, y coordinates for use with non-GIS software packages

  5. Protein - AT Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: at_atlas_protein.zip File URL: ftp://ftp.biosciencedbc.jp/archive/at_atlas/LATEST/at_atlas_protein.zip

  6. (reprocessed)CAGE peaks - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...(hg38/mm10). Data file File name: (reprocessed)CAGE_peaks (Homo sapiens) File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/... File name: (reprocessed)CAGE_peaks (Mus musculus) File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/...

  7. All 5' EST - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Description of data contents: 5' EST sequences. Data file File name: CSV: kome_est_5end_all.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_est_5end_all.zip FASTA: ...fasta.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_est_5end_...

  8. All 3' EST - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Description of data contents: 3' EST sequences. Data file File name: CSV: kome_est_3end_all.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_est_3end_all.zip FASTA: ...fasta.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_est_3end_...

  9. Designing for Peta-Scale in the LSST Database

    Science.gov (United States)

    Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.

    2007-10-01

    The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
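
    The horizontal partitioning idea can be illustrated in miniature: rows are routed to per-chunk tables by a spatial key, so a query only touches the chunks that overlap its region and the chunks can live on different servers or disks. The fixed declination-band chunking below is a deliberate simplification for illustration, not the actual LSST partitioning scheme.

```python
import sqlite3

db = sqlite3.connect(":memory:")
N_CHUNKS = 18                                    # 10-degree declination bands

def chunk_id(dec_deg):
    return min(int((dec_deg + 90) // 10), N_CHUNKS - 1)

for c in range(N_CHUNKS):
    db.execute(f"CREATE TABLE objects_{c} (id INTEGER, ra REAL, dec REAL, mag REAL)")

def insert(obj_id, ra, dec, mag):
    # Route each row to the table for its spatial chunk.
    db.execute(f"INSERT INTO objects_{chunk_id(dec)} VALUES (?, ?, ?, ?)",
               (obj_id, ra, dec, mag))

def query_dec_range(lo, hi):
    # Only chunks overlapping [lo, hi] are scanned; on a real cluster each
    # chunk query would run on its own server in parallel.
    rows = []
    for c in range(chunk_id(lo), chunk_id(hi) + 1):
        rows += db.execute(
            f"SELECT * FROM objects_{c} WHERE dec BETWEEN ? AND ?", (lo, hi)).fetchall()
    return rows

insert(1, 10.0, -42.0, 21.5); insert(2, 20.0, 33.0, 19.9)
print(query_dec_range(-45, -40))                 # hits a single chunk table
```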

  10. Results of de-novo and Motif activity analyses - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Motifs (... JASPAR). Data file File name: Motifs File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/phase1.3...

  11. Curve collection, extension of databases

    International Nuclear Information System (INIS)

    Gillemot, F.

    1992-01-01

    Full text: Databases generally contain calculated data only, while the original measurements are diagrams, so information is lost between them. Because the underlying research (e.g. irradiation, aging, creep) is expensive, the original curves should be stored for reanalysis. The format of the stored curves: (a) data in ASCII files, numbers only; (b) other information in strings in a second file with the same name but a different extension. The extension shows the type of the test and the type of the file. Examples: TEN is tensile information, TED is tensile data, CHN is Charpy information, CHD is Charpy data. Storing techniques: digitalised measurements, and digitalising old curves stored on paper. Uses: making catalogues, reanalysing, comparison with new data. Tools: mathematical software packages like Quattro, Genplot, Excel, Mathcad, QBasic, Pascal, Fortran, Matlab, Grapher, etc. (author)

  12. NCPC Central Files Information System (CFIS)

    Data.gov (United States)

    National Capital Planning Commission — This dataset contains records from NCPC's Central Files Information System (CFIS), which is a comprehensive database of projects submitted to NCPC for design review...

  13. CAGE_peaks_annotation - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: CAGE_peaks_annotation File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/...

  14. Main data - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: rmg_main.zip File URL: ftp://ftp.biosciencedbc.jp/archive/rmg/LATEST/rmg_main.zip File size: 1 KB Simple search URL http://togodb.biosciencedbc.jp/...

  15. Systematic pseudopotentials from reference eigenvalue sets for DFT calculations: Pseudopotential files

    Directory of Open Access Journals (Sweden)

    Pablo Rivero

    2015-06-01

    Full Text Available We present in this article a pseudopotential (PP) database for DFT calculations in the context of the SIESTA code [1–3]. Comprehensive optimized PPs in two formats (psf files and input files for the ATM program) are provided for 20 chemical elements for LDA and GGA exchange-correlation potentials. Our data represents a validated database of PPs for SIESTA DFT calculations. Extensive transferability tests guarantee the usefulness of these PPs.

  16. The IPE Database: providing information on plant design, core damage frequency and containment performance

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Su, T.; Danziger, L.

    1996-01-01

    A database, called the IPE Database, has been developed that stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to the Nuclear Regulatory Commission's (NRC) Generic Letter GL88-20. The IPE Database is a collection of linked files which store information about plant design, core damage frequency (CDF), and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Database can be manipulated so that queries regarding individual or groups of plants can be answered using the IPE Database.

  17. A database for TMT interface control documents

    Science.gov (United States)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
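
    A JSON-style component description of the kind the abstract mentions, together with a minimal ingest step, might look like the sketch below. The field names (subsystem, component, receive, send) and the ingest keying are invented to mirror the prose; the real TMT file format and database are not reproduced here.

```python
import json

# Hypothetical command-interface description for one component.
interface_text = """
{
  "subsystem": "TCS",
  "component": "MountAssembly",
  "receive": [
    {"name": "setTarget", "args": [{"name": "ra",  "type": "double"},
                                   {"name": "dec", "type": "double"}]}
  ],
  "send": [{"name": "follow", "to": "PointingKernel"}]
}
"""

def ingest(text, db):
    """Parse one interface file and store it keyed by (subsystem, component)."""
    model = json.loads(text)
    key = (model["subsystem"], model["component"])
    db[key] = model          # a dict stands in for the real database
    return key

db = {}
print(ingest(interface_text, db))    # ('TCS', 'MountAssembly')
```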

  18. Report on Approaches to Database Translation. Final Report.

    Science.gov (United States)

    Gallagher, Leonard; Salazar, Sandra

    This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…

  19. Spot table - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: rpd_spot.zip File URL: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_spot.zip ... cDNA. (multiple entries)

  20. An Integrated Enterprise Accelerator Database for the SLC Control System

    International Nuclear Information System (INIS)

    2002-01-01

    Since its inception in the early 1980's, the SLC Control System has been driven by a highly structured memory-resident real-time database. While efficient, its rigid structure and file-based sources makes it difficult to maintain and extract relevant information. The goal of transforming the sources for this database into a relational form is to enable it to be part of a Control System Enterprise Database that is an integrated central repository for SLC accelerator device and Control System data with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations

  1. Database citation in supplementary data linked to Europe PubMed Central full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R

    2015-01-01

    In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
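
    The extraction step described (text-mining core database accession numbers out of article and supplementary-file text) is usually regex-driven. The two patterns below, one GenBank-style and one Pfam, are simplified examples chosen for illustration; they are not the paper's actual rule set, which covers many more resources and disambiguation steps.

```python
import re

PATTERNS = {
    "genbank": re.compile(r"\b[A-Z]{2}\d{6}\b"),    # e.g. AB123456 (simplified)
    "pfam":    re.compile(r"\bPF\d{5}\b"),          # e.g. PF00042
}

def extract_accessions(text):
    """Return {source: sorted unique accession numbers found in text}."""
    hits = {}
    for source, pattern in PATTERNS.items():
        found = sorted(set(pattern.findall(text)))
        if found:
            hits[source] = found
    return hits

text = "Sequences were deposited under AB123456; the globin domain is PF00042."
print(extract_accessions(text))
# {'genbank': ['AB123456'], 'pfam': ['PF00042']}
```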

  2. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files.

  3. DRES Database of Methods for the Analysis of Chemical Warfare Agents

    National Research Council Canada - National Science Library

    D'Agostino, Paul

    1997-01-01

    .... Update of the database continues as an ongoing effort, and the DRES Database of Methods for the Analysis of Chemical Warfare Agents is available in hardcopy form or as a softcopy ProCite or WordPerfect file...

  4. Database Overlap vs. Complementary Coverage in Forestry and Forest Products: Factors in Database Acquisition.

    Science.gov (United States)

    Hoover, Ryan E.

    This study examines (1) subject content, (2) file size, (3) types of documents indexed, (4) range of years spanned, and (5) level of indexing and abstracting in five databases which collectively provide extensive coverage of the forestry and forest products industries: AGRICOLA, CAB ABSTRACTS, FOREST PRODUCTS (AIDS), PAPERCHEM, and PIRA. The…

  5. Cluster - ClEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available LSDB Archive metadata entry for the ClEST cluster data. Data file: clest_cluster.zip; File URL: ftp://ftp.biosciencedbc.jp/archive/clest/LATEST/clest_cluster.zip.

  6. Experience in running relational databases on clustered storage

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    For the past eight years, the CERN IT Database group has based its backend storage on NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible direction of evolution that storage for databases could follow.

  7. NIST/Sandia/ICDD Electron Diffraction Database: A Database for Phase Identification by Electron Diffraction.

    Science.gov (United States)

    Carr, M J; Chambers, W F; Melgaard, D; Himes, V L; Stalick, J K; Mighell, A D

    1989-01-01

    A new database containing crystallographic and chemical information designed especially for application to electron diffraction search/match and related problems has been developed. The new database was derived from two well-established x-ray diffraction databases, the JCPDS Powder Diffraction File and NBS CRYSTAL DATA, and incorporates 2 years of experience with an earlier version. It contains 71,142 entries, with space group and unit cell data for 59,612 of those. Unit cell and space group information were used, where available, to calculate patterns consisting of all allowed reflections with d-spacings greater than 0.8 Å for ~59,000 of the entries. Calculated patterns are used in the database in preference to experimental x-ray data when both are available, since experimental x-ray data sometimes omit high d-spacing data which fall at low diffraction angles. Intensity data are not given when calculated spacings are used. A search scheme using chemistry and r-spacing (reciprocal d-spacing) has been developed. Other potentially searchable data in this new database include space group, Pearson symbol, unit cell edge lengths, reduced cell edge length, and reduced cell volume. Compound and/or mineral names, formulas, and journal references are included in the output, as well as pointers to corresponding entries in NBS CRYSTAL DATA and the Powder Diffraction File where more complete information may be obtained. Atom positions are not given. Rudimentary search software has been written to implement a chemistry and r-spacing bit map search. With typical data, a full search through ~71,000 compounds takes 10-20 seconds on a PDP 11/23-RL02 system.
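    The chemistry and r-spacing bit map search mentioned above can be imitated with plain bit masks: one bit per chemical element, so that a query for required elements reduces to bitwise operations. The element list and entries below are invented for illustration and do not reproduce the actual NIST/Sandia/ICDD encoding.

        ELEMENTS = ["H", "C", "N", "O", "Si", "Fe"]  # tiny illustrative element set

        def chem_mask(elements):
            """Encode a set of elements as a bit map, one bit per element."""
            mask = 0
            for el in elements:
                mask |= 1 << ELEMENTS.index(el)
            return mask

        entries = [
            ("quartz", chem_mask(["Si", "O"])),
            ("hematite", chem_mask(["Fe", "O"])),
        ]

        required = chem_mask(["Fe"])
        # An entry matches if its mask contains all required element bits.
        print([name for name, mask in entries if mask & required == required])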

  8. Gel table - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available LSDB Archive metadata entry for the RPD gel table. Data file: rpd_main.zip; File URL: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_main.zip; File size: 1 KB.

  9. tRNA - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available LSDB Archive metadata entry for RMG tRNA data. Data file: rmg_trna.zip; File URL: ftp://ftp.biosciencedbc.jp/archive/rmg/LATEST/rmg_trna.zip; File size: 1 KB.

  10. Disease - MicrobeDB.jp | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available LSDB Archive metadata entry for MicrobeDB.jp disease data. Data file: disease.tar.gz; File URL: ftp://ftp.biosciencedbc.jp/archive/microbedb/...

  11. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    Science.gov (United States)

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A national highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is relational, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. The database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many users.

  12. An Object-Relational Ifc Storage Model Based on Oracle Database

    Science.gov (United States)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. To accommodate this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish the mapping rules between data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Finally, we store the IFC data in the corresponding tables of the IFC database. In experiments, three different building models are used to demonstrate the effectiveness of our storage model. A comparison of the experimental statistics shows that IFC data are lossless during data exchange.
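    A rough sketch of the parse-extract-store steps, assuming IFC data arrive in the usual STEP text form with instance lines like #12=IFCWALL(...);. The paper's model maps each entity type to its own Oracle table; a single generic SQLite table is used here only to keep the example self-contained.

        import re
        import sqlite3

        # Matches STEP instance lines of the form "#id=ENTITY(args);".
        STEP_LINE = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);")

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE ifc_entity (id INTEGER, type TEXT, args TEXT)")

        sample = """#1=IFCWALL('guid-1',$,$);
        #2=IFCDOOR('guid-2',$,$);"""

        for line in sample.splitlines():
            m = STEP_LINE.match(line.strip())
            if m:
                conn.execute("INSERT INTO ifc_entity VALUES (?,?,?)",
                             (int(m.group(1)), m.group(2), m.group(3)))

        print(conn.execute("SELECT * FROM ifc_entity").fetchall())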

  13. Efficient analysis and extraction of MS/MS result data from Mascot™ result files

    Directory of Open Access Journals (Sweden)

    Sickmann Albert

    2005-12-01

    Full Text Available Abstract Background Mascot™ is a commonly used protein identification program for MS as well as for tandem MS data. When analyzing huge shotgun proteomics datasets with Mascot™'s native tools, limits of computing resources are easily reached. Up to now no application has been available as open source that is capable of converting the full content of Mascot™ result files from the original MIME format into a database-compatible tabular format, allowing direct import into database management systems and efficient handling of huge datasets analyzed by Mascot™. Results A program called mres2x is presented, which reads Mascot™ result files, analyzes them and extracts either selected or all information in order to store it in a single file or multiple files in formats which are easier to handle downstream of Mascot™. It generates different output formats. The output of mres2x in tab format is especially designed for direct high-performance import into relational database management systems using native tools of these systems. Having the data available in database management systems allows complex queries and extensive analysis. In addition, the original peak lists can be extracted in DTA format suitable for protein identification using the Sequest™ program, and the Mascot™ files can be split, preserving the original data format. During conversion, several consistency checks are performed. mres2x is designed to provide high throughput processing combined with the possibility to be driven by other computer programs. The source code including supplement material and precompiled binaries is available via http://www.protein-ms.de and http://sourceforge.net/projects/protms/. Conclusion The database upload allows regrouping of the MS/MS results using a database management system and complex analyzing queries using SQL without the need to run new Mascot™ searches when changing grouping parameters.
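    Since Mascot result files are MIME containers, the splitting step can be sketched with Python's standard email module (this is an illustration, not mres2x itself; the miniature document below only mimics the named-section layout of a real result file):

        from email import message_from_string

        MIME_DOC = """MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="gc0p4Jq0M2Yt08jU534c0p"

        --gc0p4Jq0M2Yt08jU534c0p
        Content-Type: application/x-Mascot; name="parameters"

        LICENSE=Licensed to: demo
        --gc0p4Jq0M2Yt08jU534c0p
        Content-Type: application/x-Mascot; name="summary"

        qmass1=1000.5
        --gc0p4Jq0M2Yt08jU534c0p--
        """

        def split_sections(text):
            """Split a MIME multipart result file into {section name: payload}."""
            msg = message_from_string(text)
            sections = {}
            for part in msg.walk():
                if not part.is_multipart():
                    sections[part.get_param("name", "unnamed")] = part.get_payload()
            return sections

        print(list(split_sections(MIME_DOC)))  # ['parameters', 'summary']

    (Note: if pasting this sketch, the embedded MIME string must be left-aligned, since MIME headers are whitespace-sensitive.)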

  14. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Full Text Available Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over networks and replicated on different distributed systems. It is proved that a satisfying level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that hosts the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers, and developers as part of a consistent security policy.
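    A minimal sketch of encrypting column values independently before they reach the database, using the Fernet recipe from the Python cryptography package; key management, the full row/column scheme analysed in the paper, and encryption of the client-server connection are out of scope here.

        import sqlite3
        from cryptography.fernet import Fernet  # pip install cryptography

        key = Fernet.generate_key()  # in practice, keys come from a key store
        f = Fernet(key)

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER, owner BLOB)")

        # Each column value is encrypted independently of the hosting table.
        conn.execute("INSERT INTO accounts VALUES (?, ?)",
                     (1, f.encrypt(b"Alice")))

        token = conn.execute(
            "SELECT owner FROM accounts WHERE id = 1").fetchone()[0]
        print(f.decrypt(token))  # b'Alice'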

  15. A Database Query Processing Model in Peer-To-Peer Network ...

    African Journals Online (AJOL)

    Peer-to-peer databases are becoming more prevalent on the internet for sharing and distributing applications, documents, files, and other digital media. The problem of answering large-scale ad hoc analysis queries, such as aggregation queries, on these databases poses unique challenges. This paper presents an ...

  16. AgdbNet – antigen sequence database software for bacterial typing

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2006-06-01

    Full Text Available Abstract Background Bacterial typing schemes based on the sequences of genes encoding surface antigens require databases that provide a uniform, curated, and widely accepted nomenclature of the variants identified. Due to the differences in typing schemes, imposed by the diversity of genes targeted, creating these databases has typically required the writing of one-off code to link the database to a web interface. Here we describe agdbNet, widely applicable web database software that facilitates simultaneous BLAST querying of multiple loci using either nucleotide or peptide sequences. Results Databases are described by XML files that are parsed by a Perl CGI script. Each database can have any number of loci, which may be defined by nucleotide and/or peptide sequences. The software is currently in use on at least five public databases for the typing of Neisseria meningitidis, Campylobacter jejuni and Streptococcus equi and can be set up to query internal isolate tables or suitably-configured external isolate databases, such as those used for multilocus sequence typing. The style of the resulting website can be fully configured by modifying stylesheets and through the use of customised header and footer files that surround the output of the script. Conclusion The software provides a rapid means of setting up customised Internet antigen sequence databases. The flexible configuration options enable typing schemes with differing requirements to be accommodated.
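    The XML-described database idea can be illustrated with a toy configuration: a file lists loci and their sequence types, and a script parses it to decide which searches to run. The XML layout below is invented and is not agdbNet's actual schema (which is parsed by a Perl CGI script).

        import xml.etree.ElementTree as ET

        config = """
        <database name="toy_typing">
          <locus name="porA" type="nucleotide"/>
          <locus name="fetA" type="peptide"/>
        </database>
        """

        root = ET.fromstring(config)
        for locus in root.findall("locus"):
            # A real implementation would dispatch nucleotide or peptide
            # BLAST searches according to the declared locus type.
            print(root.get("name"), locus.get("name"), locus.get("type"))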

  17. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    Science.gov (United States)

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  18. Radiological digital teaching file development: an overview

    International Nuclear Information System (INIS)

    Scarsbrook, A.F.; Foley, P.T.; Perriss, R.W.; Graham, R.N.J.

    2005-01-01

    Radiologists are collectors of interesting films for teaching purposes or for use in presentations and publications. Traditionally, hard copies of films have been stored in an organized fashion, usually in a filing cabinet or film library. This system has inherent limitations, such as the physical space required. Many of the shortcomings can be circumvented by development of an electronic teaching file. Whereas the implementation of an institutional radiological digital image database can require significant developmental effort and programming expertise, there are a number of web-based solutions which are freely available and can be relatively easily employed to establish a contemporary electronic image library. This article will review the various options and discuss the process of developing a digital image database

  19. GENISES: A GIS Database for the Yucca Mountain Site Characterization Project

    International Nuclear Information System (INIS)

    Beckett, J.

    1991-01-01

    This paper provides a general description of the Geographic Nodal Information Study and Evaluation System (GENISES) database design. The GENISES database is the Geographic Information System (GIS) component of the Yucca Mountain Site Characterization Project Technical Database (TDB). The GENISES database has been developed and is maintained by EG&G Energy Measurements, Inc., Las Vegas, NV (EG&G/EM). As part of the Yucca Mountain Project (YMP) Site Characterization Technical Data Management System, GENISES provides a repository for geographically oriented technical data. The primary objective of the GENISES database is to support the Yucca Mountain Site Characterization Project with an effective tool for describing, analyzing, and archiving geo-referenced data. The database design provides the maximum efficiency in input/output, data analysis, data management and information display. This paper provides the systematic approach or plan for the GENISES database design and operation. The paper also discusses the techniques used for data normalization or the decomposition of complex data structures as they apply to a GIS database. ARC/INFO and INGRES files are linked or joined by establishing "relate" fields through common attribute names. Thus, through these keys, ARC can access normalized INGRES files, greatly reducing redundancy and the size of the database

  20. The NGDC Seafloor Sediment Grain Size Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGDC (now NCEI) Seafloor Sediment Grain Size Database contains particle size data for over 17,000 seafloor samples worldwide. The file was begun by NGDC in 1976...

  1. Yucca Mountain Site Characterization Project bibliography, 1992--1994. Supplement 4

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-06-01

    Following a reorganization of the Office of Civilian Radioactive Waste Management in 1990, the Yucca Mountain Project was renamed Yucca Mountain Site Characterization Project. The title of this bibliography was also changed to Yucca Mountain Site Characterization Project Bibliography. Prior to August 5, 1988, this project was called the Nevada Nuclear Waste Storage Investigations. This bibliography contains information on this ongoing project that was added to the Department of Energy's Energy Science and Technology Database from January 1, 1992, through December 31, 1993. The bibliography is categorized by principal project participating organization. Participant-sponsored subcontractor reports, papers, and articles are included in the sponsoring organization's list. Another section contains information about publications on the Energy Science and Technology Database that were not sponsored by the project but have some relevance to it. Earlier information on this project can be found in the first bibliography DOE/TIC-3406, which covers 1977--1985, and its three supplements DOE/OSTI-3406(Suppl.1), DOE/OSTI-3406(Suppl.2), and DOE/OSTI-3406(Suppl.3), which cover information obtained during 1986--1987, 1988--1989, and 1990--1991, respectively. All entries in the bibliographies are searchable online on the NNW database file. This file can be accessed through the Integrated Technical Information System (ITIS) of the US Department of Energy (DOE).

  2. Yucca Mountain Site Characterization Project bibliography, 1992--1993. Supplement 4

    International Nuclear Information System (INIS)

    1992-06-01

    Following a reorganization of the Office of Civilian Radioactive Waste Management in 1990, the Yucca Mountain Project was renamed Yucca Mountain Site Characterization Project. The title of this bibliography was also changed to Yucca Mountain Site Characterization Project Bibliography. Prior to August 5, 1988, this project was called the Nevada Nuclear Waste Storage Investigations. This bibliography contains information on this ongoing project that was added to the Department of Energy's Energy Science and Technology Database from January 1, 1992, through December 31, 1993. The bibliography is categorized by principal project participating organization. Participant-sponsored subcontractor reports, papers, and articles are included in the sponsoring organization's list. Another section contains information about publications on the Energy Science and Technology Database that were not sponsored by the project but have some relevance to it. Earlier information on this project can be found in the first bibliography DOE/TIC-3406, which covers 1977--1985, and its three supplements DOE/OSTI-3406(Suppl.1), DOE/OSTI-3406(Suppl.2), and DOE/OSTI-3406(Suppl.3), which cover information obtained during 1986--1987, 1988--1989, and 1990--1991, respectively. All entries in the bibliographies are searchable online on the NNW database file. This file can be accessed through the Integrated Technical Information System (ITIS) of the US Department of Energy (DOE)

  3. Main - AT Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available LSDB Archive metadata entry for the AT Atlas main data. Data file: at_atlas_en.zip; File URL: ftp://ftp.biosciencedbc.jp/archive/at_atlas/LATE...

  4. Audio stream classification for multimedia database search

    Science.gov (United States)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval in huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries are continuously added to the database, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and ground-truth labels are difficult for non-expert human users to create. In our experiments, half of all the available audio files were randomly extracted and used as the training set; the remaining ones were used as the test set. The classifier was trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset were previously manually labeled into these three classes by domain experts.
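    A schematic reproduction of the experimental setup, assuming the audio files have already been reduced to fixed-length feature vectors; the feature extraction itself is omitted and random numbers stand in for features.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 8))       # placeholder feature vectors
        y = rng.integers(0, 3, size=300)    # 0=speech, 1=music, 2=song

        # Half of the files as training set, the rest as test set, as in the paper.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, random_state=0)

        cart = DecisionTreeClassifier()     # CART-style decision tree
        cart.fit(X_tr, y_tr)
        print("test accuracy:", cart.score(X_te, y_te))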

  5. Spatial Digital Database for the Geologic Map of Oregon

    Science.gov (United States)

    Walker, George W.; MacLeod, Norman S.; Miller, Robert J.; Raines, Gary L.; Connors, Katherine A.

    2003-01-01

    This report describes and makes available a geologic digital spatial database (orgeo) representing the geologic map of Oregon (Walker and MacLeod, 1991). The original paper publication was printed as a single map sheet at a scale of 1:500,000, accompanied by a second sheet containing map unit descriptions and ancillary data. A digital version of the Walker and MacLeod (1991) map was included in Raines and others (1996). The dataset provided by this open-file report supersedes the earlier published digital version (Raines and others, 1996). This digital spatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information for use in spatial analysis in a geographic information system (GIS). This database can be queried in many ways to produce a variety of geologic maps. This database is not meant to be used or displayed at any scale larger than 1:500,000 (for example, 1:100,000). This report describes the methods used to convert the geologic map data into a digital format, describes the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. Scanned images of the printed map (Walker and MacLeod, 1991), their correlation of map units, and their explanation of map symbols are also available for download.

  6. ESPSD, Nuclear Power Plant Siting Database

    International Nuclear Information System (INIS)

    Slezak, S.

    2001-01-01

    1 - Description of program or function: This database is a repository of comprehensive licensing and technical reviews of siting regulatory processes and acceptance criteria for advanced light water reactor (ALWR) nuclear power plants. The program is designed to be used by applicants for an early site permit or a combined construction permit/operating license (10 CFR Part 52, Subparts A and C) as input for the development of the application. The database is a complete, menu-driven, self-contained package that can search and sort the supplied data by topic, keyword, or other input. The software is designed for operation on IBM-compatible computers with DOS. 2 - Method of solution: The database is an R:BASE Runtime program with all the necessary database files included

  7. New developments in file-based infrastructure for ATLAS event selection

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D M [Argonne National Laboratory, Argonne, Illinois 60439 (United States); Nowak, M, E-mail: gemmeren@anl.go [Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)

    2010-04-01

    In ATLAS software, TAGs are event metadata records that can be stored in various technologies, including ROOT files and relational databases. TAGs are used to identify and extract events that satisfy certain selection predicates, which can be coded as SQL-style queries. TAG collection files support in-file metadata to store information describing all events in the collection. Event Selector functionality has been augmented to provide such collection-level metadata to subsequent algorithms. The ATLAS I/O framework has been extended to allow computational processing of TAG attributes to select or reject events without reading the event data. This capability enables physicists to use more detailed selection criteria than are feasible in an SQL query. For example, the TAGs contain enough information not only to check the number of electrons, but also to calculate their distance to the closest jet, a calculation that would be difficult to express in SQL. Another new development allows ATLAS to write TAGs directly into event data files. This feature can improve performance by supporting advanced event selection capabilities, including computational processing of TAG information, without the need for external TAG file or database access.
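    The electron-to-closest-jet example can be made concrete with plain Python over dictionary-like records; the attribute names below are invented, and the real framework operates on ATLAS TAG files rather than dictionaries.

        import math

        def delta_r(eta1, phi1, eta2, phi2):
            """Angular separation commonly used in collider physics."""
            dphi = abs(phi1 - phi2)
            if dphi > math.pi:
                dphi = 2 * math.pi - dphi
            return math.hypot(eta1 - eta2, dphi)

        def passes(tag):
            """Accept events with >= 2 electrons, each well separated from jets."""
            if tag["n_electron"] < 2:
                return False
            return all(
                min(delta_r(e_eta, e_phi, j_eta, j_phi)
                    for j_eta, j_phi in tag["jets"]) > 0.4
                for e_eta, e_phi in tag["electrons"]
            )

        event = {"n_electron": 2,
                 "electrons": [(0.5, 1.0), (-1.2, 2.5)],
                 "jets": [(2.0, -1.0)]}
        print(passes(event))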

  8. The ABC (Analysing Biomolecular Contacts)-database

    Directory of Open Access Journals (Sweden)

    Walter Peter

    2007-03-01

    Full Text Available As protein-protein interactions are one of the basic mechanisms in most cellular processes, it is desirable to understand the molecular details of protein-protein contacts and ultimately be able to predict which proteins interact. Interface areas on a protein surface that are involved in protein interactions exhibit certain characteristics. Therefore, several attempts have been made to distinguish protein interactions from each other and to categorize them. One way of classifying them is into the groups of transient and permanent interactions. Previously, two of the authors analysed several properties of transient complexes, such as the amino acid and secondary structure element composition and pairing preferences. Certainly, interfaces can be characterized by many more possible attributes, and this is a subject of intense ongoing research. Although several freely available online databases exist that illuminate various aspects of protein-protein interactions, we decided to construct a new database collecting all desired interface features and allowing for facile selection of subsets of complexes. MySQL is used as the database server, and the program logic was written in Java. Furthermore, several class extensions and tools such as Jmol were included to visualize the interfaces, and JFreeChart for the representation of diagrams and statistics. The contact data are automatically generated from standard PDB files by a Tcl/Tk script running through the molecular visualization package VMD. Currently the database contains 536 interfaces extracted from 479 PDB files, and it can be queried by various types of parameters. Here, we describe the database design and demonstrate its usefulness with a number of selected features.

  9. The European Southern Observatory-MIDAS table file system

    Science.gov (United States)

    Peron, M.; Grosbol, P.

    1992-01-01

    The new and substantially upgraded version of the Table File System (TFS) in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access to very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is that it also provides an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least-squares methods and user-defined functions are available. Finally, statistical tests of hypotheses and multivariate methods can also operate on tables.

  10. ATLAS DDM/DQ2 & NoSQL databases: Use cases and experiences

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    NoSQL databases. This includes distributed file systems like HDFS that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value/document stores, like HBase, Cassandra or MongoDB. These databases provide solutions to particular types...

  11. HCUP State Inpatient Databases (SID) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Inpatient Databases (SID) contain the universe of hospital inpatient discharge abstracts in States participating in HCUP that release their data through...

  12. MySQL/PHP web database applications for IPAC proposal submission

    Science.gov (United States)

    Crane, Megan K.; Storrie-Lombardi, Lisa J.; Silbermann, Nancy A.; Rebull, Luisa M.

    2008-07-01

    The Infrared Processing and Analysis Center (IPAC) is NASA's multi-mission center of expertise for long-wavelength astrophysics. Proposals for various IPAC missions and programs are ingested via MySQL/PHP web database applications. Proposers use web forms to enter coversheet information and upload PDF files related to the proposal. Upon proposal submission, a unique directory is created on the webserver into which all of the uploaded files are placed. The coversheet information is converted into a PDF file using a PHP extension called FPDF. The files are concatenated into one PDF file using the command-line tool pdftk and then forwarded to the review committee. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
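    The concatenation step can be reproduced with a small wrapper around the pdftk command line (the tool must be on the PATH, and the file names are placeholders; the IPAC application itself drives these steps from PHP rather than Python).

        import subprocess

        def concatenate_pdfs(inputs, output):
            """Concatenate PDFs with the pdftk command-line tool.

            Equivalent to: pdftk in1.pdf in2.pdf ... cat output out.pdf
            """
            subprocess.run(["pdftk", *inputs, "cat", "output", output],
                           check=True)

        concatenate_pdfs(["coversheet.pdf", "science_case.pdf"], "proposal.pdf")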

  13. The crystallographic information file (CIF): A new standard archive file for crystallography

    International Nuclear Information System (INIS)

    Hall, S.R.; Allen, F.H.; Brown, I.D.

    1991-01-01

    The specification of a new standard Crystallographic Information File (CIF) is described. Its development is based on the Self-Defining Text Archive and Retrieval (STAR) procedure. The CIF is a general, flexible and easily extensible free-format archive file; it is human and machine readable and can be edited by a simple editor. The CIF is designed for the electronic transmission of crystallographic data between individual laboratories, journals and databases: It has been adopted by the International Union of Crystallography as the recommended medium for this purpose. The file consists of data names and data items, together with a loop facility for repeated items. The data names, constructed hierarchically so as to form data categories, are self-descriptive within a 32-character limit. The sorted list of data names, together with their precise definitions, constitutes the CIF dictionary (core version 1991). The CIF core dictionary is presented in full and covers the fundamental and most commonly used data items relevant to crystal structure analysis. The dictionary is also available as an electronic file suitable for CIF computer applications. Future extensions to the dictionary will include data items used in more specialized areas of crystallography. (orig.)
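    A deliberately minimal reader for the two constructs named above, simple data items and the loop_ facility, illustrates how little is needed to consume the basic format; it ignores quoting, multi-line values, and much else that the full specification defines.

        def parse_cif(text):
            """Very small CIF subset: '_name value' items and loop_ tables."""
            items = {}
            lines = [l.strip() for l in text.splitlines() if l.strip()]
            i = 0
            while i < len(lines):
                line = lines[i]
                if line == "loop_":
                    names = []
                    i += 1
                    while i < len(lines) and lines[i].startswith("_"):
                        names.append(lines[i]); i += 1
                    rows = []
                    while i < len(lines) and not lines[i].startswith(
                            ("_", "loop_", "data_")):
                        rows.append(dict(zip(names, lines[i].split()))); i += 1
                    items[tuple(names)] = rows
                elif line.startswith("_"):
                    name, _, value = line.partition(" ")
                    items[name] = value.strip()
                    i += 1
                else:
                    i += 1  # data_ headers and comments are skipped
            return items

        sample = """data_example
        _cell_length_a 5.431
        loop_
        _atom_site_label
        _atom_site_occupancy
        Si1 1.0
        O1 1.0
        """
        print(parse_cif(sample))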

  14. Analysis list - ChIP-Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available LSDB Archive metadata entry for the ChIP-Atlas analysis list. Data file: chip_atlas_analysis_list.zip; File URL: ftp://ftp.biosciencedbc.jp/archive/chip-atlas/LATEST/chip_atlas_analysis_list.zip; File size: 44.8 KB.

  15. YAC contig information - RGP physicalmap | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available LSDB Archive metadata entry for YAC contig information from the RGP physical map. Description of data contents: YAC contigs on the rice chromosomes. Data file: rgp_physicalmap_yac_contigs.zip; File URL: ftp://ftp.biosciencedbc.jp/archive/rgp-physicalmap/LATEST/... Data items include the chromosome number (Chrom. No.), the region number, and the file name of the rice physical map image.

  16. Physician Fee Schedule National Payment Amount File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The significant size of the Physician Fee Schedule Payment Amount File-National requires that database programs (e.g., Access, dBase, FoxPro, etc.) be used to read...

  17. The Androgen Receptor Gene Mutations Database.

    Science.gov (United States)

    Gottlieb, B; Lehvaslaiho, H; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1998-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 272 to 309 in the past year. We have expanded the database: (i) by giving each entry an accession number; (ii) by adding information on the length of polymorphic polyglutamine (polyGln) and polyglycine (polyGly) tracts in exon 1; (iii) by adding information on large gene deletions; (iv) by providing a direct link with a completely searchable database (courtesy EMBL-European Bioinformatics Institute). The addition of the exon 1 polymorphisms is discussed in light of their possible relevance as markers for predisposition to prostate or breast cancer. The database is also available on the internet (http://www.mcgill.ca/androgendb/), from the EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FileMaker Pro or Word file (MC33@musica.mcgill.ca).

  18. ORFer--retrieval of protein sequences and open reading frames from GenBank and storage into relational databases or text files.

    Science.gov (United States)

    Büssow, Konrad; Hoffmann, Steve; Sievert, Volker

    2002-12-19

    Functional genomics involves the parallel experimentation with large sets of proteins. This requires management of large sets of open reading frames as a prerequisite of the cloning and recombinant expression of these proteins. A Java program was developed for retrieval of protein and nucleic acid sequences and annotations from NCBI GenBank, using the XML sequence format. Annotations retrieved by ORFer include sequence name, organism and also the completeness of the sequence. The program has a graphical user interface, although it can be used in a non-interactive mode. For protein sequences, the program also extracts the open reading frame sequence, if available, and checks its correct translation. ORFer accepts user input in the form of single or lists of GenBank GI identifiers or accession numbers. It can be used to extract complete sets of open reading frames and protein sequences from any kind of GenBank sequence entry, including complete genomes or chromosomes. Sequences are either stored with their features in a relational database or can be exported as text files in Fasta or tabulator delimited format. The ORFer program is freely available at http://www.proteinstrukturfabrik.de/orfer. The ORFer program allows for fast retrieval of DNA sequences, protein sequences and their open reading frames and sequence annotations from GenBank. Furthermore, storage of sequences and features in a relational database is supported. Such a database can supplement a laboratory information system (LIMS) with appropriate sequence information.
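    For comparison, the same kind of retrieval can be sketched today with Biopython rather than ORFer itself (which is a Java program parsing GenBank XML); the contact address is a placeholder, the accession is only an example, and the call requires network access to NCBI.

        from Bio import Entrez, SeqIO  # pip install biopython

        Entrez.email = "you@example.org"  # NCBI requires a contact address

        handle = Entrez.efetch(db="nucleotide", id="NM_000546",
                               rettype="gb", retmode="text")
        record = SeqIO.read(handle, "genbank")
        handle.close()

        # Walk the features and report coding sequences with their translations.
        for feature in record.features:
            if feature.type == "CDS":
                protein_id = feature.qualifiers.get("protein_id", ["?"])[0]
                translation = feature.qualifiers.get("translation", [""])[0]
                print(protein_id, translation[:30] + "...")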

  19. Description of geological data in SKBs database GEOTAB

    International Nuclear Information System (INIS)

    Stark, T.

    1988-01-01

    Measurements for the characterization of geological, geophysical, hydrogeological and hydrochemical conditions have been performed since 1977 in specific site investigations as well as for geoscientific projects. The database comprises four main groups of data volumes. These are: geological data, geophysical data, hydrogeological data, and hydrochemical data. In the database, background information from the investigations and results are stored on-line on the VAX 750, while raw data are either stored on-line or on magnetic tapes. This report deals with geological data and describes the data flow from the measurements at the sites to the result tables in the database. All of the geological investigations were carried out by the Swedish Geological Survey, and since July 1982 by the Swedish Geological Co, SGAB. The geological investigations have been divided into three categories, and each category is stored separately in the database. These are: surface fractures, core mapping, and chemical analyses. At SGU/SGAB the geological data were stored on-line on a PRIME 750 minicomputer, on microcomputer floppy disks, or in filed paper protocols. During 1987 the data files were transferred from SGAB to data files on the VAX computer. In the report the data flow of each of the three geological information categories is described separately. (L.E.)

  20. An evaluated neutronic data file for bismuth

    International Nuclear Information System (INIS)

    Guenther, P.T.; Lawson, R.D.; Meadows, J.W.; Smith, A.B.; Smith, D.L.; Sugimoto, M.; Howerton, R.J.

    1989-11-01

    A comprehensive evaluated neutronic data file for bismuth, extending from 10^-5 eV to 20.0 MeV, is described. The experimental database, the application of the theoretical models, and the evaluation rationale are outlined. Attention is given to uncertainty specification, and comparisons are made with the prior ENDF/B-V evaluation. The corresponding numerical file, in ENDF/B-VI format, has been transmitted to the National Nuclear Data Center, Brookhaven National Laboratory. 106 refs., 10 figs., 6 tabs

  1. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of the majority of these databases are their creators. There are several reasons for this: incompatibility, the specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for developers and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor that sent the result. Each contributor has their own profile, which allows one to estimate the reliability of the data. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted as a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognised on entry. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
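    The upload format quoted above is straightforward to consume; a sketch under the assumption that the fields are semicolon-separated exactly as listed:

        import csv, io

        FIELDS = ["name", "latitude", "longitude", "station_type",
                  "parameter_type", "parameter_value", "date"]

        def read_results(stream):
            """Parse the semicolon-separated upload format described above."""
            for row in csv.DictReader(stream, fieldnames=FIELDS, delimiter=";"):
                for key in ("latitude", "longitude", "parameter_value"):
                    row[key] = float(row[key])
                yield row

        sample = "Lake-1;55.123456;037.654321;lake;pH;7.4;2011-06-01\n"
        for result in read_results(io.StringIO(sample)):
            print(result["name"], result["parameter_type"],
                  result["parameter_value"])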

  2. The evaluated gamma-ray activation file (EGAF)

    International Nuclear Information System (INIS)

    Firestone, R.B.; Molnar, G.L.; Revay, Zs.; Belgya, T.; McNabb, D.P.; Sleaford, B.W.

    2004-01-01

    The Evaluated Gamma-ray Activation File (EGAF), a new database of prompt and delayed neutron-capture γ-ray cross sections, has been prepared as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project to develop a "Database of Prompt Gamma-rays from Slow Neutron Capture for Elemental Analysis." Recent elemental γ-ray cross-section measurements performed with the guided neutron beam at the Budapest Reactor have been combined with data from the literature to produce the EGAF database. EGAF contains thermal cross sections for ~35,000 prompt and delayed γ-rays from 262 isotopes. New precise total thermal radiative cross sections have been derived for many isotopes from the primary and secondary γ-ray cross sections and additional level scheme data. An IAEA TECDOC describing the EGAF evaluation and tabulating the most prominent γ-rays will be published in 2004. The TECDOC will include a CD-ROM containing the EGAF database in both ENSDF and tabular formats with an interactive viewer for searching and displaying the data. The Isotopes Project, Lawrence Berkeley National Laboratory, continues to maintain and update the EGAF file. These data are available on the Internet from both the IAEA and Isotopes Project websites

  3. Proteomics: Protein Identification Using Online Databases

    Science.gov (United States)

    Eurich, Chris; Fields, Peter A.; Rice, Elizabeth

    2012-01-01

    Proteomics is an emerging area of systems biology that allows simultaneous study of thousands of proteins expressed in cells, tissues, or whole organisms. We have developed this activity to enable high school or college students to explore proteomic databases using mass spectrometry data files generated from yeast proteins in a college laboratory…

  4. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need for a PSA information database for performing a PSA has been growing rapidly. Performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results, and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. The Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility to PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application in view of two areas: database design and data (document) service

  5. Database description for the biosphere code BIOMOD

    International Nuclear Information System (INIS)

    Kane, P.; Thorne, M.C.; Coughtrey, P.J.

    1983-03-01

    The development of a biosphere model for use in comparative radiological assessments of UK low and intermediate level waste repositories is discussed. The nature, content and sources of data contained in the four files that comprise the database for the biosphere code BIOMOD are described. (author)

  6. Foundations of database systems : an introductory tutorial

    NARCIS (Netherlands)

    Paredaens, J.; Paredaens, J.; Tenenbaum, L. A.

    1994-01-01

    A very short overview is given of the principles of databases. The entity-relationship model is used to define the conceptual base. Furthermore, file management, the hierarchical model, the network model, the relational model and the object-oriented model are discussed.

  7. The HITRAN 2008 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Gordon, I.E.; Barbe, A.; Benner, D.Chris; Bernath, P.F.; Birk, M.; Boudon, V.; Brown, L.R.; Campargue, A.; Champion, J.-P.; Chance, K.; Coudert, L.H.; Dana, V.; Devi, V.M.; Fally, S.; Flaud, J.-M.

    2009-01-01

    This paper describes the status of the 2008 edition of the HITRAN molecular spectroscopic database. The new edition is the first official public release since the 2004 edition, although a number of crucial updates had been made available online since 2004. The HITRAN compilation consists of several components that serve as input for radiative-transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e. spectra in which the individual lines are not resolved; individual line parameters and absorption cross-sections for bands in the ultraviolet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 42 molecules including many of their isotopologues.

  8. Rhinoplasty perioperative database using a personal digital assistant.

    Science.gov (United States)

    Kotler, Howard S

    2004-01-01

    To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld operating with 8 megabytes of memory. The handheld and desktop databases were maintained secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid search using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and enhancement of surgical skills.

  9. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  10. User's and reference guide to the INEL RML/analytical radiochemistry sample tracking database version 1.00

    International Nuclear Information System (INIS)

    Femec, D.A.

    1995-09-01

    This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices

  11. A study on relational ENSDF databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Song Xiangxiang; Ye Weiguo; Liu Wenlong; Feng Yuqing; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Guo Zhiyu; Huang Xiaolong; Liu Tingjin; China Inst. of Atomic Energy, Beijing

    2007-01-01

    A relational ENSDF library software package has been designed and released. Using relational databases, object-oriented programming and web-based technology, this software offers online data services from a centralized repository of data, including international ENSDF files for nuclear structure and decay data. The software can easily reconstruct nuclear data in the original ENSDF format from the relational database. The computer programs providing support for database management and online data services via the Internet are based on the Linux implementation of PHP and the MySQL software, and are platform independent in a wider sense. (authors)

  12. ATLAS database application enhancements using Oracle 11g

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at the LHC relies on databases for detector online data-taking; storage and retrieval of configurations, calibrations and alignments; post-data-taking analysis; file management over the grid; job submission and management; and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN have...

  13. NESSY, a relational PC database for nuclear structure and decay data

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.; Trukhanov, S.K.

    1994-11-01

    The universal relational database NESSY (New ENSDF Search SYstem) based on the international ENSDF system (Evaluated Nuclear Structure Data File) is described. NESSY, which was developed for IBM compatible PC, provides high efficiency processing of ENSDF information for searches and retrievals of nuclear physics data. The principle of the database development and examples of applications are presented. (author)

  14. Federating LHCb datasets using the DIRAC File catalog

    CERN Document Server

    Haen, Christophe; Frank, Markus; Tsaregorodtsev, Andrei

    2015-01-01

    In the distributed computing model of LHCb, the File Catalog (FC) is a central component that keeps track of each file and replica stored on the Grid. It federates the LHCb data files in a logical namespace used by all LHCb applications. As a replica catalog, it is used for brokering jobs to sites where their input data are meant to be present, but also by jobs for finding alternative replicas if necessary. The LCG File Catalog (LFC), used originally by LHCb and other experiments, is now being retired and needs to be replaced. The DIRAC File Catalog (DFC) was developed within the framework of the DIRAC Project and presented during CHEP 2012. From the technical point of view, the code powering the DFC follows aspect-oriented programming (AOP): each type of entity that is manipulated by the DFC (Users, Files, Replicas, etc.) is treated as a separate 'concern' in the AOP terminology. Hence, the database schema can also be adapted to the needs of a Virtual Organization. LHCb opted for a highly tuned MySQL database...

  15. The Steward Observatory asteroid relational database

    Science.gov (United States)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1991-01-01

    The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool to undertake studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids, or output files suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. SOARD has already provided data to fulfill requests by members of the astronomical community, and it continues to grow as data are added to the database and new features are added to the program.

  16. HCUP State Ambulatory Surgery Databases (SASD) - Restricted Access Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Ambulatory Surgery Databases (SASD) contain the universe of hospital-based ambulatory surgery encounters in participating States. Some States include...

  17. Databases in Cloud - Solutions for Developing Renewable Energy Informatics Systems

    Directory of Open Access Journals (Sweden)

    Adela BARA

    2017-08-01

    Full Text Available The paper presents the data model of a decision support prototype developed for generation monitoring, forecasting and advanced analysis in the renewable energy field. The solutions considered for developing this system include databases in the cloud, XML integration, spatial data representation and multidimensional modeling. The material shows the advantages of cloud databases and spatial data representation and their implementation in Oracle Database 12c. It also covers data integration and multidimensional analysis. Output data are presented using dashboards.

  18. Development of the geometry database for the CBM experiment

    Science.gov (United States)

    Akishina, E. P.; Alexandrov, E. I.; Alexandrov, I. N.; Filozova, I. A.; Friese, V.; Ivanov, V. V.

    2018-01-01

    The paper describes the current state of the Geometry Database (Geometry DB) for the CBM experiment. The main purpose of this database is to provide convenient tools for: (1) managing the geometry modules; (2) assembling various versions of the CBM setup as a combination of geometry modules and additional files. The CBM users of the Geometry DB may use both GUI (Graphical User Interface) and API (Application Programming Interface) tools for working with it.

  19. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    Science.gov (United States)

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to PDBj users via the World Wide Web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into a corresponding relational table that is named after the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform a simple keyword search or an 'Advanced Search' that can specify various conditions on the entries. More experienced users can query the database using SQL statements, which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/
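
    To make the "one table per XPath" idea concrete, the sketch below stores an XML element in a relational table whose name mirrors its XPath and queries it back with plain SQL. The XPath, table and column names are invented for illustration and are not the actual PDBj Mine schema; SQLite stands in for the production RDB.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        # Quoted identifier, so the XPath-like name is a legal table name.
        conn.execute('CREATE TABLE "/PDBx:datablock/PDBx:entity" '
                     "(pdbid TEXT, entity_id TEXT, pdbx_description TEXT)")
        conn.execute('INSERT INTO "/PDBx:datablock/PDBx:entity" VALUES '
                     "('1abc', '1', 'LYSOZYME')")

        rows = conn.execute(
            'SELECT pdbid, pdbx_description FROM "/PDBx:datablock/PDBx:entity" '
            "WHERE pdbx_description LIKE ?", ("%LYSOZYME%",)).fetchall()
        print(rows)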

  20. Development of subsurface drainage database system for use in environmental management issues

    International Nuclear Information System (INIS)

    Azhar, A.H.; Rafiq, M.; Alam, M.M.

    2007-01-01

    A simple, user-friendly, menu-driven system for database management pertinent to the Impact of Subsurface Drainage Systems on Land and Water Conditions (ISLaW) has been developed for use in environment-management issues of the drainage areas. This database has been developed by integrating four software packages, viz., Microsoft Excel, MS Word, Acrobat and MS Access. The information, in the form of tables and figures, with respect to various drainage projects has been presented in MS Word files. The major data-sets of the various subsurface drainage projects included in the ISLaW database are: i) technical aspects, ii) groundwater and soil-salinity aspects, iii) socio-technical aspects, iv) agro-economic aspects, and v) operation and maintenance aspects. The various ISLaW files can be accessed just by clicking the menu buttons of the database system. This database not only gives feedback on the functioning of different subsurface drainage projects with respect to the above-mentioned aspects, but also serves as a resource document for these data for future studies on other drainage projects. The developed database system is useful for planners, designers and farmers' organisations for improved operation of existing drainage projects as well as development of future ones. (author)

  1. Description of the process used to create 1992 Hanford Mortality Study database

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, E.S.; Buchanan, J.A.; Holter, N.A.

    1992-12-01

    An updated and expanded database for the Hanford Mortality Study has been developed by PNL's Epidemiology and Biometry Department. The purpose of this report is to document this process. The primary sources of data were the Occupational Health History (OHH) files, maintained by the Hanford Environmental Health Foundation (HEHF) and including demographic data and job histories; the Hanford Mortality (HMO) files, also maintained by HEHF and including information on deaths of Hanford workers; the Occupational Radiation Exposure (ORE) files, maintained by PNL's Health Physics Department and containing data on external dosimetry; and a file of workers with confirmed internal depositions of radionuclides, also maintained by PNL's Health Physics Department. This report describes each of these files in detail, and also describes the many edits that were performed to address the consistency and accuracy of data within and between these files.

  2. Database for 238U inelastic scattering cross section evaluation

    International Nuclear Information System (INIS)

    Kanda, Yukinori; Fujikawa, Noboru; Kawano, Toshihiko

    1993-10-01

    There are discrepancies among the evaluated neutron inelastic scattering cross sections for 238U in the evaluated nuclear data files JENDL-3, ENDF/B-VI, JEF-2, BROND-2 and CENDL-2. A re-evaluation is being discussed internationally in order to obtain a result that can be commonly accepted by experts worldwide. This report has been compiled to review the discrepancies among the evaluations in the present data files and to provide a common database for the re-evaluation work. (author)

  3. UnoViS: the MedIT public unobtrusive vital signs database.

    Science.gov (United States)

    Wartzek, Tobias; Czaplik, Michael; Antink, Christoph Hoog; Eilebrecht, Benjamin; Walocha, Rafael; Leonhardt, Steffen

    2015-01-01

    While PhysioNet is a large database of standard clinical vital signs measurements, no such database exists for unobtrusively measured signals. This inhibits progress in the vital area of signal processing for unobtrusive medical monitoring, as not everybody owns the specific measurement systems needed to acquire the signals. Furthermore, if no common database exists, a comparison between different signal processing approaches is not possible. This gap is closed by our UnoViS database. It contains recordings from various scenarios, ranging from a clinical study to measurements obtained while driving a car. Currently, 145 records with a total of 16.2 h of measurement data are available, provided as MATLAB files or in the PhysioNet WFDB file format. In its initial state, only (multichannel) capacitive ECG and unobtrusive PPG signals are included, together with a reference ECG. All ECG signals contain annotations by a peak detector and by a medical expert. A dataset from a clinical study contains further clinical annotations. Additionally, supplementary functions are provided which simplify the usage of the database and thus the development and evaluation of new algorithms. The development of urgently needed methods for very robust parameter extraction or robust signal fusion in view of frequent severe motion artifacts in unobtrusive monitoring is now possible with the database.
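
    A hedged sketch of loading one UnoViS record distributed as a MATLAB file; the file name and the variables inside it are assumptions for illustration, not the documented layout of the database. Records in the PhysioNet WFDB format can be read with the wfdb-python package instead.

        from scipy.io import loadmat

        def load_record(path):
            """Return the dict of variables stored in one .mat record."""
            return loadmat(path, squeeze_me=True)

        # rec = load_record("unovis_clinical_001.mat")   # hypothetical file name
        # print(rec.keys())                              # inspect the available signals

        # For WFDB-format records, wfdb-python offers an equivalent reader:
        # import wfdb
        # record = wfdb.rdrecord("unovis_clinical_001")
        # print(record.sig_name, record.fs)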

  4. The rice growth image files - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data detail for The Rice Growth Monitoring for The Phenotypic Functional Analysis. Data name: The rice growth image files; DOI: 10.18908/lsdba.nbdc00945-004. Description of data contents: the rice growth image files, categorized based on file size. Data file name: image files (directory); File URL: ftp://ftp.biosciencedbc.jp/archive/agritogo-rice-phenome/LATEST/image...

  5. Retrieving high-resolution images over the Internet from an anatomical image database

    Science.gov (United States)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images, to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and to retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; and the Java application server interface to the database, which organizes data returned to the user, together with its distribution engine that allows users to download image files individually and/or in batch mode.

  6. Structure and representation of data elements on factual database - SIST activity in Japan

    International Nuclear Information System (INIS)

    Nakamoto, H.; Onodera, N.

    1990-05-01

    A factual database takes a variety of forms, and its data structures produce various kinds of records composed of a great number of data items, which differ from file to file. Second, a factual database requires greater specialization in content analysis during preparation, and users wish to further process downloaded data for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data onto public files. In this paper we discuss, on a practical basis, problems and thoughts about the structure and representation of the data elements contained in numerical information. The guideline discussed here is being drafted under Government sponsorship and is being applied to build a database of space experiments. The guideline covers the expression, unification, notification and handling of numerical information in machine-readable form, such as numerical values, numerical formulae, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, error information and so on. (author)

  7. Structure and representation of data elements on factual database - SIST activity in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Nakamoto, H [Integrated Researches for Information Science, Tokyo (Japan); Onodera, N [Japan Information Center of Science and Technology, Tokyo (Japan)

    1990-05-01

    A factual database takes a variety of forms, and its data structures produce various kinds of records composed of a great number of data items, which differ from file to file. Second, a factual database requires greater specialization in content analysis during preparation, and users wish to further process downloaded data for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data onto public files. In this paper we discuss, on a practical basis, problems and thoughts about the structure and representation of the data elements contained in numerical information. The guideline discussed here is being drafted under Government sponsorship and is being applied to build a database of space experiments. The guideline covers the expression, unification, notification and handling of numerical information in machine-readable form, such as numerical values, numerical formulae, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, error information and so on. (author).

  8. The Versatility of an Online Database for Spent Nuclear Fuel Management

    International Nuclear Information System (INIS)

    Canas, L.R.

    1997-12-01

    A vast and diverse database on spent nuclear fuel (SNF) supports the mission of the Westinghouse Savannah River Company's (WSRC) Spent Fuel Storage Division (SFSD) at the Department of Energy's (DOE) Savannah River Site (SRS) chemical-nuclear complex. Prior to 1994, this documentation resided in multiple files maintained by various organizations across SRS. Since that time, in an attempt to improve the efficiency of SNF data retrieval upon demand, the files have been substantially rearranged and consolidated. Moreover, selected data have been captured electronically in a web-style, online Spent Nuclear Fuel Database (SNFD) for quick and easy access from any personal computer on the SRS intranet. Originally released in August 1996, the SNFD has continued to expand at regular intervals commensurate with the SFSD mission

  9. JT-60 database system, 2

    International Nuclear Information System (INIS)

    Itoh, Yasuhiro; Kurihara, Kenichi; Kimura, Toyoaki.

    1987-07-01

    The JT-60 central control system, 'ZENKEI', collects the control and instrumentation data relevant to each discharge, together with device status data for plant monitoring. The former, the engineering data, amounts to about 3 Mbytes per shot. The 'ZENKEI' control system, which consists of seven minicomputers for on-line real-time control, has little capacity for handling such a large amount of data for physical and engineering analysis. In order to solve this problem, it was planned to establish the experimental database on the front-end processor (FEP) of a general-purpose large computer in the JAERI Computer Center. The database management system (DBMS) has therefore been developed for creating the database during the shot interval. The engineering data are shipped up from 'ZENKEI' to the FEP through a dedicated communication line after each shot. A hierarchical data model has been adopted for this database, which consists of data files organized in a tree structure with three keys: system, discharge type and shot number. The JT-60 DBMS provides packages of data-handling subroutines for interfacing the database with users' application programs. Subroutine packages supporting graphic processing and access-control functions for database security are also provided in this DBMS. (author)
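
    As a toy illustration of the tree-structured keying described above (not the actual JT-60 DBMS interface), the sketch below files shot records under the three keys and retrieves them; all key values are invented.

        from collections import defaultdict

        def tree():
            # Nested defaultdict: levels are created on first access.
            return defaultdict(tree)

        db = tree()
        # system -> discharge type -> shot number -> record
        db["ZENKEI"]["OH"][12345] = {"plasma_current_kA": 2100}

        def lookup(system, discharge_type, shot):
            return db[system][discharge_type].get(shot)

        print(lookup("ZENKEI", "OH", 12345))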

  10. Bibliographical database of radiation biological dosimetry and risk assessment: Part 2

    International Nuclear Information System (INIS)

    Straume, T.; Ricker, Y.; Thut, M.

    1990-09-01

    This is part II of a database constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and keyworded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on authors, key words, title, year, journal name, or publication number. Photocopies of the publications contained in the database are maintained in a file that is numerically arranged by our publication acquisition numbers. This volume contains 1048 additional entries, which are listed in alphabetical order by author. The computer software used for the database is a simple but sophisticated relational database program that permits quick information access, high flexibility, and the creation of customized reports. This program is inexpensive and is commercially available for the Macintosh and the IBM PC. Although the database entries were made using a Macintosh computer, we have the capability to convert the files into the IBM PC version. As of this date, the database cites 2260 publications. Citations in the database are from 200 different scientific journals. There are also references to 80 books and published symposia, and 158 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed within the scientific literature, although a few journals clearly predominate. The journals publishing the largest number of relevant papers are Health Physics, with a total of 242 citations in the database, and Mutation Research, with 185 citations. Other journals with over 100 citations in the database are Radiation Research, with 136, and International Journal of Radiation Biology, with 132.

  11. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.
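
    To make the "first-level cut" idea concrete, here is a hedged sketch of an event-level selection on a tag table, followed by grouping the matching events by file GUID so a file-to-dataset lookup could then be handed to the DDM system. The table and column names (tag_events, n_muons, met_gev, guid) are invented and are not the real ATLAS TAG schema.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE tag_events
                        (run_no INTEGER, event_no INTEGER, guid TEXT,
                         n_muons INTEGER, met_gev REAL)""")
        conn.execute("INSERT INTO tag_events VALUES "
                     "(200000, 42, 'FILE-GUID-1', 2, 55.0)")

        # First-level cut on event-level metadata, then collect the GUIDs of
        # the files that hold the selected events.
        hits = conn.execute("""SELECT run_no, event_no, guid FROM tag_events
                               WHERE n_muons >= 2 AND met_gev > 40""").fetchall()
        guids = {g for _, _, g in hits}
        print(len(hits), "events in", len(guids), "files")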

  12. Development of reliability databases and the particular requirements of probabilistic risk analyses

    International Nuclear Information System (INIS)

    Meslin, T.

    1989-01-01

    Nuclear utilities have an increasing need to develop reliability databases for their operating experience. The purposes of these databases are often multiple, covering both equipment maintenance aspects and probabilistic risk analyses. EDF has therefore been developing experience feedback databases, including the Reliability Data Recording System (SRDF) and the Event File, as well as the history of numerous operating documents. Furthermore, since the end of 1985, EDF has been preparing a probabilistic safety analysis applied to one 1,300 MWe unit, for which a large amount of data of French origin is necessary. This data concerns both component reliability parameters and initiating event frequencies. The study has thus been an opportunity to try out the performance of the databases in a specific application, as well as to carry out in-depth audits of a number of nuclear sites in order to validate numerous results. Computer-aided data collection is also on trial in a number of plants. After describing the EDF operating experience feedback files, we discuss the particular requirements of probabilistic risk analyses, and the resources implemented by EDF to satisfy them. (author). 5 refs

  13. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted

  14. (reprocessed)pooled_ctss - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available FANTOM5 data file name: (reprocessed)pooled_ctss (Homo sapiens); File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/re...; additional File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/reprocessed/mm10...

  15. Conformation analysis - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ConfC conformation analysis data detail. Data name: Conformation analysis; DOI: 10.18908/lsdba.nbdc00400-005. Description of data contents: results of conformation analysis for PDB files (raw data). File size: 63.9 MB. Number of data entries: 352.

  16. Design and realization of reports of database in VC++. net

    International Nuclear Information System (INIS)

    Zhu Haijun; Shen Liren; Liu Dekang

    2006-01-01

    The design and realization of database reports based on VC++.net is presented. First, a report template using Word-format files is introduced, and the method of filling in the data table from the database is described in detail. The preview and printing functions, which call the Word software, are analyzed. The key code for generating reports automatically with Visual C++.net is given. (authors)

  17. [Construction of chemical information database based on optical structure recognition technique].

    Science.gov (United States)

    Lv, C Y; Li, M N; Zhang, L R; Liu, Z M

    2018-04-18

    To create a protocol that could be used to construct a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as the fundamental dataset. Chemical structures were transformed from published documents and images into machine-readable data by using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references, including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information that could be used to populate a structured chemical database from figures, tables, and textual paragraphs. After this step, detailed manual revision and annotation were conducted in order to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate the physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated, including the molecular structure data file, molecular information, bibliographical references, predictable attributes and known bioactivities. With the canonical simplified molecular-input line-entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecule number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully. A total of 174 research

  18. Description of the process used to create 1992 Hanford Mortality Study database

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, E. S.; Buchanan, J. A.; Holter, N. A.

    1992-12-01

    An updated and expanded database for the Hanford Mortality Study has been developed by PNL's Epidemiology and Biometry Department. The purpose of this report is to document this process. The primary sources of data were the Occupational Health History (OHH) files, maintained by the Hanford Environmental Health Foundation (HEHF) and including demographic data and job histories; the Hanford Mortality (HMO) files, also maintained by HEHF and including information on deaths of Hanford workers; the Occupational Radiation Exposure (ORE) files, maintained by PNL's Health Physics Department and containing data on external dosimetry; and a file of workers with confirmed internal depositions of radionuclides, also maintained by PNL's Health Physics Department. This report describes each of these files in detail, and also describes the many edits that were performed to address the consistency and accuracy of data within and between these files.

  19. U.S. EPA River Reach File Version 1.0

    Data.gov (United States)

    Kansas Data Access and Support Center — Reach File Version 1.0 (RF1) is a vector database of approximately 700,000 miles of streams and open waters in the conterminous United States. It is used extensively...

  20. Geophysical log database for the Floridan aquifer system and southeastern Coastal Plain aquifer system in Florida and parts of Georgia, Alabama, and South Carolina

    Science.gov (United States)

    Williams, Lester J.; Raines, Jessica E.; Lanning, Amanda E.

    2013-04-04

    A database of borehole geophysical logs and other types of data files was compiled as part of ongoing studies of water availability and assessment of brackish- and saline-water resources. The database contains 4,883 logs from 1,248 wells in Florida, Georgia, Alabama, and South Carolina, and from a limited number of offshore wells in the eastern Gulf of Mexico and the Atlantic Ocean. The logs can be accessed through a download directory organized by state and county for onshore wells and in a single directory for the offshore wells. A flat-file database is provided that lists the wells, their coordinates, and the file listings.

  1. Successful linking of the Society of Thoracic Surgeons Database to Social Security data to examine the accuracy of Society of Thoracic Surgeons mortality data.

    Science.gov (United States)

    Jacobs, Jeffrey P; O'Brien, Sean M; Shahian, David M; Edwards, Fred H; Badhwar, Vinay; Dokholyan, Rachel S; Sanchez, Juan A; Morales, David L; Prager, Richard L; Wright, Cameron D; Puskas, John D; Gammie, James S; Haan, Constance K; George, Kristopher M; Sheng, Shubin; Peterson, Eric D; Shewan, Cynthia M; Han, Jane M; Bongiorno, Phillip A; Yohe, Courtney; Williams, William G; Mayer, John E; Grover, Frederick L

    2013-04-01

    The Society of Thoracic Surgeons Adult Cardiac Surgery Database has been linked to the Social Security Death Master File to verify "life status" and evaluate long-term surgical outcomes. The objective of this study is to explore practical applications of the linkage of the Society of Thoracic Surgeons Adult Cardiac Surgery Database to the Social Security Death Master File, including the use of the Social Security Death Master File to examine the accuracy of the Society of Thoracic Surgeons 30-day mortality data. On January 1, 2008, the Society of Thoracic Surgeons Adult Cardiac Surgery Database began collecting Social Security numbers in its new version 2.61. This study includes all Society of Thoracic Surgeons Adult Cardiac Surgery Database records for operations with nonmissing Social Security numbers between January 1, 2008, and December 31, 2010, inclusive. To match records between the Society of Thoracic Surgeons Adult Cardiac Surgery Database and the Social Security Death Master File, we used a combined probabilistic and deterministic matching rule with reported high sensitivity and nearly perfect specificity. Between January 1, 2008, and December 31, 2010, the Society of Thoracic Surgeons Adult Cardiac Surgery Database collected data for 870,406 operations. Social Security numbers were available for 541,953 operations and unavailable for 328,453 operations. According to the Society of Thoracic Surgeons Adult Cardiac Surgery Database, the 30-day mortality rate was 17,757/541,953 = 3.3%. Linkage to the Social Security Death Master File identified 16,565 cases of suspected 30-day deaths (3.1%). Of these, 14,983 were recorded as 30-day deaths in the Society of Thoracic Surgeons database (relative sensitivity = 90.4%). Relative sensitivity was 98.8% (12,863/13,014) for suspected 30-day deaths occurring before discharge and 59.7% (2120/3551) for suspected 30-day deaths occurring after discharge. Linkage to the Social Security Death Master File confirms the accuracy of
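
    The sketch below illustrates the general shape of a combined deterministic/probabilistic matching rule of the kind the study describes; the fields, weights and threshold are invented for illustration and are not the study's actual rule.

        def match_score(a, b):
            """Score agreement between one registry record and one death-file record."""
            if a["ssn"] == b["ssn"]:                 # deterministic pass: exact SSN match
                return 1.0
            score = 0.0                              # probabilistic pass: weighted agreement
            score += 0.5 * (a["dob"] == b["dob"])
            score += 0.3 * (a["last_name"] == b["last_name"])
            score += 0.2 * (a["sex"] == b["sex"])
            return score

        rec_registry = {"ssn": "000-00-0001", "dob": "1950-01-01",
                        "last_name": "DOE", "sex": "M"}
        rec_deathfile = {"ssn": "000-00-0002", "dob": "1950-01-01",
                         "last_name": "DOE", "sex": "M"}
        print(match_score(rec_registry, rec_deathfile) >= 0.9)   # treat >= 0.9 as a match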

  2. TIGER/Line Shapefile, 2011, Series Information File for the 2010 Census Traffic Analysis Zone (TAZ) State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  3. PVFS 2000: An operational parallel file system for Beowulf

    Science.gov (United States)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components. BMI: BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently, TCP/IP and GM (Myrinet) modules have been implemented. Trove: Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  4. Development of EDFSRS: evaluated data files storage and retrieval system

    International Nuclear Information System (INIS)

    Hasegawa, Akira

    1985-07-01

    EDFSRS, the Evaluated Data Files Storage and Retrieval System, has been developed as a complete service system for the evaluated nuclear data files compiled in the three major formats: ENDF/B, UKNDL and KEDAK. The system is intended to give database administrators efficient loading and maintenance of evaluated nuclear data files, and to give their users efficient retrievals with both ease of use and high confidence. It can give users all of the information available in these three major formats. The system consists of more than fifteen independent programs and some 150 megabytes of data files and index files (the database) of the loaded data. In addition, it is designed to be operated in the on-line TSS (Time Sharing System) mode, so that users can get any information from their desktop terminals. This report is prepared as a reference manual for the EDFSRS. (author)

  5. From Passive to Active in the Design of External Radiotherapy Database at Oncology Institute

    Directory of Open Access Journals (Sweden)

    Valentin Ioan CERNEA

    2009-12-01

    Full Text Available The implementation during 1997 of a computer network at the Oncology Institute "Prof. Dr. Ion Chiricuţă" from Cluj-Napoca (OICN) opened the era of the patient electronic file, in which the presented database is included. The database, developed before 2000 and used until December 2006 in all reports of OICN, collected data from primary documents such as radiotherapy files. The present level of the computer network permits reversing the flow of data from computer to primary document: the primary document is now built electronically inside the computer first and, after validation, is printed as the familiar paper document. The paper discusses the issues concerning safety, functionality and access that derive from this change.

  6. An Implementation of a Database System for Book Loan in an ...

    African Journals Online (AJOL)

    (A Case Study of the Polytechnic, Ibadan Library) ... the deletion, updating and query operations. Reports can be generated using the report generator incorporated into the system. Key Words: Database, Book, loan, Academic, Library System, File ...

  7. Secure Distributed Databases Using Cryptography

    OpenAIRE

    Ion IVAN; Cristian TOMA

    2006-01-01

    Computational encryption is used intensively by database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over networks and replicated on different distributed systems. It is proved that a satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that the SQL - Structured Que...
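
    A minimal sketch of the cell-by-cell encryption idea, assuming the third-party pyca/cryptography package: each value is encrypted on its own, so the ciphertext is not tied to a particular table or computer. Key management is deliberately omitted here.

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()   # in practice, kept outside the database
        f = Fernet(key)

        row = {"name": "Alice", "salary": "52000"}
        # Encrypt every cell independently of its table and host.
        enc_row = {col: f.encrypt(val.encode()) for col, val in row.items()}
        dec_row = {col: f.decrypt(tok).decode() for col, tok in enc_row.items()}
        assert dec_row == row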

  8. Files synchronization from a large number of insertions and deletions

    Science.gov (United States)

    Ellappan, Vijayan; Kumari, Savera

    2017-11-01

    Synchronization between different versions of files is becoming a major issue that most applications are facing. To make these applications more efficient, an economical algorithm is developed from the previously used 'File Loading Algorithm'. I extend this algorithm in three ways: first, dealing with non-binary files; second, generating a backup for uploaded files; and lastly, synchronizing each file under insertions and deletions. A user can reconstruct a file from the former file while minimizing the error, and interactive communication is provided without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created, stored in a backup database, and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with a suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to the local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
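
    For intuition, the following toy sketch synchronizes at the level of fixed-size blocks: the receiver hashes the blocks of its file Y, and the sender transmits only the blocks of X whose hashes the receiver lacks. Production synchronizers (e.g., rsync) use rolling checksums to survive insertions that shift block boundaries; this sketch does not.

        import hashlib

        BLOCK = 4096

        def block_hashes(data):
            return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
                    for i in range(0, len(data), BLOCK)]

        def delta(new, old_hashes):
            """Blocks of `new` that the holder of `old_hashes` does not have."""
            known = set(old_hashes)
            return [(i, new[i:i + BLOCK])
                    for i in range(0, len(new), BLOCK)
                    if hashlib.sha1(new[i:i + BLOCK]).hexdigest() not in known]

        old = b"a" * 10000
        new = old[:4096] + b"CHANGED" + old[4096:]
        print(len(delta(new, block_hashes(old))), "blocks must be sent")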

  9. Pursuit of a scalable high performance multi-petabyte database

    CERN Document Server

    Hanushevsky, A

    1999-01-01

    When the BaBar experiment at the Stanford Linear Accelerator Center starts in April 1999, it will generate approximately 200 TB/year of data at a rate of 10 MB/sec for 10 years. A mere six years later, CERN, the European Laboratory for Particle Physics, will start an experiment whose data storage requirements are two orders of magnitude larger. In both experiments, all of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). The quantity and rate at which the data is produced requires the use of a high performance hierarchical mass storage system in place of a standard Unix file system. Furthermore, the distributed nature of the experiment, involving scientists from 80 Institutions in 10 countries, also requires an extended security infrastructure not commonly found in standard Unix file systems. The combination of challenges that must be overcome in order to effectively deal with a multi-petabyte object oriented database is substantial. Our particular approach...

  10. REDIdb: the RNA editing database.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed with the aim of cataloguing RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat file containing a description of the main characteristics of the entry, a feature table with the editing events and related details, and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities that either graphically display genomic or cDNA sequences or show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be submitted directly to REDIdb after a user-specific registration to obtain authorized secure access. This first version of the REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  11. Computerized index for teaching files

    International Nuclear Information System (INIS)

    Bramble, J.M.

    1989-01-01

    A computerized index can be used to retrieve cases from a teaching file that have radiographic findings similar to those of an unknown case. The probability that a user will review cases with a correct diagnosis was estimated using the radiographic findings of arthritis in hand radiographs of 110 cases from a teaching file. The nearest-neighbor classification algorithm was used as a computer index to the 110 cases of arthritis. Each case was treated as an unknown and input to the computer index. The accuracy of the computer index in retrieving cases with the same diagnosis (including rheumatoid arthritis, gout, psoriatic arthritis, inflammatory osteoarthritis, and pyrophosphate arthropathy) was measured. A Bayes classifier algorithm was also tested on the same database. The estimated accuracy of the nearest-neighbor algorithm was 83%; by comparison, the estimated accuracy of the Bayes classifier algorithm was 78%. Conclusions: A computerized index to a teaching file based on the nearest-neighbor algorithm should allow the user to review cases with the correct diagnosis of an unknown case by entering the findings of the unknown case.
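
    A minimal sketch of the nearest-neighbor retrieval idea: each filed case is a binary vector of radiographic findings, and the unknown case pulls back the most similar cases by Hamming distance. The findings and diagnoses below are invented placeholders, not the study's actual feature set.

        def hamming(a, b):
            """Count the findings on which two cases disagree."""
            return sum(x != y for x, y in zip(a, b))

        teaching_file = [
            ((1, 1, 0, 0, 1), "rheumatoid arthritis"),
            ((0, 1, 1, 0, 0), "gout"),
            ((1, 0, 1, 1, 0), "psoriatic arthritis"),
        ]

        def nearest(unknown, k=1):
            # Return the k filed cases closest to the unknown case.
            return sorted(teaching_file, key=lambda c: hamming(c[0], unknown))[:k]

        print(nearest((1, 1, 0, 1, 1)))   # -> closest filed case(s) to review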

  12. The virtual microscopy database-sharing digital microscope images for research and education.

    Science.gov (United States)

    Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael

    2018-02-14

    Over the last 20 years, virtual microscopy has become the predominant mode of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole-slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries. This leaves many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that allows researchers and educators easy access to a large repository of virtual histology and pathology image files. With support from the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. Virtual Microscopy Database users can upload their own collections of virtual slide files, as well as view and download image files deposited by other VMD clients for their own non-profit educational and research purposes. Anat Sci Educ. © 2018 American Association of Anatomists.

  13. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachinformationszentrum (FIZ, Karlsruhe). In application, the PDF has been used as an indispensable tool in phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in the complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, deriving d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from the PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and

  14. Simple re-instantiation of small databases using cloud computing.

    Science.gov (United States)

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  15. FORTRAN data files transference from VAX/VMS to ALPHA/UNIX

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E.; Milligen, B. Ph van [CIEMAT (Spain)

    1997-09-01

    Several tools have been developed to access the TJ-I/TJ-IU databases, which currently reside on VAX/VMS servers, from the TJ-II Data Acquisition System DEC ALPHA 8400 server. The TJ-I/TJ-IU databases are not homogeneous and contain several types of data files, namely SADE, CAMAC and FORTRAN unformatted files. The tools presented in this report allow one to transfer CAMAC files and the FORTRAN unformatted files defined herein from a VAX/VMS server, for data manipulation on the ALPHA/Digital UNIX server. (Author)

  16. Zebrafish Database: Customizable, Free, and Open-Source Solution for Facility Management.

    Science.gov (United States)

    Yakulov, Toma Antonov; Walz, Gerd

    2015-12-01

    Zebrafish Database is a web-based customizable database solution, which can be easily adapted to serve both single laboratories and facilities housing thousands of zebrafish lines. The database allows the users to keep track of details regarding the various genomic features, zebrafish lines, zebrafish batches, and their respective locations. Advanced search and reporting options are available. Unique features are the ability to upload files and images that are associated with the respective records and an integrated calendar component that supports multiple calendars and categories. Built on the basis of the Joomla content management system, the Zebrafish Database is easily extendable without the need for advanced programming skills.

  17. Protein - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file name: tp_atlas_protein.zip; File URL: ftp://ftp.biosciencedbc.jp/archive/tp_atlas/LATEST/...

  18. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D and TEAPOT. However, once a design has been completed, it is entered into a uniform schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ASCII input files appropriate to the above-mentioned accelerator design programs. In addition, it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser.

  19. The HITRAN 2004 molecular spectroscopic database

    Energy Technology Data Exchange (ETDEWEB)

    Rothman, L.S. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States)]. E-mail: lrothman@cfa.harvard.edu; Jacquemart, D. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States); Barbe, A. [Universite de Reims-Champagne-Ardenne, Groupe de Spectrometrie Moleculaire et Atmospherique, 51062 Reims (France)] (and others)

    2005-12-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing.
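
    For orientation, the sketch below slices the leading fields out of one 160-character record of the 2004 line-by-line format. The slice positions follow the published .par layout (molecule, isotopologue, wavenumber, intensity, Einstein A, air- and self-broadened widths, lower-state energy), but they should be verified against the HITRAN documentation before use.

        def parse_par_line(line):
            """Slice the leading fields of one 160-character HITRAN 2004 record."""
            return {
                "molecule_id":  int(line[0:2]),      # I2
                "isotopologue": int(line[2:3]),      # I1
                "wavenumber":   float(line[3:15]),   # F12.6, cm-1
                "intensity":    float(line[15:25]),  # E10.3
                "einstein_a":   float(line[25:35]),  # E10.3, new in the 2004 edition
                "gamma_air":    float(line[35:40]),  # F5.4
                "gamma_self":   float(line[40:45]),  # F5.4
                "e_lower":      float(line[45:55]),  # F10.4, cm-1
            }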

  20. The HITRAN 2004 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Jacquemart, D.; Barbe, A.

    2005-01-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing.

  1. Update of the androgen receptor gene mutations database.

    Science.gov (United States)

    Gottlieb, B; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1999-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 309 to 374 during the past year. We have expanded the database by adding information on AR-interacting proteins, and we have improved the database by identifying those mutation entries that have been updated. Mutations of unknown significance have now been reported in both the 5' and 3' untranslated regions of the AR gene, and in individuals who are somatic mosaics constitutionally. In addition, single nucleotide polymorphisms, including silent mutations, have been discovered in normal individuals and in individuals with male infertility. A mutation hotspot associated with prostatic cancer has been identified in exon 5. The database is available on the internet (http://www.mcgill.ca/androgendb/), from EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca). Copyright 1999 Wiley-Liss, Inc.

  2. Database applications in high energy physics

    International Nuclear Information System (INIS)

    Jeffery, K.G.

    1982-01-01

    High Energy physicists were using computers to process and store their data early in the history of computing. They addressed problems of memory management, job control, job generation, data standards, file conventions, multiple simultaneous usage, tape file handling and data management earlier than, or at the same time as, the manufacturers of computing equipment. The HEP community have their own suites of programs for these functions, and are now turning their attention to the possibility of replacing some of the functional components of their 'homebrew' systems with more widely used software and/or hardware. High on the 'shopping list' for replacement is data management. ECFA Working Group 11 has been working on this problem. This paper reviews the characteristics of existing HEP systems and existing database systems and discusses the way forward. (orig.)

  3. The version control service for ATLAS data acquisition configuration files

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files [1]. The total number of files exceeds 1300 and they are updated by many system experts. In the past, from time to time after such updates, we experienced problems caused by XML syntax errors or by an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who made the modification causing a problem or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modifications and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications pro...

  4. The JANA calibrations and conditions database API

    International Nuclear Information System (INIS)

    Lawrence, David

    2010-01-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases, to web services, to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.
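
    The record above describes a calibration API whose backend (database, web service, or flat file) is hidden behind a common base class, so that callers fetch constants in a single line. The Python sketch below illustrates that pluggable-backend design in general terms only; it is not JANA's actual C++ API, and all class and method names here are hypothetical.

    ```python
    import json
    from abc import ABC, abstractmethod

    class Calibration(ABC):
        """Hypothetical stand-in for a JCalibration-style base class."""
        @abstractmethod
        def get(self, namepath: str) -> list:
            """Return calibration constants for a given name path."""

    class FlatFileCalibration(Calibration):
        """Backend reading constants from a JSON flat file."""
        def __init__(self, path):
            with open(path) as f:
                self._data = json.load(f)
        def get(self, namepath):
            return self._data[namepath]

    class InMemoryCalibration(Calibration):
        """In-memory backend, standing in here for a web-service cache."""
        def __init__(self, data):
            self._data = data
        def get(self, namepath):
            return self._data[namepath]

    # The caller retrieves constants in one line; which transport is used is
    # implied by whichever backend the framework constructed for this run.
    calib: Calibration = InMemoryCalibration({"ECAL/gains": [1.02, 0.98, 1.01]})
    print(calib.get("ECAL/gains"))
    ```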

  5. The JANA calibrations and conditions database API

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, David, E-mail: davidl@jlab.or [12000 Jefferson Ave., Suite 8, Newport News, VA 23601 (United States)

    2010-04-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases, to web services, to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  6. (reprocessed)CAGE_peaks_expression - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available FANTOM5 reprocessed CAGE_peaks_expression data files. File URL: ...sciencedbc.jp/archive/fantom5/datafiles/reprocessed/hg38_latest/extra/CAGE_peaks_expression/ File size: 3.3 ... File URL: ...tp.biosciencedbc.jp/archive/fantom5/datafiles/reprocessed/mm10_latest/extra/CAGE_peaks_expression/ File size...

  7. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    Science.gov (United States)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs each carrying out part of the analysis and design task. This has been carried out previously by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, data of interest is described using the XML markup language that allows the data to be stored in a text-string. Software to transform output data of a task into an XML-string and software to read an XML string and extract all or a portion of the data needed for another application is used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications a standard spreadsheet program, a relational database program, and a conventional dialog and display program to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions have been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces and turbine-blade data produced by an independent blade design program (UD0300).

  8. Work orders management based on XML file in printing

    Directory of Open Access Journals (Sweden)

    Ran Peipei

    2018-01-01

    Full Text Available The Extensible Markup Language (XML) technology is increasingly used in various fields; using it to express work-order information improves the efficiency of both management and production. In this paper we introduce a technique for managing work orders and generate an XML file through the Document Object Model (DOM) technology. When the information is needed for production, the XML file is parsed and its contents are saved in a database, which makes the information easy to preserve and modify.
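
    The workflow sketched in the abstract, expressing a work order as XML, parsing it with DOM, and saving the fields to a database for production, can be illustrated in a few lines of Python. The element names (order, id, product, quantity) are assumed for illustration and are not the authors' actual schema.

    ```python
    import sqlite3
    from xml.dom.minidom import parseString

    # A work order expressed as XML; element names are invented for illustration.
    xml_text = "<order><id>42</id><product>poster</product><quantity>500</quantity></order>"

    # Parse the XML with DOM and pull out the field values.
    doc = parseString(xml_text)
    def text_of(tag):
        return doc.getElementsByTagName(tag)[0].firstChild.data

    # Save the parsed information in a database for later use in production.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE work_orders (id INTEGER, product TEXT, quantity INTEGER)")
    conn.execute("INSERT INTO work_orders VALUES (?, ?, ?)",
                 (int(text_of("id")), text_of("product"), int(text_of("quantity"))))
    print(conn.execute("SELECT * FROM work_orders").fetchall())
    ```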

  9. Array patterns and clones - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMOS array patterns and clones. Data name: Array patterns and clones. DOI: 10.18908/lsdba.nbdc00194-002. Description of data contents: static files of array patterns and cDNA clones. Data file: F...h rice cDNA comprises a pair of glass slides. The microarray patterns are shown i...

  10. Scaling up ATLAS Database Release Technology for the LHC Long Run

    International Nuclear Information System (INIS)

    Borodin, M; Nevski, P; Vaniachine, A

    2011-01-01

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology replicating databases in files. For years Database Release technology assured scalable database access for Monte Carlo production on the Grid. Since the previous CHEP, Database Release technology has been used successfully in ATLAS data reprocessing on the Grid. A frozen Conditions DB snapshot guarantees reproducibility and transactional consistency, isolating Grid data processing tasks from continuous conditions updates at the 'live' Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid a linear dependency of the build time on the length of the LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to the proven master-worker architecture used in Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.
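
    The parallelized build workflow follows a master-worker pattern: a master splits the work (for example, the list of database slices to snapshot) into independent chunks, and workers build the pieces concurrently. A generic Python sketch of that pattern, with invented task names rather than the actual ATLAS workflow, might look like this:

    ```python
    from multiprocessing import Pool

    def build_slice(run_range):
        """Worker: build one piece of the database release (simulated)."""
        lo, hi = run_range
        return f"conditions_{lo}_{hi}.db"

    if __name__ == "__main__":
        # Master: split the data-taking period into independent chunks so the
        # build time no longer grows linearly with its length.
        chunks = [(i, i + 999) for i in range(0, 10000, 1000)]
        with Pool(processes=4) as pool:
            artifacts = pool.map(build_slice, chunks)
        print(artifacts)
    ```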

  11. TIGER/Line Shapefile, 2010, Series Information File for the 2010 Census Block State-based Shapefile with Housing and Population Data

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  12. Main - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available TP Atlas main data file. File name: tp_atlas_en.zip File URL: ftp://ftp.biosciencedbc.jp/archive/tp_atlas/LATEST/...

  13. NURE [National Uranium Resource Evaluation] HSSR [Hydrogeochemical and Stream Sediment Reconnaissance] Introduction to Data Files, United States: Volume 1

    International Nuclear Information System (INIS)

    1985-01-01

    One product of the Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program, a component of the National Uranium Resource Evaluation (NURE), is a database of interest to scientists and professionals in the academic, business, industrial, and governmental communities. This database contains individual records for water and sediment samples taken during the reconnaissance survey of the entire United States, excluding Hawaii. The purpose of this report is to describe the NURE HSSR data by highlighting its key characteristics and providing user guides to the data. A companion report, ''A Technical History of the NURE HSSR Program,'' summarizes those aspects of the HSSR Program which are likely to be important in helping users understand the database. Each record on the database contains varying information on general field or site characteristics and analytical results for elemental concentrations in the sample; the database is potentially valuable for describing the geochemistry of specified locations and addressing issues or questions in other areas such as water quality, geoexploration, and hydrologic studies. This report is organized in twelve volumes. This first volume presents a brief history of the NURE HSSR program, a description of the data files produced by ISP, a Users' Dictionary for the Analysis File, and graphs showing the distribution of elemental concentrations for sediments at the US level. Volumes 2 through 12 are comprised of Data Summary Tables displaying the percentile distribution of the elemental concentrations on the file. Volume 2 contains data for the individual states. Volumes 3 through 12 contain data for the 1° x 2° quadrangles, organized into eleven regional files; the data for the two regional files for Alaska (North and South) are bound together as Volume 12.

  14. Combined use of chemical, biochemical and physiological variables in mussels for the assessment of marine pollution along the N-NW Spanish coast.

    Science.gov (United States)

    Bellas, Juan; Albentosa, Marina; Vidal-Liñán, Leticia; Besada, Victoria; Franco, M Ángeles; Fumega, José; González-Quijano, Amelia; Viñas, Lucía; Beiras, Ricardo

    2014-05-01

    This study undertakes an overall assessment of pollution in a large region (over 2500 km of coastline) of the N-NW Spanish coast, by combining the use of biochemical (AChE, GST, GPx) and physiological (SFG) responses to pollution, with chemical analyses in wild mussel populations (Mytilus galloprovincialis). The application of chemical analysis and biological techniques identified polluted sites and quantified the level of toxicity. High levels of pollutants were found in mussel populations located close to major cities and industrialized areas and, in general, average concentrations were higher in the Cantabrian than in the Iberian Atlantic coast. AChE activities ranged between 5.8 and 27.1 nmol/min/mg prot, showing inhibition in 12 sampling sites, according to available ecotoxicological criteria. GST activities ranged between 29.5 and 112.7 nmol/min/mg prot, and extreme variability was observed in GPx, showing activities between 2.6 and 64.5 nmol/min/mg prot. Regarding SFG, only 5 sites showed 'moderate stress' (SFG value below 20 J/g/h), and most sites presented a 'high potential growth' (>35 J/g/h) corresponding to a 'healthy state'. Multivariate statistical techniques applied to the chemical and biological data identified PCBs, organochlorine pesticides and BDEs as mainly responsible for the observed toxicity. However, the alteration of biological responses caused by pollutants seems to be, in general, masked by biological variables, namely age and mussel condition, which have an effect on the mussels' response to pollutant exposure. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.

    Science.gov (United States)

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F

    2007-07-27

    We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy to use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
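
    Since ExplorEnz distributes its contents as SQL dumps, simple lookups against a local copy are easy to script. The sketch below only illustrates that kind of query: it uses SQLite in place of MySQL and an invented one-table schema, not the actual ExplorEnz table layout.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    # Invented, simplified schema; the real ExplorEnz dump defines its own tables.
    conn.execute("CREATE TABLE enzyme (ec_number TEXT, accepted_name TEXT)")
    conn.execute("INSERT INTO enzyme VALUES ('1.1.1.1', 'alcohol dehydrogenase')")

    # Look up an enzyme by EC number, as the web query interface would.
    row = conn.execute(
        "SELECT accepted_name FROM enzyme WHERE ec_number = ?", ("1.1.1.1",)
    ).fetchone()
    print(row[0])
    ```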

  16. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  17. Automatic generation of configuration files for a distributed control system

    CERN Document Server

    Cupérus, J

    1995-01-01

    The CERN PS accelerator complex is composed of 9 interlinked accelerators for production and acceleration of various kinds of particles. The hardware is controlled through CAMAC, VME, G64, and GPIB modules, which in turn are controlled by more than 100 microprocessors in VME crates. To produce startup files for all these microprocessors, with the correct drivers, programs and parameters in each of them, is quite a challenge. The problem is solved by generating the startup files automatically from the description of the control system in a relational database. The generation process detects inconsistencies and incomplete information. Included in the startup files are data which are formally comments, but can be interpreted for run-time checking of interface modules and program activity.
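
    Generating startup files from a relational description of a control system boils down to querying the per-crate records, rendering each through a template, and flagging incomplete or inconsistent rows. The following Python sketch illustrates the idea under an invented schema and file format; it is not the CERN PS tooling.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE crate (name TEXT, driver TEXT, program TEXT);
    INSERT INTO crate VALUES ('vme01', 'camac_drv', 'beam_ctl');
    INSERT INTO crate VALUES ('vme02', NULL, 'rf_ctl');  -- incomplete row
    """)

    for name, driver, program in conn.execute("SELECT * FROM crate"):
        # Detect incomplete information before generating anything.
        if driver is None or program is None:
            print(f"# {name}: incomplete description, startup file not generated")
            continue
        # Render a startup file; the format here is invented for illustration.
        startup = f"load {driver}\nrun {program}\n# generated from database\n"
        print(f"--- {name}.startup ---\n{startup}")
    ```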

  18. Switching the Fermilab Accelerator Control System to a relational database

    International Nuclear Information System (INIS)

    Shtirbu, S.

    1993-01-01

    The accelerator control system (''ACNET'') at Fermilab is using a made-in-house, Assembly language, database. The database holds device information, which is mostly used for finding out how to read/set devices and how to interpret alarms. This is a very efficient implementation, but it lacks the needed flexibility and forces applications to store data in private/shared files. This database is being replaced by an off-the-shelf relational database (Sybase). The major constraints on switching are the necessity to maintain/improve response time and to minimize changes to existing applications. Innovative methods are used to help achieve the required performance, and a layer seven gateway simulates the old database for existing programs. The new database is running on a DEC ALPHA/VMS platform, and provides better performance. The switch is also exposing problems with the data currently stored in the database, and is helping in cleaning up erroneous data. The flexibility of the new relational database is going to facilitate many new applications in the future (e.g. a 3D presentation of device location). The new database is expected to fully replace the old database during this summer's shutdown.

  19. Developing a stone database for clinical practice.

    Science.gov (United States)

    Turney, Benjamin W; Noble, Jeremy G; Reynard, John M

    2011-09-01

    Our objective was to design an intranet-based database to streamline stone patient management and data collection. The system developers used a rapid development approach that removed the need for laborious and unnecessary documentation, instead focusing on producing a rapid prototype that could then be altered iteratively. By using open source development software and website best practice, the development cost was kept very low in comparison with traditional clinical applications. Information about each patient episode can be entered via a user-friendly interface. The bespoke electronic stone database removes the need for handwritten notes, dictation, and typing. From the database, files may be automatically generated for clinic letters, operation notes, and letters to family doctors. These may be printed or e-mailed from the database. Data may be easily exported for audits, coding, and research. Data collection remains central to medical practice, to improve patient safety, to analyze medical and surgical outcomes, and to evaluate emerging treatments. Establishing prospective data collection is crucial to this process. In the current era, we have the opportunity to embrace available technology to facilitate this process. The database template could be modified for use in other clinics. The database that we have designed helps to provide a modern and efficient clinical stone service.
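
    Producing clinic letters automatically from structured episode records is essentially template filling from database fields. A toy Python sketch, with invented field names rather than the authors' actual schema, shows the idea:

    ```python
    # Invented episode record; a real system would read this from the database.
    episode = {"patient": "A. Smith", "stone_site": "left renal pelvis",
               "procedure": "flexible ureteroscopy", "gp": "Dr. Jones"}

    LETTER = (
        "Dear {gp},\n\n"
        "{patient} was reviewed in the stone clinic. A stone in the "
        "{stone_site} was treated by {procedure}.\n"
    )

    # Generate the letter file; it could equally be e-mailed from the database.
    with open("clinic_letter.txt", "w") as f:
        f.write(LETTER.format(**episode))
    ```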

  20. Distributed PACS using distributed file system with hierarchical meta data servers.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs existing in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into meta data and image data, which are stored individually. Because the entire file does not have to be read for every operation, tasks such as finding files or changing titles can be performed at high speed. At the same time, since a distributed file system is used, access to the image files is also fast and highly fault tolerant. A further strength of the proposed system is the simplicity of integration: to combine several PACSs, only the meta data servers need to be integrated. The system also scales file access with the number and size of the files. On the other hand, because the meta data is centralized, the meta data server is the weak point of the system. To address this, hierarchical meta data servers are introduced, which increases both fault tolerance and the scalability of file access. To evaluate the proposal, a prototype system using Gfarm was implemented, and its file search times were compared with those of NFS.
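
    The core design, meta data kept in a (possibly hierarchical) meta data server and bulk image data kept in a distributed file system, can be sketched as follows. SQLite stands in for the meta data server and a local directory for Gfarm; the field names are assumed and real DICOM parsing is omitted.

    ```python
    import hashlib, os, sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE studies (patient_id TEXT, title TEXT, image_path TEXT)")

    def store(meta: dict, image_bytes: bytes, image_dir="images"):
        """Store metadata in the 'meta data server', pixels in the 'file system'."""
        os.makedirs(image_dir, exist_ok=True)
        # Content-addressed path in the (stand-in) distributed file system.
        digest = hashlib.sha256(image_bytes).hexdigest()
        path = os.path.join(image_dir, digest)
        with open(path, "wb") as f:
            f.write(image_bytes)
        conn.execute("INSERT INTO studies VALUES (?, ?, ?)",
                     (meta["patient_id"], meta["title"], path))

    store({"patient_id": "P001", "title": "chest CT"}, b"\x00" * 512)

    # Operations such as finding files or changing titles touch only the small
    # metadata table, never the bulk image data.
    conn.execute("UPDATE studies SET title = ? WHERE patient_id = ?",
                 ("chest CT, revised", "P001"))
    print(conn.execute("SELECT * FROM studies").fetchall())
    ```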

  1. Clone - ClEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ClEST clone data. File URL: ftp://ftp.biosciencedbc.jp/archive/clest/LATEST/clest_clone.zip File size: 660 KB Simple search URL: ...

  2. HATCHES - a thermodynamic database and management system

    International Nuclear Information System (INIS)

    Cross, J.E.; Ewart, F.T.

    1990-03-01

    The Nirex Safety Assessment Research Programme has been compiling the thermodynamic data necessary to allow simulations of the aqueous behaviour of the elements important to radioactive waste disposal to be made. These data have been obtained from the literature, when available, and validated for the conditions of interest by experiment. In order to maintain these data in an accessible form and to satisfy quality assurance on all data used for assessments, a database has been constructed which resides on a personal computer operating under MS-DOS using the Ashton-Tate dBase III program. This database contains all the input data fields required by the PHREEQE program and, in addition, a body of text which describes the source of the data and the derivation of the PHREEQE input parameters from the source data. The HATCHES system consists of this database, a suite of programs to facilitate the searching and listing of data and a further suite of programs to convert the dBase III files to PHREEQE database format. (Author)

  3. The plasma movie database system for JT-60

    International Nuclear Information System (INIS)

    Sueoka, Michiharu; Kawamata, Yoichi; Kurihara, Kenichi; Seki, Akiyuki

    2007-01-01

    The real-time plasma movie with computer graphics (CG) of the plasma shape is one of the most effective ways to know what discharges have been made in the experiment. For easy use of the movie in data analysis, we have developed the plasma movie database system (PMDS), which automatically records plasma movies according to the JT-60 discharge sequence and transfers the movie files on request from the web site. Each file is compressed to about 8 MB/shot, small enough to be transferred within a few seconds through the local area network (LAN). In this report, we describe the developed system from a technical point of view, and discuss a future plan in the light of advancing video technology.

  4. TIGER/Line Shapefile, 2013, county, Clark County, NV, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  5. TIGER/Line Shapefile, 2013, county, Iowa County, IA, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  6. TIGER/Line Shapefile, 2013, county, Weld County, CO, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  7. TIGER/Line Shapefile, 2013, county, Dutchess County, NY, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  8. TIGER/Line Shapefile, 2013, county, Walker County, TX, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  9. TIGER/Line Shapefile, 2013, Series Information File for the Current All Lines Shapefiles

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  10. TIGER/Line Shapefile, 2013, county, Ballard County, KY, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  11. TIGER/Line Shapefile, 2013, county, Houston County, MN, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  12. TIGER/Line Shapefile, 2017, nation, U.S., Topological Faces-Military Installation Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  13. TIGER/Line Shapefile, 2013, county, Appling County, GA, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  14. TIGER/Line Shapefile, 2013, county, Nantucket County, MA, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  15. TIGER/Line Shapefile, 2013, county, Jackson County, OR, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  16. TIGER/Line Shapefile, 2016, nation, U.S., Topological Faces-Military Installation Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  17. TIGER/Line Shapefile, 2013, county, Lafayette County, MS, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  18. TIGER/Line Shapefile, 2013, county, Warren County, MS, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  19. TIGER/Line Shapefile, 2013, county, Clay County, MS, Current Address Ranges Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  20. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  1. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  2. A Database of Herbaceous Vegetation Responses to Elevated Atmospheric CO₂

    Energy Technology Data Exchange (ETDEWEB)

    Jones, M.H.

    1999-11-24

    To perform a statistically rigorous meta-analysis of research results on the response by herbaceous vegetation to increased atmospheric CO₂ levels, a multiparameter database of responses was compiled from the published literature. Seventy-eight independent CO₂-enrichment studies, covering 53 species and 26 response parameters, reported mean response, sample size, and variance of the response (either as standard deviation or standard error). An additional 43 studies, covering 25 species and 6 response parameters, did not report variances. This numeric data package accompanies the Carbon Dioxide Information Analysis Center's (CDIAC's) NDP-072, which provides similar information for woody vegetation. This numeric data package contains a 30-field data set of CO₂-exposure experiment responses by herbaceous plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO₂-exposure experiments and specific comments relevant to the data in the data sets, and this documentation file (which includes SAS® and Fortran codes to read the ASCII data file). The data files and this documentation are available without charge on a variety of media and via the Internet from CDIAC.

  3. Database of ligand-induced domain movements in enzymes

    Directory of Open Access Journals (Sweden)

    Hayward Steven

    2009-03-01

    Full Text Available Abstract Background Conformational change induced by the binding of a substrate or coenzyme is a poorly understood stage in the process of enzyme catalysed reactions. For enzymes that exhibit a domain movement, the conformational change can be clearly characterized, and therefore the opportunity exists to gain an understanding of the mechanisms involved. The non-redundant database of protein domain movements contains examples of ligand-induced domain movements in enzymes, but this valuable data has remained unexploited. Description The domain movements in the non-redundant database of protein domain movements are those found by applying the DynDom program to pairs of crystallographic structures contained in Protein Data Bank files. By cross-checking, for each pair of structures, the ligands in their Protein Data Bank files against the KEGG-LIGAND database, and by using methods that search for ligands that contact the enzyme in one conformation but not the other, the non-redundant database of protein domain movements was refined down to a set of 203 enzymes where a domain movement is apparently triggered by the binding of a functional ligand. For these cases, ligand binding information, including hydrogen bonds and salt-bridges between the ligand and specific residues on the enzyme, is presented in the context of dynamical information such as the regions that form the dynamic domains, the hinge bending residues, and the hinge axes. Conclusion The presentation at a single website of data on interactions between a ligand and specific residues on the enzyme, alongside data on the movement that these interactions induce, should lead to new insights into the mechanisms of these enzymes in particular, and help in trying to understand the general process of ligand-induced domain closure in enzymes. The website can be found at: http://www.cmp.uea.ac.uk/dyndom/enzymeList.do

  4. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemes (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment in Q1 2012, and we outline plans for future work in this area.

  5. Geoscientific (GEO) database of the Andra Meuse / Haute-Marne research center

    International Nuclear Information System (INIS)

    Tabani, P.; Hemet, P.; Hermand, G.; Delay, J.; Auriere, C.

    2010-01-01

    Document available in extended abstract form only. The GEO database (geo-scientific database of the Meuse/Haute-Marne Center) is a tool developed by Andra with a view to grouping, in a secure computerized form, all data related to the acquisition of in situ and laboratory measurements made on solid and fluid samples. This database has three main functions: - Acquisition and management of data and computer files related to geological, geomechanical, hydrogeological and geochemical measurements on solid and fluid samples and in situ measurements (logging, on-sample measurements, geological logs, etc). - Consultation by the staff, through Andra's intranet network, for selective viewing of data linked to a borehole and/or a sample and for making computations and graphs on sets of laboratory measurements related to a sample. - Physical management of fluid and solid samples stored in a 'core library' in order to localize a sample, follow up its movement out of the 'core library' to an organization, and carry out regular inventories. The GEO database is a relational Oracle database. It is installed on a data server which stores the information and manages the users' transactions. Users can consult, download and exploit data from any computer connected to the Andra network or the Internet. Management of the access rights is made through a login/password. Four geo-scientific applications are linked to the GEO database; they are: - The Geosciences portal: The Geosciences portal is a web Intranet application accessible from the ANDRA network. It does not require a particular installation on the client and is accessible through an Internet navigator. A SQL Server Express database manages the users and access rights of the application. This application is used for the acquisition of hydrogeological and geochemical data collected in the field and on fluid samples, as well as data related to scientific work carried out at surface level or in drifts

  6. Digitizing Olin Eggen's Card Database

    Science.gov (United States)

    Crast, J.; Silvis, G.

    2017-06-01

    The goal of the Eggen Card Database Project is to recover as many of the photometric observations from Olin Eggen's Card Database as possible and preserve these observations, in digital forms that are accessible by anyone. Any observations of interest to the AAVSO will be added to the AAVSO International Database (AID). Given to the AAVSO on long-term loan by the Cerro Tololo Inter-American Observatory, the database is a collection of over 78,000 index cards holding all Eggen's observations made between 1960 and 1990. The cards were electronically scanned and the resulting 108,000 card images have been published as a series of 2,216 PDF files, which are available from the AAVSO web site. The same images are also stored in an AAVSO online database where they are indexed by star name and card content. These images can be viewed using the eggen card portal online tool. Eggen made observations using filter bands from five different photometric systems. He documented these observations using 15 different data recording formats. Each format represents a combination of filter magnitudes and color indexes. These observations are being transcribed onto spreadsheets, from which observations of value to the AAVSO are added to the AID. A total of 506 U, B, V, R, and I observations were added to the AID for the variable stars S Car and l Car. We would like the reader to search through the card database using the eggen card portal for stars of particular interest. If such stars are found and retrieval of the observations is desired, e-mail the authors, and we will be happy to help retrieve those data for the reader.

  7. TIGER/Line Shapefile, 2013, Series Information File for the All Roads County-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  8. TIGER/Line Shapefile, 2013, Series Information File for the Current Place State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  9. TIGER/Line Shapefile, 2012, Series Information File for the Current Secondary School Districts Shapefiles

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  10. Series Information File for the 2017 TIGER/Line Shapefile, All Roads County-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  11. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    International Nuclear Information System (INIS)

    2011-01-01

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories (''nodule≥3 mm,'' ''nodule<3 mm,'' and ''non-nodule≥3 mm''). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked ''nodule'' by at least one radiologist. 2669 of these lesions were marked ''nodule≥3 mm'' by at least one radiologist, of which 928 (34.7%) received such marks from all
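
    Each LIDC/IDRI case pairs a CT scan with an XML annotation file recording the radiologists' marks. The Python sketch below shows how such marks could be read with ElementTree; the element and attribute names here are simplified placeholders, not the actual LIDC XML schema.

    ```python
    import xml.etree.ElementTree as ET

    # Simplified placeholder for an LIDC-style annotation file; the real
    # schema is considerably more elaborate and uses its own namespaces.
    xml_text = """
    <readingSession reader="radiologist-1">
      <mark category="nodule&gt;=3mm" x="102" y="240" slice="57"/>
      <mark category="non-nodule&gt;=3mm" x="88" y="199" slice="31"/>
    </readingSession>
    """

    root = ET.fromstring(xml_text)
    for mark in root.iter("mark"):
        print(root.get("reader"), mark.get("category"),
              (mark.get("x"), mark.get("y"), mark.get("slice")))
    ```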

  12. Nuclear reaction database on Meme Media

    Energy Technology Data Exchange (ETDEWEB)

    Ohbayashi, Yoshihide; Masui, Hiroshi [Meme Media Laboratory, Hokkaido University, Sapporo, Hokkaido (Japan); Aoyama, Shigeyoshi [Information Processing Center, Kitami Institute of Technology, Kitami, Hokkaido (Japan); Kato, Kiyoshi [Division of Physics, Graduate School of Science, Hokkaido Univ., Sapporo, Hokkaido (Japan); Chiba, Masaki [Division of Social Information, Sapporo Gakuin University, Ebetsu, Hokkaido (Japan)

    2000-03-01

    We have developed the system of charged particle nuclear reaction data (CPND) on the IntelligentPad architecture. We called the system CONTIP, which is an abbreviation of 'Creative, Cooperative and Cultural Objects for Nuclear data and Tools'. NRDF (Nuclear Reaction Data File), which is a kind of CPND compilation, is applied as an application example. Although CONTIP is currently applied to NRDF, the framework can be generalized to use the other nuclear database. We will develop CONTIP to give the framework for effective utilization of nuclear data. (author)

  13. Nuclear reaction database on Meme Media

    International Nuclear Information System (INIS)

    Ohbayashi, Yoshihide; Masui, Hiroshi; Aoyama, Shigeyoshi; Kato, Kiyoshi; Chiba, Masaki

    2000-01-01

    We have developed the system of charged particle nuclear reaction data (CPND) on the IntelligentPad architecture. We called the system CONTIP, which is an abbreviation of 'Creative, Cooperative and Cultural Objects for Nuclear data and Tools'. NRDF (Nuclear Reaction Data File), which is a kind of CPND compilation, is applied as an application example. Although CONTIP is currently applied to NRDF, the framework can be generalized to use the other nuclear database. We will develop CONTIP to give the framework for effective utilization of nuclear data. (author)

  14. Two Search Techniques within a Human Pedigree Database

    OpenAIRE

    Gersting, J. M.; Conneally, P. M.; Rogers, K.

    1982-01-01

    This paper presents the basic features of two search techniques from MEGADATS-2 (MEdical Genetics Acquisition and DAta Transfer System), a system for collecting, storing, retrieving and plotting human family pedigrees. The individual search provides a quick method for locating an individual in the pedigree database. This search uses a modified soundex coding and an inverted file structure based on a composite key. The navigational search uses a set of pedigree traversal operations (individual...

  15. A Magnetic Petrology Database for Satellite Magnetic Anomaly Interpretations

    Science.gov (United States)

    Nazarova, K.; Wasilewski, P.; Didenko, A.; Genshaft, Y.; Pashkevich, I.

    2002-05-01

    anomaly, tectonic structure, geographical location, rock type, magnetic properties, chemistry and reference, see http://core2.gsfc.nasa.gov/terr_mag/query1.html. The output of the database is an HTML structured table, a text file, or a downloadable file. This database will be very useful for studies of lithospheric satellite magnetic anomalies on the Earth and other terrestrial planets.

  16. Study on Big Database Construction and its Application of Sample Data Collected in CHINA'S First National Geographic Conditions Census Based on Remote Sensing Images

    Science.gov (United States)

    Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.

    2018-04-01

    In the project of China's First National Geographic Conditions Census, millions of sample data records were collected all over the country for interpreting land cover from remote sensing images; the number of data files exceeds 12,000,000 and has continued to grow in the follow-on project of National Geographic Conditions Monitoring. A database such as Oracle is currently the most effective way to store such big data, but a practical method for managing and applying the sample data matters even more. This paper studies a database construction method based on a relational database combined with a distributed file system, in which the vector data and the file data are saved in different physical locations; the key issues and their solutions are discussed. On this basis, the paper studies how the sample data can be applied and analyzes several use cases, laying a foundation for the application of the sample data. In particular, sample data located in Shaanxi province are selected to verify the method, and the 10 first-level classes defined in the land cover classification system are taken as examples to analyze the spatial distribution and density characteristics of all kinds of sample data. The results verify that the construction method based on a relational database with a distributed file system is useful and practical for searching, analyzing and applying the sample data. Furthermore, the sample data collected in China's First National Geographic Conditions Census could be useful for Earth observation and for assessing the quality of land cover data.

  17. Decay data file based on the ENSDF file

    Energy Technology Data Exchange (ETDEWEB)

    Katakura, J. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    A decay data file in the JENDL (Japanese Evaluated Nuclear Data Library) format, based on the ENSDF (Evaluated Nuclear Structure Data File) file, was produced as a tentative version of one of the JENDL special-purpose files. Problems in using the ENSDF file as the primary data source for the JENDL decay data file are presented. (author)

  18. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Science.gov (United States)

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral, structured download of information from relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
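
    The memory point in this record, dumping a relational table to a flat XML file without holding the whole table in memory, comes down to streaming rows through an escaping writer. Below is a minimal Python sketch of that idea with an invented table; YAdumper itself is a Java application driven by an XML template and a DTD, which this sketch does not reproduce.

    ```python
    import sqlite3
    from xml.sax.saxutils import escape, quoteattr

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE protein (acc TEXT, name TEXT)")
    conn.executemany("INSERT INTO protein VALUES (?, ?)",
                     [("P1", "kinase A & B"), ("P2", "phosphatase")])

    # Stream one row at a time so memory use stays flat regardless of table size.
    with open("dump.xml", "w") as out:
        out.write("<proteins>\n")
        for acc, name in conn.execute("SELECT acc, name FROM protein"):
            out.write(f"  <protein acc={quoteattr(acc)}>{escape(name)}</protein>\n")
        out.write("</proteins>\n")
    ```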

  19. Risk assessment and toxicology databases for health effects assessment

    Energy Technology Data Exchange (ETDEWEB)

    Lu, P.Y.; Wassom, J.S. [Oak Ridge National Laboratory, TN (United States)

    1990-12-31

    Scientific and technological developments bring unprecedented stress to our environment. Society has to predict the results of potential health risks from technologically based actions that may have serious, far-reaching consequences. The potential for error in making such predictions or assessments is great and multiplies with the increasing size and complexity of the problem being studied. Because of this, the availability and use of reliable data is the key to any successful forecasting effort. Scientific research and development generate new data and information. Much of the scientific data being produced daily is stored in computers for subsequent analysis. This situation provides both an invaluable resource and an enormous challenge. With large amounts of government funds being devoted to health and environmental research programs and with maintenance of our living environment at stake, we must make maximum use of the resulting data to forecast and avert catastrophic effects. The most efficient means of obtaining the data necessary for assessing the health effects of chemicals is to utilize readily available applications, including the toxicology databases and information files developed at ORNL. To make the most efficient use of the data/information that has already been prepared, attention and resources should be directed toward projects that meticulously evaluate the available data/information and create specialized peer-reviewed value-added databases. Such projects include the National Library of Medicine's Hazardous Substances Data Bank and the U.S. Air Force Installation Restoration Toxicology Guide. These and similar value-added toxicology databases were developed at ORNL and are being maintained and updated. These databases and supporting information files, as well as some data evaluation techniques, are discussed in this paper with special focus on how they are used to assess potential health effects of environmental agents. 19 refs., 5 tabs.

  20. Adding Data Management Services to Parallel File Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, Scott [Univ. of California, Santa Cruz, CA (United States)

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  1. Series Information File for the 2017 TIGER/Line Shapefile, Current Elementary School Districts State-based

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  2. TIGER/Line Shapefile, 2013, Series Information File for the Current Census Tract State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  3. TIGER/Line Shapefile, 2013, Series Information File for the Current County Subdivision State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  4. Distributing File-Based Data to Remote Sites Within the BABAR Collaboration

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

    BABAR [1] uses two formats for its data: Objectivity database and ROOT [2] files. This poster concerns the distribution of the latter--for Objectivity data see [3]. The BABAR analysis data is stored in ROOT files--one per physics run and analysis selection channel--maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centers throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make the optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc) do not attempt to solve the first problem. More sophisticated tools like rsync [4], the widely-used mirror/synchronization program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to only copy new or modified files. However rsync allows for only limited file selection. Also when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimize the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels
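
    The file-selection problem described above reduces to comparing catalogs. A minimal sketch, not the BABAR tool itself: remote and local catalogs are dictionaries mapping path to size, and a site's selection channels are expressed as path prefixes; all names and sizes are invented for illustration.

```python
# Decide which files to import by comparing local and remote catalogs on
# size, instead of walking a 200,000-file tree the way rsync would.
def files_to_import(remote_catalog: dict[str, int],
                    local_catalog: dict[str, int],
                    wanted_prefixes: tuple[str, ...]) -> list[str]:
    """Return remote paths that match the site's selection and are
    missing locally or differ in size."""
    needed = []
    for path, size in remote_catalog.items():
        if not path.startswith(wanted_prefixes):
            continue  # this site does not analyse that selection channel
        if local_catalog.get(path) != size:
            needed.append(path)
    return needed

remote = {"AllEvents/run1001.root": 70_123_456,
          "AllEvents/run1002.root": 69_998_001,
          "TauSkim/run1001.root": 5_002_113}
local = {"AllEvents/run1001.root": 70_123_456}

print(files_to_import(remote, local, wanted_prefixes=("AllEvents/",)))
# -> ['AllEvents/run1002.root']
```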

  5. Distributing file-based data to remote sites within the BABAR collaboration

    International Nuclear Information System (INIS)

    Adye, T.; Dorigo, A.; Forti, A.; Leonardi, E.

    2001-01-01

    BABAR uses two formats for its data: Objectivity database and ROOT files. This poster concerns the distribution of the latter--for Objectivity data see. The BABAR analysis data is stored in ROOT files--one per physics run and analysis selection channel--maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centres throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make the optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc) do not attempt to solve the first problem. More sophisticated tools like rsync, the widely-used mirror/synchronisation program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to only copy new or modified files. However rsync allows for only limited file selection. Also when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimise the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels

  6. PR-EDB: Power Reactor Embrittlement Database Version 3

    International Nuclear Information System (INIS)

    Wang, Jy-An John; Subramani, Ranjit

    2008-01-01

    The aging and degradation of light-water reactor pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel materials depends on many factors, such as neutron fluence, flux, and energy spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Large amounts of data from surveillance capsules are needed to develop a generally applicable damage prediction model that can be used for industry standards and regulatory guides. Furthermore, the investigations of regulatory issues such as vessel integrity over plant life, vessel failure, and sufficiency of current codes, Standard Review Plans (SRPs), and Guides for license renewal can be greatly expedited by the use of a well-designed computerized database. The Power Reactor Embrittlement Database (PR-EDB) is such a comprehensive collection of data for U.S. designed commercial nuclear reactors. The current version of the PR-EDB lists the test results of 104 heat-affected-zone (HAZ) materials, 115 weld materials, and 141 base materials, including 103 plates, 35 forgings, and 3 correlation monitor materials that were irradiated in 321 capsules from 106 commercial power reactors. The data files are given in dBASE format and can be accessed with any personal computer using the Windows operating system. 'User-friendly' utility programs have been written to investigate radiation embrittlement using this database. Utility programs allow the user to retrieve, select and manipulate specific data, display data to the screen or printer, and fit and plot Charpy impact data. The PR-EDB Version 3.0 upgrades Version 2.0. The package was developed based on the Microsoft .NET framework technology and uses Microsoft Access for
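
    Since the data files are distributed in dBASE format, they can also be read outside the bundled utilities. A hedged sketch using the dbfread Python package; the file name and field names below are invented for illustration and are not the actual PR-EDB schema.

```python
# Pull surveillance records out of a dBASE table like those shipped with
# PR-EDB. File name "CHARPY.DBF" and fields HEAT_ID, FLUENCE, DT41J are
# hypothetical placeholders for the real schema.
from dbfread import DBF

high_fluence = [
    rec for rec in DBF("CHARPY.DBF")
    if rec["FLUENCE"] > 1.0e19  # n/cm^2, E > 1 MeV (assumed units)
]
# Rank by Charpy 41 J transition-temperature shift, largest first.
for rec in sorted(high_fluence, key=lambda r: r["DT41J"], reverse=True):
    print(rec["HEAT_ID"], rec["FLUENCE"], rec["DT41J"])
```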

  7. PR-EDB: Power Reactor Embrittlement Database - Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [ORNL; Subramani, Ranjit [ORNL

    2008-03-01

    The aging and degradation of light-water reactor pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel materials depends on many factors, such as neutron fluence, flux, and energy spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Large amounts of data from surveillance capsules are needed to develop a generally applicable damage prediction model that can be used for industry standards and regulatory guides. Furthermore, the investigations of regulatory issues such as vessel integrity over plant life, vessel failure, and sufficiency of current codes, Standard Review Plans (SRPs), and Guides for license renewal can be greatly expedited by the use of a well-designed computerized database. The Power Reactor Embrittlement Database (PR-EDB) is such a comprehensive collection of data for U.S. designed commercial nuclear reactors. The current version of the PR-EDB lists the test results of 104 heat-affected-zone (HAZ) materials, 115 weld materials, and 141 base materials, including 103 plates, 35 forgings, and 3 correlation monitor materials that were irradiated in 321 capsules from 106 commercial power reactors. The data files are given in dBASE format and can be accessed with any personal computer using the Windows operating system. "User-friendly" utility programs have been written to investigate radiation embrittlement using this database. Utility programs allow the user to retrieve, select and manipulate specific data, display data to the screen or printer, and fit and plot Charpy impact data. The PR-EDB Version 3.0 upgrades Version 2.0. The package was developed based on the Microsoft .NET framework technology and uses Microsoft Access for

  8. Efficient Similarity Search Using the Earth Mover's Distance for Large Multimedia Databases

    DEFF Research Database (Denmark)

    Assent, Ira; Wichterich, Marc; Meisen, Tobias

    2008-01-01

    Multimedia similarity search in large databases requires efficient query processing. The Earth mover's distance, introduced in computer vision, is successfully used as a similarity model in a number of small-scale applications. Its computational complexity hindered its adoption in large multimedia databases. We enable directly indexing the Earth mover's distance in structures such as the R-tree and the VA-file by providing the accurate 'MinDist' function to any bounding rectangle in the index. We exploit the computational structure of the new MinDist to derive a new lower bound for the EMD Min...
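
    The filter-and-refine pattern the paper relies on can be illustrated in a few lines. The sketch below is not the paper's MinDist for bounding rectangles; it uses a simpler, still correct lower bound for the one-dimensional Earth mover's distance, |mean(u) - mean(v)| (the identity map is 1-Lipschitz in the Kantorovich dual), to prune candidates before the exact computation.

```python
# Filter-and-refine: prune candidates with a cheap lower bound before
# computing the exact (expensive) Earth mover's distance.
import numpy as np
from scipy.stats import wasserstein_distance

def emd_range_search(query, database, eps):
    hits = []
    for key, candidate in database.items():
        if abs(query.mean() - candidate.mean()) > eps:
            continue  # lower bound already exceeds eps: safe to prune
        if wasserstein_distance(query, candidate) <= eps:
            hits.append(key)
    return hits

rng = np.random.default_rng(0)
db = {f"obj{i}": rng.normal(loc=i * 0.5, scale=1.0, size=256) for i in range(20)}
# With these parameters, typically only the query object itself survives.
print(emd_range_search(db["obj3"], db, eps=0.4))
```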

  9. SNPpy--database management for SNP data from genome wide association studies.

    Directory of Open Access Journals (Sweden)

    Faheem Mitha

    Full Text Available BACKGROUND: We describe SNPpy, a hybrid script database system using the Python SQLAlchemy library coupled with the PostgreSQL database to manage genotype data from Genome-Wide Association Studies (GWAS). This system makes it possible to merge study data with HapMap data and merge across studies for meta-analyses, including data filtering based on the values of phenotype and Single-Nucleotide Polymorphism (SNP) data. SNPpy and its dependencies are open source software. RESULTS: The current version of SNPpy offers utility functions to import genotype and annotation data from two commercial platforms. We use these to import data from two GWAS studies and the HapMap Project. We then export these individual datasets to standard data format files that can be imported into statistical software for downstream analyses. CONCLUSIONS: By leveraging the power of relational databases, SNPpy offers integrated management and manipulation of genotype and phenotype data from GWAS studies. The analysis of these studies requires merging across GWAS datasets as well as patient and marker selection. To this end, SNPpy enables the user to filter the data and output the results as standardized GWAS file formats. It does low level and flexible data validation, including validation of patient data. SNPpy is a practical and extensible solution for investigators who seek to deploy central management of their GWAS data.
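
    A minimal sketch of the general pattern, not SNPpy itself: genotype rows live in a relational table and patient/marker selection becomes a declarative query. SQLite replaces PostgreSQL here so the example is self-contained; the schema is invented for illustration.

```python
# Relational management of genotype calls with SQLAlchemy, in the spirit
# of SNPpy. Hypothetical schema; SQLite stands in for PostgreSQL.
from sqlalchemy import create_engine, select, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Genotype(Base):
    __tablename__ = "genotype"
    id = Column(Integer, primary_key=True)
    patient_id = Column(String)
    rsid = Column(String)      # SNP identifier
    call = Column(String)      # e.g. "AA", "AG", "GG"

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Genotype(patient_id="P1", rsid="rs12345", call="AG"),
        Genotype(patient_id="P2", rsid="rs12345", call="GG"),
    ])
    session.commit()
    # Declarative marker/patient selection: heterozygote carriers only.
    carriers = session.scalars(
        select(Genotype).where(Genotype.rsid == "rs12345",
                               Genotype.call != "GG")).all()
    print([g.patient_id for g in carriers])   # -> ['P1']
```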

  10. Diffusivity database (DDB) system for major rocks (Version of 2006/specification and CD-ROM)

    International Nuclear Information System (INIS)

    Tochigi, Yoshikatsu; Sasamoto, Hirosi; Shibata, Masahiro; Sato, Haruo; Yui, Mikazu

    2006-03-01

    The development of the database system has been started to manage the data with generally used software. The database system has been constructed based on datasheets of the effective diffusion coefficients of the nuclides in the rock matrix, in order to be applied to the 'H12: Project to Establish the Scientific and Technical Basis for HLW Disposal in Japan'. In this document, the examination and expansion of the datasheet structure, the process of construction of the database system, and the conversion of all data existing on datasheets are described. As the first step of the development of the database, this database system and its data will continue to be updated and the interface will be revised to improve the availability. The developed database system is attached on the CD-ROM in the file format of Microsoft Access. (author)

  11. Five hydrologic and landscape databases for selected National Wildlife Refuges in the Southeastern United States

    Science.gov (United States)

    Buell, Gary R.; Gurley, Laura N.; Calhoun, Daniel L.; Hunt, Alexandria M.

    2017-06-12

    This report serves as metadata and a user guide for five out of six hydrologic and landscape databases developed by the U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, to describe data-collection, data-reduction, and data-analysis methods used to construct the databases and provides statistical and graphical descriptions of the databases. Six hydrologic and landscape databases were developed: (1) the Cache River and White River National Wildlife Refuges (NWRs) and contributing watersheds in Arkansas, Missouri, and Oklahoma, (2) the Cahaba River NWR and contributing watersheds in Alabama, (3) the Caloosahatchee and J.N. “Ding” Darling NWRs and contributing watersheds in Florida, (4) the Clarks River NWR and contributing watersheds in Kentucky, Tennessee, and Mississippi, (5) the Lower Suwannee NWR and contributing watersheds in Georgia and Florida, and (6) the Okefenokee NWR and contributing watersheds in Georgia and Florida. Each database is composed of a set of ASCII files, Microsoft Access files, and Microsoft Excel files. The databases were developed as an assessment and evaluation tool for use in examining NWR-specific hydrologic patterns and trends as related to water availability and water quality for NWR ecosystems, habitats, and target species. The databases include hydrologic time-series data, summary statistics on landscape and hydrologic time-series data, and hydroecological metrics that can be used to assess NWR hydrologic conditions and the availability of aquatic and riparian habitat. Landscape data that describe the NWR physiographic setting and the locations of hydrologic data-collection stations were compiled and mapped. Categories of landscape data include land cover, soil hydrologic characteristics, physiographic features, geographic and hydrographic boundaries, hydrographic features, and regional runoff estimates. The geographic extent of each database covers an area within which human activities, climatic

  12. Obstetrical ultrasound data-base management system by using personal computer

    International Nuclear Information System (INIS)

    Jeon, Hae Jeong; Park, Jeong Hee; Kim, Soo Nyung

    1993-01-01

    A computer program which performs obstetric calculations in the Clipper language using the data from ultrasonography was developed for personal computers. It was designed for fast assessment of fetal development and prediction of gestational age and weight from ultrasonographic measurements, which included biparietal diameter, femur length, gestational sac, occipito-frontal diameter, abdominal diameter, etc. The Obstetrical-Ultrasound Data-Base Management System was tested for its performance and proved very useful in patient management with its convenient data filing, easy retrieval of previous reports, prompt but accurate estimation of fetal growth and skeletal anomaly, and production of equations and growth curves for pregnant women

  13. File Detection On Network Traffic Using Approximate Matching

    Directory of Open Access Journals (Sweden)

    Frank Breitinger

    2014-09-01

    Full Text Available In recent years, Internet technologies have changed enormously, allowing faster Internet connections, higher data rates and mobile usage. Hence, it is possible to send huge amounts of data/files easily, which is often exploited by insiders or attackers to steal intellectual property. As a consequence, data leakage prevention systems (DLPS) have been developed, which analyze network traffic and alert in case of a data leak. Although the overall concepts of the detection techniques are known, the systems are mostly closed and commercial. Within this paper we present a new technique for network traffic analysis based on approximate matching (a.k.a. fuzzy hashing), which is very common in digital forensics to correlate similar files. This paper demonstrates how to optimize and apply it on single network packets. Our contribution is a straightforward concept which does not need a comprehensive configuration: hash the file and store the digest in the database. Within our experiments we obtained false positive rates between 10^-4 and 10^-5 and an algorithm throughput of over 650 Mbit/s.
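
    A toy version of the underlying idea, not the authors' algorithm: a content-defined chunking digest where a rolling hash picks chunk boundaries, so two overlapping byte streams share most chunk hashes and can be compared by Jaccard similarity. All parameters are arbitrary illustrative choices; production tools such as ssdeep are far more refined.

```python
# Toy similarity digest in the spirit of approximate matching (fuzzy
# hashing): a rolling hash decides chunk boundaries from content, each
# chunk is hashed, and digests are compared by Jaccard similarity.
import hashlib
import random

def digest(data: bytes, window: int = 7, mask: int = 0x3F) -> set[str]:
    chunks, start, rolling = set(), 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) ^ byte) & 0xFFFFFFFF
        # boundary when the low bits of the rolling hash are all zero
        if i - start >= window and (rolling & mask) == 0:
            chunks.add(hashlib.sha1(data[start:i + 1]).hexdigest()[:8])
            start = i + 1
    chunks.add(hashlib.sha1(data[start:]).hexdigest()[:8])
    return chunks

def similarity(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b)

data = random.Random(1).randbytes(4096)   # stand-in for a sensitive file
excerpt = data[100:3000]                  # portion seen in network traffic
# Interior chunk boundaries re-synchronize, so the score is well above 0.
print(similarity(digest(data), digest(excerpt)))
```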

  14. libChEBI: an API for accessing the ChEBI database.

    Science.gov (United States)

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
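
    A hypothetical usage sketch of the Python flavour, following the pattern documented in the project repository; the exact class and method names should be verified against http://github.com/libChEBI. Note that first use triggers the download of the ChEBI flat files mentioned above, which are then cached locally.

```python
# Assumed libChEBI (Python) usage; verify names against the project README.
from libchebipy import ChebiEntity

aspirin = ChebiEntity("CHEBI:15365")   # a ChEBI identifier
print(aspirin.get_name())              # expected: "acetylsalicylic acid"
```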

  15. The version control service for ATLAS data acquisition configuration files - DAQ; configuration; OKS; XML

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, such updates occasionally caused problems due to XML syntax errors or an inconsistent state of the files from the point of view of the overall ATLAS configuration. It was not always possible to know who made the modification causing problems or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modifications and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications providi...

  16. pSort search result - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file File name: kome_psort_search_result.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_psort_search_result.zip

  17. Virtual file system on NoSQL for processing high volumes of HL7 messages.

    Science.gov (United States)

    Kimura, Eizen; Ishihara, Ken

    2015-01-01

    The Standardized Structured Medical Information Exchange (SS-MIX) is intended to be the standard repository for HL7 messages that depend on a local file system. However, its scalability is limited. We implemented a virtual file system using NoSQL to incorporate modern computing technology into SS-MIX and allow the system to integrate local patient IDs from different healthcare systems into a universal system. We discuss its implementation using the database MongoDB and describe its performance in a case study.
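
    A minimal sketch of such a NoSQL-backed virtual file store, assuming a local MongoDB instance and pymongo: GridFS holds the HL7 message bodies under virtual paths, while a metadata field carries the universal patient ID that unifies the local IDs. The URI, database name, path scheme and ID fields are all assumptions for illustration.

```python
# GridFS as a virtual file system for HL7 v2 messages, in the spirit of
# the paper's SS-MIX extension. Connection details are placeholders.
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
fs = gridfs.GridFS(client["ssmix"])

hl7 = b"MSH|^~\\&|HIS|HOSP-A|RECV|HOSP-A|20150101||ADT^A01|MSG00001|P|2.5\r"
fs.put(hl7, filename="2015/01/ADT/MSG00001",
       metadata={"local_patient_id": "A-12345",      # hospital-local ID
                 "universal_patient_id": "U-0007"})  # unified ID

# Retrieval by the virtual path, as a file system client would do it.
print(fs.get_last_version("2015/01/ADT/MSG00001").read()[:20])
```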

  18. Building Parts Inventory Files Using the AppleWorks Data Base Subprogram and Apple IIe or GS Computers.

    Science.gov (United States)

    Schlenker, Richard M.

    This manual is a "how to" training device for building database files using the AppleWorks program with an Apple IIe or Apple IIGS Computer with Duodisk or two disk drives and an 80-column card. The manual provides step-by-step directions, and includes 25 figures depicting the computer screen at the various stages of the database file…

  19. Data management in the TJ-II multi-layer database

    International Nuclear Information System (INIS)

    Vega, J.; Cremy, C.; Sanchez, E.; Portas, A.; Fabregas, J.A.; Herrera, R.

    2000-01-01

    The handling of TJ-II experimental data is performed by means of several software modules. These modules provide the resources for data capture, data storage and management, data access as well as general-purpose data visualisation. Here we describe the module related to data storage and management. We begin by introducing the categories in which data can be classified. Then, we describe the TJ-II data flow through the several file systems involved, before discussing the architecture of the TJ-II database. We review the concept of the 'discharge file' and identify the drawbacks that would result from a direct application of this idea to the TJ-II data. In order to overcome these drawbacks, we propose alternatives based on our concepts of signal family, user work-group and data priority. Finally, we present a model for signal storage. This model is in accordance with the database architecture and provides a proper framework for managing the TJ-II experimental data. In the model, the information is organised in layers and is distributed according to the generality of the information, from the common fields of all signals (first layer), passing through the specific records of signal families (second layer) and reaching the particular information of individual signals (third layer)
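
    The three-layer organisation can be sketched as a relational schema. The tables and columns below are illustrative assumptions, not the actual TJ-II database; SQLite stands in for the real storage.

```python
# Layered signal storage: layer 1 holds fields common to all signals,
# layer 2 the per-family records, layer 3 the per-signal particulars.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE signal_common (          -- layer 1: every signal
    signal_id INTEGER PRIMARY KEY,
    name TEXT, shot INTEGER, family_id INTEGER);
CREATE TABLE family (                 -- layer 2: one row per signal family
    family_id INTEGER PRIMARY KEY,
    family_name TEXT, sampling_rate_hz REAL);
CREATE TABLE signal_detail (          -- layer 3: per-signal particulars
    signal_id INTEGER PRIMARY KEY REFERENCES signal_common,
    calibration REAL, comment TEXT);
""")
db.execute("INSERT INTO family VALUES (1, 'bolometry', 1e4)")
db.execute("INSERT INTO signal_common VALUES (10, 'BOL01', 5123, 1)")
db.execute("INSERT INTO signal_detail VALUES (10, 0.98, 'channel 1')")

row = db.execute("""
    SELECT s.name, f.family_name, d.calibration
    FROM signal_common s JOIN family f USING (family_id)
    JOIN signal_detail d USING (signal_id) WHERE s.shot = 5123""").fetchone()
print(row)   # ('BOL01', 'bolometry', 0.98)
```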

  20. SKPDB: a structural database of shikimate pathway enzymes

    Directory of Open Access Journals (Sweden)

    de Azevedo Walter F

    2010-01-01

    Full Text Available Abstract Background The functional and structural characterisation of enzymes that belong to microbial metabolic pathways is very important for structure-based drug design. The main interest in studying shikimate pathway enzymes involves the fact that they are essential for bacteria but do not occur in humans, making them selective targets for design of drugs that do not directly impact humans. Description The ShiKimate Pathway DataBase (SKPDB) is a relational database applied to the study of shikimate pathway enzymes in microorganisms and plants. The current database is updated regularly with the addition of new data; there are currently 8902 enzymes of the shikimate pathway from different sources. The database contains extensive information on each enzyme, including detailed descriptions about sequence, references, and structural and functional studies. All files (primary sequence, atomic coordinates and quality scores) are available for downloading. The modeled structures can be viewed using the Jmol program. Conclusions The SKPDB provides a large number of structural models to be used in docking simulations, virtual screening initiatives and drug design. It is freely accessible at http://lsbzix.rc.unesp.br/skpdb/.

  1. Analyzing GAIAN Database (GaianDB) on a Tactical Network

    Science.gov (United States)

    2015-11-30

    databases, and other files, and exposes them as one unified structured query language (SQL)-compliant data source. This “store locally query anywhere...UDP server that could communicate directly with the CSRs via the CSR's serial port. However, GAIAN has over 800,000 lines of source code. It...management, by which all would have to be modified to communicate with our server and maintain utility. Not only did we quickly realize that this

  2. TIGER/Line Shapefile, 2013, Series Information File for the 113th Congressional District Nation-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  3. Series Information File for the 2017 TIGER/Line Shapefile, Current Unified School Districts Shapefile State-based

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  4. TIGER/Line Shapefile, 2016, Series Information for the Address Range-Feature Name County-based Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  5. TIGER/Line Shapefile, 2013, Series Information File for theCurrent Elementary School Districts State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  6. Series Information File for the 2017 TIGER/Line Shapefile, Current Secondary School Districts Shapefile State-based

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  7. Series Information File for the 2015 TIGER/Line Shapefile, Current Elementary School Districts State-based Shapefiles

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  8. TIGER/Line Shapefile, 2016, Series Information for the Topological Faces-Area Hydrography County-based Relationship File

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  9. TIGER/Line Shapefile, 2013, Series Information File for the Primary and Secondary Roads State-based Shapefiles

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  10. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked...

  11. Waste Tank Vapor Project: Tank vapor database development

    International Nuclear Information System (INIS)

    Seesing, P.R.; Birn, M.B.; Manke, K.L.

    1994-09-01

    The objective of the Tank Vapor Database (TVD) Development task in FY 1994 was to create a database to store, retrieve, and analyze data collected from the vapor phase of Hanford waste tanks. The data needed to be accessible over the Hanford Local Area Network to users at both Westinghouse Hanford Company (WHC) and Pacific Northwest Laboratory (PNL). The data were restricted to results published in cleared reports from the laboratories analyzing vapor samples. Emphasis was placed on ease of access and flexibility of data formatting and reporting mechanisms. Because of time and budget constraints, a Rapid Application Development strategy was adopted by the database development team. An extensive data modeling exercise was conducted to determine the scope of information contained in the database. A SUN Sparcstation 1000 was procured as the database file server. A multi-user relational database management system, Sybase®, was chosen to provide the basic data storage and retrieval capabilities. Two packages were chosen for the user interface to the database: DataPrism® and Business Objects™. A prototype database was constructed to provide the Waste Tank Vapor Project's Toxicology task with summarized and detailed information presented at Vapor Conference 4 by WHC, PNL, Oak Ridge National Laboratory, and Oregon Graduate Institute. The prototype was used to develop a list of reported compounds, and the range of values for compounds reported by the analytical laboratories using different sample containers and analysis methodologies. The prototype allowed a panel of toxicology experts to identify carcinogens and compounds whose concentrations were within the reach of regulatory limits. The database and user documentation was made available for general access in September 1994

  12. Automated knowledge base development from CAD/CAE databases

    Science.gov (United States)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge-based systems that are inherently free of error.

  13. Graph of growth data - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available The Rice Growth Monitoring for The Phenotypic Functional Analysis - Graph of growth data. Data name: Graph of growth data DOI: 10.18908/lsdba.nbdc00945-003 Description of data contents: The graph of chronological changes in root, coleoptile, the first leaf, and the second leaf. Data file File name: growth_data_graph.zip File URL: ftp://ftp.biosciencedbc.jp/archive/agritogo-rice-phenome/LATEST/data/growth_data_graph.zip

  14. Image files of planarians analyzed by in situ hybridication and immunohistochemical staining - Plabrain DB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Plabrain DB - Image files of planarians analyzed by in situ hybridization and immunohistochemical staining. Data name: Image files of planarians analyzed by in situ hybridization and immunohistochemical staining. Description of data contents: Gene expression patterns revealed by whole-mount in situ hybridization and protein distribution revealed by immunohistochemical staining; images are displayed in a list of image files of planarians analyzed by in situ hybridization and immunohistochemical staining. Data acquisition method: Whole-mount in situ hybridization, immunohistochemical staining

  15. Accessing files in an Internet: The Jade file system

    Science.gov (United States)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
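
    Jade's private name space can be caricatured in a few lines: a per-user mount table maps logical prefixes to backends, with longest-prefix matching so several file systems can hang under one directory. The resolvers below are plain callables standing in for the NFS, AFS and FTP interfaces the prototype uses; all names are invented for illustration.

```python
# A per-user logical name space mapping name prefixes to heterogeneous
# backends, in the spirit of Jade's private name spaces.
class Namespace:
    def __init__(self):
        self.mounts = {}            # logical prefix -> resolver

    def mount(self, prefix, resolver):
        self.mounts[prefix] = resolver

    def open(self, logical_path):
        # longest-prefix match picks the backend, as a mount table would
        for prefix in sorted(self.mounts, key=len, reverse=True):
            if logical_path.startswith(prefix):
                rest = logical_path[len(prefix):].lstrip("/")
                return self.mounts[prefix](rest)
        raise FileNotFoundError(logical_path)

ns = Namespace()
ns.mount("/home/papers", lambda p: f"AFS fetch of {p}")
ns.mount("/home/data",   lambda p: f"FTP fetch of ftp://host/{p}")
ns.mount("/home",        lambda p: f"local open of {p}")  # shares /home

print(ns.open("/home/data/run7.dat"))   # -> FTP backend
print(ns.open("/home/notes.txt"))       # -> local backend
```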

  16. Accessing files in an internet - The Jade file system

    Science.gov (United States)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  17. Studies on preparation of the database system for clinical records of atomic bomb survivors

    International Nuclear Information System (INIS)

    Nakamura, Tsuyoshi

    1981-01-01

    Construction of the database system aimed at multipurpose application of data on clinical medicine was studied through the preparation of database system for clinical records of atomic bomb survivors. The present database includes the data about 110,000 atomic bomb survivors in Nagasaki City. This study detailed: (1) Analysis of errors occurring in a period from generation of data in the clinical field to input into the database, and discovery of a highly precise, effective method of input. (2) Development of a multipurpose program for uniform processing of data on physical examinations from many organizations. (3) Development of a record linkage method for voluminous files which are essential in the construction of a large-scale medical information system. (4) A database model suitable for clinical research and a method for designing a segment suitable for physical examination data. (Chiba, N.)

  18. Behov for national database ved operation for lumbal spondylodese

    DEFF Research Database (Denmark)

    Rasmussen, Sten; Iversen, Maria Gerding; Kehlet, Henrik

    2010-01-01

    in 2006 was used. RESULTS: There was no difference in patient demographics and diagnosis between public and private clinics. In 62% of the patient files, information was lacking. Considerations on indication and surgery did not differ from public to private clinics. A standard preoperative rehabilitation program was performed in 59% of the cases. Combined anterior and posterior fusion was performed in 37 cases, posterior instrumented fusion in 77 cases and posterior uninstrumented fusion in 105 cases; an interspinous spacer was used in six cases and disc arthroplasty in 13 cases. CONCLUSION: Adequate evaluation of indication and choice of surgical technique in lumbar fusion based on patient files was not possible. We found no qualitative differences between public and private clinics. A national database is needed to monitor indication and choice of operative procedure. Publication date: 2010-Nov-22

  19. DABAM: an open-source database of X-ray mirrors metrology

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, Manuel, E-mail: srio@esrf.eu [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Bianchi, Davide [AC2T Research GmbH, Viktro-Kaplan-Strasse 2-C, 2700 Wiener Neustadt (Austria); Cocco, Daniele [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Glass, Mark [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Idir, Mourad [NSLS II, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); Metz, Jim [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States); Raimondi, Lorenzo; Rebuffi, Luca [Elettra-Sincrotrone Trieste SCpA, Basovizza (TS) (Italy); Reininger, Ruben; Shi, Xianbo [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States); Siewert, Frank [BESSY II, Helmholtz Zentrum Berlin, Institute for Nanometre Optics and Technology, Albert-Einstein-Strasse 15, 12489 Berlin (Germany); Spielmann-Jaeggi, Sibylle [Swiss Light Source at Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Takacs, Peter [Instrumentation Division, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); Tomasset, Muriel [Synchrotron Soleil (France); Tonnessen, Tom [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States); Vivo, Amparo [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Yashchuk, Valeriy [Advanced Light Source, Lawrence Berkeley National Laboratory, MS 15-R0317, 1 Cyclotron Road, Berkeley, CA 94720-8199 (United States)

    2016-04-20

    DABAM, an open-source database of X-ray mirror metrology to be used with ray-tracing and wave-propagation codes for simulating the effect of surface errors on the performance of a synchrotron radiation beamline. An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools for calculating the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide the data of real mirrors to the users of simulation tools. The data included in the database are described in this paper, with details of how the mirror parameters are stored. Accompanying software is provided to allow simple access and processing of these data, calculate the most common statistical parameters, and optionally create input files for the most used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database.

  20. Universal file processing program for field programmable integrated circuits

    International Nuclear Information System (INIS)

    Freytag, D.R.; Nelson, D.J.

    1985-01-01

    A computer program is presented that translates logic equations into PROM-burner files (or the reverse) for programmable logic devices of various kinds, namely PROMs, FPLAs, FPLSs and PALs. The program achieves flexibility through the use of a database containing detailed information about the devices to be programmed. New devices can thus be accommodated through simple extensions of the database. When writing logic equations, the user can define logic combinations of signals as new logic variables for use in subsequent equations. This procedure yields compact and transparent expressions for logic operations, thus reducing the chances for error. A logic simulation program is also provided so that an independent check of the design can be performed at the software level
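
    The core translation is easy to illustrate: evaluate a logic equation for every input combination and emit one output bit per PROM address. The sketch below is a toy stand-in for the program described, with an invented equation and no device database.

```python
# Translate a logic equation into PROM contents: one output bit per
# address, where the address bits are the equation's inputs.
def prom_image(equation, inputs):
    """equation: function of named input bits returning an output bit."""
    n = len(inputs)
    image = []
    for address in range(2 ** n):
        bits = {name: (address >> i) & 1 for i, name in enumerate(inputs)}
        image.append(equation(**bits))
    return image

# OUT = (A AND B) OR (NOT C), as it might appear in an equation file.
out = prom_image(lambda A, B, C: (A & B) | (1 - C), inputs=["A", "B", "C"])
print(out)   # eight bits; address 0 means all inputs low
```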

  1. The CERN accelerator measurement database: on the road to federation

    International Nuclear Information System (INIS)

    Roderick, C.; Billen, R.; Gourber-Pace, M.; Hoibian, N.; Peryt, M.

    2012-01-01

    The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data points per day for 200,000+ signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change/extension, is required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005, this mapping was done by means of dozens of XML files, which were manually maintained by multiple persons, resulting in a configuration that was error prone. In 2010 this configuration was fully centralized in the Measurement database itself, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database. This paper will describe the architecture and the benefits of the current implementation, as well as the next steps on the road to a fully federated solution. (authors)

  2. TIGER/Line Shapefile, 2013, Series Information File for the Current Unified School Districts Shapefile State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  3. TIGER/Line Shapefile, 2013, Series Information File for the Current Secondary School Districts Shapefile State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  4. TIGER/Line Shapefile, 2012, Series Information File for the Current Topological Faces (Polygons With All Geocodes) Shapefiles

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  5. TIGER/Line Shapefile, 2014, Series Information File for the Current Subbarrio (Subminor Civil Division) State-based Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  6. Dynamic Non-Hierarchical File Systems for Exascale Storage

    Energy Technology Data Exchange (ETDEWEB)

    Long, Darrell E. [Univ. of California, Santa Cruz, CA (United States); Miller, Ethan L [Univ. of California, Santa Cruz, CA (United States)

    2015-02-24

    appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, either due to limited usefulness or scalability, or both. Our research addressed several key issues: High-performance, real-time metadata harvesting: extracting important attributes from files dynamically and immediately updating indexes used to improve search; Transparent, automatic, and secure provenance capture: recording the data inputs and processing steps used in the production of each file in the system; Scalable indexing: indexes that are optimized for integration with the file system; Dynamic file system structure: our approach provides dynamic directories similar to those in semantic file systems, but these are the native organization rather than a feature grafted onto a conventional system. In addition to these goals, our research effort will include evaluating the impact of new storage technologies on the file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.
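
    The metadata-harvesting and indexing goals can be sketched in miniature: walk a directory tree, extract file attributes, and keep them in an index that queries can use immediately, so a "directory" becomes a query result rather than a fixed hierarchy. SQLite stands in for the file-system-integrated index the project proposes.

```python
# Harvest file attributes into an index and answer a "dynamic directory"
# query against it. Illustrative stand-in, not the project's file system.
import os
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE file_attrs
              (path TEXT PRIMARY KEY, size INTEGER, mtime REAL, ext TEXT)""")

def harvest(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            db.execute("INSERT OR REPLACE INTO file_attrs VALUES (?,?,?,?)",
                       (path, st.st_size, st.st_mtime,
                        os.path.splitext(name)[1]))

harvest(".")
# A query result instead of a fixed hierarchy: five largest .py files.
for row in db.execute("""SELECT path, size FROM file_attrs
                         WHERE ext = '.py' ORDER BY size DESC LIMIT 5"""):
    print(row)
```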

  7. The Protein Identifier Cross-Referencing (PICR service: reconciling protein identifiers across multiple source databases

    Directory of Open Access Journals (Sweden)

    Leinonen Rasko

    2007-10-01

    Full Text Available Abstract Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR

  8. Self-aligning and compressed autosophy video databases

    Science.gov (United States)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of 'data crystals' or 'data trees', without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage will result in enormous data compression because each pattern fragment is only stored once. Pattern recognition in text or image files is greatly simplified by the peculiar omni-dimensional storage method. Video databases will absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning, where the input data will determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  9. A Generative Approach for Building Database Federations

    Directory of Open Access Journals (Sweden)

    Uwe Hohenstein

    1999-11-01

    Full Text Available A comprehensive, specification-based approach for building database federations is introduced that supports integrated, ODMG 2.0-conforming access to heterogeneous data sources seamlessly from C++. The approach is centered around several generators. A first set of generators produces ODMG adapters for local sources in order to homogenize them. Each adapter represents an ODMG view and supports ODMG manipulation and querying. The adapters can be plugged into a federation framework. Another generator produces a homogeneous and uniform view by putting an ODMG-conforming federation layer on top of the adapters. Inputs to these generators are schema specifications. Schemata are defined in corresponding specification languages. There are languages to homogenize relational and object-oriented databases, as well as ordinary file systems. Any specification defines an ODMG schema and relates it to an existing data source. An integration language is then used to integrate the schemata and to build system-spanning federated views thereupon. The generative nature provides flexibility with respect to schema modification of component databases. Any time a schema changes, only the specification has to be adapted; new adapters are generated automatically

  10. Portable database driven control system for SPEAR

    International Nuclear Information System (INIS)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig

  11. Portable database driven control system for SPEAR

    Energy Technology Data Exchange (ETDEWEB)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig.

  12. Toxic Substances Control Act test submissions database (TSCATS) - comprehensive update. Data file

    International Nuclear Information System (INIS)

    1993-01-01

    The Toxic Substances Control Act Test Submissions Database (TSCATS) was developed to make unpublished test data available to the public. The test data is submitted to the U.S. Environmental Protection Agency by industry under the Toxic Substances Control Act. Test is broadly defined to include case reports, episodic incidents, such as spills, and formal test study presentations. The database allows searching of test submissions according to specific chemical identity or type of study when used with an appropriate search retrieval software program. Studies are indexed under three broad subject areas: health effects, environmental effects and environmental fate. Additional controlled vocabulary terms are assigned which describe the experimental protocol and test observations. Records identify reference information needed to locate the source document, as well as the submitting organization and reason for submission of the test data

  13. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    Science.gov (United States)

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information, and the resulting onrush of data, can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, or interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
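
    As a minimal sketch of the approach described above (not the author's actual implementation), the following Python snippet uses the built-in sqlite3 module to create one small project database of antibody sequences and run a selective query; the schema and all names are hypothetical.

        import sqlite3

        # Hypothetical minimal schema for one antibody sequence project
        # database; table and column names are illustrative only.
        conn = sqlite3.connect("antibody_project.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS sequences (
                clone_id   TEXT PRIMARY KEY,  -- laboratory clone identifier
                chain      TEXT,              -- e.g. 'heavy' or 'light'
                nt_seq     TEXT,              -- nucleotide sequence
                aa_seq     TEXT,              -- translated protein sequence
                annotation TEXT               -- accompanying free-text notes
            )
        """)
        conn.execute(
            "INSERT OR REPLACE INTO sequences VALUES (?, ?, ?, ?, ?)",
            ("mAb-001", "heavy", "GAGGTGCAGCTG...", "EVQL...",
             "anti-hapten candidate"),
        )
        conn.commit()

        # A selective query of the kind a relational manager makes routine:
        for row in conn.execute(
            "SELECT clone_id, chain FROM sequences WHERE annotation LIKE ?",
            ("%anti-hapten%",),
        ):
            print(row)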

  14. Status of PGAA database compilation and dissemination tools

    International Nuclear Information System (INIS)

    Firestone, Richard B.

    2001-01-01

    We are continuing the development of a comprehensive PGAA database at the Lawrence Berkeley National Laboratory. Isotopic data from the Evaluated Nuclear Structure Data File (ENSDF) are being combined with elemental data measured at the Budapest Reactor to develop a comprehensive database of gamma-ray energies, cross-section yields, and k factors. The more intense Budapest gamma rays for all elements have now been assigned to their associated isotopes on the basis of comparison with ENSDF. For the elements with atomic numbers Z=1-20, the ENSDF and Budapest datasets have been combined to create adopted PGAA gamma-ray datasets. These adopted datasets are typically sufficiently complete to determine the total thermal neutron cross section from the level-scheme intensity balance. Software for dissemination of the PGAA data has been developed in collaboration with visiting students from EVITech, Finland.

  15. Experience with ATLAS MySQL PanDA database service

    International Nuclear Information System (INIS)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D; De, K; Ozturk, N

    2010-01-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  16. Experience with ATLAS MySQL PanDA database service

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D [Physics Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); De, K; Ozturk, N [Department of Physics, University of Texas at Arlington, Arlington, TX, 76019 (United States)

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  17. Geo-scientific database for research and development purposes

    International Nuclear Information System (INIS)

    Tabani, P.; Mangeot, A.; Crabol, V.; Delage, P.; Dewonck, S.; Auriere, C.

    2012-01-01

    The GEO database (geo-scientific database), the fruit of continuous computer development over the past ten years, can store several hundred million data items. It is a tool developed by Andra since 1992 in order to group, in a secured computer form, all data related to the acquisition of in situ and laboratory measurements made on solid and fluid samples, as well as observations related to the environment. This database has three main functions: - Acquisition and management of data and computer files related to geological, geomechanical, hydrogeological and geochemical measurements on solid and fluid samples and in situ measurements (logging, on-sample measurements, geological logs, etc.), as well as observations on fauna and flora. - Consultation by the staff on Andra's intranet network for selective viewing of data linked to a borehole, a sample or a watch point, and for making computations and graphs on sets of laboratory measurements related to a sample. - Physical management of fluid and solid samples stored in a 'core library', in order to localize a sample, follow up its movement out of the 'core library' to an organization, and carry out regular inventories. Three geo-scientific software applications are linked to the GEO database: - The Geosciences portal: a web intranet application accessible from the ANDRA network, used for the acquisition of hydrogeological and geochemical data collected in the field and on fluid samples, observations related to environmental monitoring, and data related to scientific work carried out at surface level or in drifts. - The GESTECH application: software used to integrate geomechanical and geological data collected on solid samples into the GEO database. - The INTEGRAT application: software that automatically integrates data files into the GEO database. For the sake of traceability and efficiency, references of the fluid and solid samples, of the containers (crates, cells, etc.) and storage zones of the 'core library

  18. Occupant evaluation of commercial office lighting: Volume 3, Data archive and database management system

    Energy Technology Data Exchange (ETDEWEB)

    Gillette, G.; Brown, M. (ed.)

    1987-08-01

    This report documents a database of measured lighting environmental data. The database contains four different types of data on more than 1000 occupied work stations: (1) subjective data on attitudes and ratings of selected lighting and other characteristics, (2) photometric and other direct environmental data, including illuminances, luminances, and contrast conditions, (3) indirect environmental measures obtained from the architectural drawings and the work station photographs, and (4) descriptive characteristics of the occupants. The work stations were sampled from thirteen office buildings located in various cities in the United States. In the database, each record contains data on a single work station, with its individual fields comprising characteristics of that work station and its occupant. The relational database runs on an IBM or IBM-compatible personal computer using commercially available software. As a supplement to the database, an independent 8-bit ASCII data file is available.

  19. Long term file migration. Part I: file reference patterns

    International Nuclear Information System (INIS)

    Smith, A.J.

    1978-08-01

    In most large computer installations, files are moved between on-line disk and mass storage (tape, integrated mass storage device) either automatically by the system or specifically at the direction of the user. This is the first of two papers which study the selection of algorithms for the automatic migration of files between mass storage and disk. The use of the text editor data sets at the Stanford Linear Accelerator Center (SLAC) computer installation is examined through the analysis of thirteen months of file reference data. Most files are used very few times. Of those that are used sufficiently frequently that their reference patterns may be examined, about a third show declining rates of reference during their lifetime; of the remainder, very few (about 5%) show correlated interreference intervals, and interreference intervals (in days) appear to be more skewed than would occur with the Bernoulli process. Thus, about two-thirds of all sufficiently active files appear to be referenced as a renewal process with a skewed interreference distribution. A large number of other file reference statistics (file lifetimes, interreference interval distributions, moments, means, number of uses/file, file sizes, file rates of reference, etc.) are computed and presented. The results are applied in the following paper to the development and comparative evaluation of file migration algorithms. 17 figures, 13 tables
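
    The interreference statistics discussed above are straightforward to compute once per-file reference dates are available. The following Python sketch (with an invented input format; SLAC's trace data is not reproduced here) groups reference days by file and derives interreference intervals together with a crude mean-versus-median skewness indicator.

        from collections import defaultdict
        from statistics import mean

        # Hypothetical input: (file_id, day_of_reference) pairs, one per use.
        references = [
            ("edit01", 1), ("edit01", 2), ("edit01", 9),
            ("edit02", 3), ("edit02", 4), ("edit02", 5), ("edit02", 30),
        ]

        by_file = defaultdict(list)
        for file_id, day in references:
            by_file[file_id].append(day)

        for file_id, days in by_file.items():
            days.sort()
            intervals = [b - a for a, b in zip(days, days[1:])]
            if not intervals:
                continue  # files used only once carry no interval information
            m = mean(intervals)
            # A crude skewness indicator: a mean much larger than the median
            # suggests the long-tailed distributions reported in the paper.
            median = sorted(intervals)[len(intervals) // 2]
            print(f"{file_id}: mean={m:.1f} d, median={median} d, "
                  f"n={len(intervals)}")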

  20. Radiological reporting system developed with FileMakerPro. Cooperation with HIS, RIS, and PACS

    International Nuclear Information System (INIS)

    Kawakami, Satoshi

    2004-01-01

    This article briefly describes our original radiological reporting system, developed with the widely used database software FileMakerPro (ver 5.5). The reporting system obtains information about patients and examinations from the radiology information system (RIS) by the Open DataBase Connectivity (ODBC) technique. By clicking a button in the reporting system, the corresponding Digital Imaging and Communications in Medicine (DICOM) images can be displayed on a picture archiving and communication system (PACS) workstation monitor. Reference images in JPEG format can easily be moved from PACS to the reporting system. Reports produced by the reporting system are distributed to the hospital information system (HIS) in Portable Document Format (PDF) through another web server. By utilizing the capabilities of FileMakerPro, the system's human-machine interface could easily be improved, and links with HIS, RIS, and PACS could be established. This original system should therefore contribute to increasing the efficiency of radiological diagnosis. (author)
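
    For illustration only, the snippet below sketches the ODBC retrieval direction described above using the pyodbc package; the DSN, credentials, table and column names are all hypothetical, since the abstract does not specify the RIS schema.

        import pyodbc  # pip install pyodbc; an ODBC driver/DSN must exist

        # Hypothetical DSN and table: the abstract only states that patient
        # and examination data are pulled from the RIS via ODBC.
        conn = pyodbc.connect("DSN=RIS;UID=report;PWD=secret")
        cur = conn.cursor()
        cur.execute(
            "SELECT patient_id, exam_date, modality FROM examinations "
            "WHERE exam_date = ?",
            "2004-01-15",
        )
        for row in cur.fetchall():
            print(row.patient_id, row.exam_date, row.modality)
        conn.close()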

  1. E-FUSRAP: AUTOMATING THE CASE FILE FOR THE FORMERLY UTILIZED SITES REMEDIAL ACTION PROGRAM

    International Nuclear Information System (INIS)

    Mackenzie, D.; Marshall, K.

    2003-01-01

    The Department of Energy's (DOE) Office of Site Closure, EM-30, houses the document library pertaining to sites that are related to the Formerly Utilized Sites Remedial Action Program (FUSRAP) and regularly addresses ongoing information demands, primarily from Freedom of Information Act (FOIA) requests, interested members of the public, the DOE, and other Federal Agencies. To address these demands more efficiently, DOE has begun to implement a new multi-phase, information management process known as e-FUSRAP. The first phase of e-FUSRAP, the development of the Considered Sites Database, summarizes and allows public access to complex information on over 600 sites considered as candidates for FUSRAP. The second phase of e-FUSRAP, the development of the Document Indexing Database, will create an internal index of more than 10,000 documents in the FUSRAP library's case file, allowing more effective management and retrieval of case file documents. Together, the phases of e-FUSRAP will allow EM-30 to become an innovative leader in enhancing public information sources.

  2. FDA toxicity databases and real-time data entry

    International Nuclear Information System (INIS)

    Arvidson, Kirk B.

    2008-01-01

    Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.

  3. A Solution on Identification and Rearing Files in Smallhold Pig Farming

    Science.gov (United States)

    Xiong, Benhai; Fu, Runting; Lin, Zhaohui; Luo, Qingyao; Yang, Liang

    In order to meet government supervision of pork production safety as well as the consumer's right to know what they buy, this study adopts animal identification, mobile PDA readers, GPRS and other information technologies, puts forward a data collection method to set up rearing files for pigs in smallhold pig farming, designs the related metadata structures and a mobile database, and develops an embedded mobile PDA system to collect individual pig information, upload it into the remote central database, and finally realize mobile links to a specific website. The embedded PDA can identify, by mobile reader, both the special pig bar-code ear tag appointed by the Ministry of Agriculture and a general Data Matrix bar-code ear tag designed in this study, and can record all kinds of input data, including bacterins, feed additives, animal drugs and even some forbidden medicines, and submit them to the central database through GPRS. At the same time, the remote central database can be maintained by mobile PDA and GPRS, finally achieving pork tracking from origin to consumption and tracing in the reverse direction. This study suggests a feasible technology solution for setting up networked electronic pig-rearing files in farmer-based smallhold pig farming, and the solution is proved practical through its application in the construction of Tianjin's pork quality traceability system. Although some individual technologies, such as current GPRS transmission speeds, have some adverse effects on system performance, these will be resolved with the development of communication technology. The full implementation of the solution around China will supply technical support for guaranteeing the quality and safety of pork production supervision and meeting consumer demand.

  4. Musculoskeletal disorder costs and medical claim filing in the US retail trade sector.

    Science.gov (United States)

    Bhattacharya, Anasua; Leigh, J Paul

    2011-01-01

    The average costs of musculoskeletal disorder (MSD) claims and the odds ratios for filing MSD-related medical claims were examined. The medical claims were identified by ICD-9 codes for four US Census regions within retail trade. Medical claims data from large private firms in the Thomson Reuters Inc. MarketScan databases for the years 2003 through 2006 were used. Average costs were highest for claims related to the lumbar region (ICD-9 code 724.02), and the number of claims was largest for low back syndrome (ICD-9 code 724.2). Whereas the odds of filing an MSD claim did not vary greatly over time, average costs declined over time. The odds of filing claims rose with age and were higher for females and southerners than for men and non-southerners. Total estimated national medical costs for MSDs within retail trade were $389 million (2007 USD).

  5. Automation of ORIGEN2 calculations for the transuranic waste baseline inventory database using a pre-processor and a post-processor

    International Nuclear Information System (INIS)

    Liscum-Powell, J.

    1997-06-01

    The purpose of the work described in this report was to automate ORIGEN2 calculations for the Waste Isolation Pilot Plant (WIPP) Transuranic Waste Baseline Inventory Database (WTWBID); this was done by developing a pre-processor to generate ORIGEN2 input files from WTWBID inventory files and a post-processor to remove excess information from the ORIGEN2 output files. The calculations performed with ORIGEN2 estimate the radioactive decay and buildup of various radionuclides in the waste streams identified in the WTWBID. The resulting radionuclide inventories are needed for performance assessment calculations for the WIPP site. The work resulted in the development of PreORG, which requires interaction with the user to generate ORIGEN2 input files on a site-by-site basis, and PostORG, which processes ORIGEN2 output into more manageable files. Both programs are written in the FORTRAN 77 computer language. After running PreORG, the user runs ORIGEN2 to generate the desired data; upon completion of the ORIGEN2 calculations, the user can run PostORG to process the output to make it more manageable. All the programs run on a 386 PC or higher with a math co-processor, or on a computer platform running under the VMS operating system. The pre- and post-processors for ORIGEN2 were generated for use with Rev. 1 data of the WTWBID and can also be used with Rev. 2 and 3 data of the TWBID (Transuranic Waste Baseline Inventory Database).
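
    As a hedged illustration of the pre-/post-processor idea only (the real ORIGEN2 deck syntax and WTWBID file layout are not reproduced here, so all formats below are hypothetical placeholders), the following Python sketch writes one input file per waste stream and trims an output file down to the nuclide lines of interest.

        # Hypothetical inventory records, one per nuclide per waste stream.
        inventory = [
            {"stream": "WS-01", "nuclide": "PU239", "grams": 12.5},
            {"stream": "WS-01", "nuclide": "AM241", "grams": 0.8},
        ]

        def write_input_deck(stream_id, rows, path):
            """Pre-processor: emit one (hypothetical) input file per stream."""
            with open(path, "w") as f:
                f.write(f"* decay calculation for stream {stream_id}\n")
                for r in rows:
                    f.write(f"{r['nuclide']:8s} {r['grams']:.3e}\n")

        def trim_output(in_path, out_path, keep_prefixes=("PU", "AM")):
            """Post-processor: keep only the nuclide lines of interest."""
            with open(in_path) as src, open(out_path, "w") as dst:
                for line in src:
                    if line.startswith(keep_prefixes):
                        dst.write(line)

        write_input_deck("WS-01", inventory, "ws01.inp")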

  6. The Open Spectral Database: an open platform for sharing and searching spectral data.

    Science.gov (United States)

    Chalk, Stuart J

    2016-01-01

    A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one (ChemSpider) also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time consuming, and the data can be difficult to reuse if it is compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, searching based on multiple chemical identifiers, and is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB) http://osdb.info/ has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PHPStorm), a PHP framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities needed to easily develop REST-based websites for the ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent ones in other scripting languages) will be leveraged to make more chemical data available for both humans and computers.
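
    Since the abstract describes a REST API but not its endpoint paths, the following Python snippet is a purely hypothetical illustration of how a client might query such a service; only the base URL comes from the text.

        import requests  # third-party; pip install requests

        # Hypothetical endpoint and parameters for illustration only.
        BASE = "http://osdb.info"
        resp = requests.get(f"{BASE}/spectra",
                            params={"inchikey": "UHOVQNZJYSORNB-UHFFFAOYSA-N"})
        resp.raise_for_status()
        for spectrum in resp.json():  # assumed JSON list of matching spectra
            print(spectrum.get("id"), spectrum.get("technique"))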

  7. Search across Different Media: Numeric Data Sets and Text Files

    Directory of Open Access Journals (Sweden)

    Michael Buckland

    2006-12-01

    Full Text Available Digital technology encourages the hope of searching across and between different media forms (text, sound, image, numeric data). Topic searches are described in two different media, text files and socioeconomic numeric databases, as well as transverse searching, whereby retrieved text is used to find topically related numeric data and vice versa. Direct transverse searching across different media is impossible. Descriptive metadata provide the enabling infrastructure, but usually require mappings between different vocabularies and a search-term recommender system. Statistical association techniques and natural-language processing can help. Searches in socioeconomic numeric databases ordinarily require that place and time be specified.

  8. Endnote Referencing Software: Importing references from an Ebsco database, attaching full text, organising your Endnote library

    OpenAIRE

    Turner, Susan

    2017-01-01

    This video demonstrates importing bibliographic references from EBSCO Discovery Service; the same method can be used for all EBSCO databases. The video also demonstrates how to attach full-text files to the references and how to organise your references within the EndNote library using groups.

  9. JENDL special purpose file

    International Nuclear Information System (INIS)

    Nakagawa, Tsuneo

    1995-01-01

    In JENDL-3.2, data on all reactions having significant cross sections over the neutron energy range from 0.01 meV to 20 MeV are given for 340 nuclides. Its range of application is wide, covering, for example, neutron engineering and shielding for fast reactors, thermal neutron reactors and nuclear fusion reactors; it is a general purpose data file. By contrast, a file in which only the data required for a specific application field are collected is called a special purpose file. The file for dosimetry is a typical example. The Nuclear Data Center of the Japan Atomic Energy Research Institute is preparing ten kinds of JENDL special purpose files. The files, for which the working groups of the Sigma Committee are responsible, are listed. As to their format, the ENDF format is used, as in JENDL-3.2. The dosimetry file, activation cross section file, (α,n) reaction data file, fusion file, actinoid file, high energy data file, photonuclear data file, PKA/KERMA file, gas production cross section file and decay data file are described in terms of their contents, course of development and verification. The dosimetry file and the gas production cross section file have already been completed. For the others, the expected time of completion is given. When these files are completed, they will be opened to the public. (K.I.)

  10. Chapter 51: How to Build a Simple Cone Search Service Using a Local Database

    Science.gov (United States)

    Kent, B. R.; Greene, G. R.

    The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be set up and configured locally using MySQL. Data will be read into a table, and the Java JDBC API will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/DEC coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
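
    A minimal server-side sketch of the same idea is shown below, substituting Flask and SQLite for the chapter's Java servlet and MySQL so that the example is self-contained; the catalog schema is hypothetical, while the parameters RA, DEC and SR come from the cone search specification.

        import math
        import sqlite3
        from flask import Flask, request, Response  # pip install flask

        app = Flask(__name__)

        def angular_sep_deg(ra1, dec1, ra2, dec2):
            # Great-circle separation in degrees (spherical law of cosines).
            ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
            c = (math.sin(dec1) * math.sin(dec2)
                 + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
            return math.degrees(math.acos(max(-1.0, min(1.0, c))))

        @app.route("/cone")
        def cone():
            ra = float(request.args["RA"])   # cone search spec parameters
            dec = float(request.args["DEC"])
            sr = float(request.args["SR"])
            conn = sqlite3.connect("catalog.db")  # hypothetical catalog
            rows = conn.execute("SELECT name, ra, dec FROM objects").fetchall()
            hits = [r for r in rows
                    if angular_sep_deg(ra, dec, r[1], r[2]) <= sr]
            body = "".join(f"<TR><TD>{n}</TD><TD>{r}</TD><TD>{d}</TD></TR>"
                           for n, r, d in hits)
            votable = ('<?xml version="1.0"?>'
                       '<VOTABLE version="1.3"><RESOURCE><TABLE>'
                       '<FIELD name="name" datatype="char" arraysize="*"/>'
                       '<FIELD name="ra" datatype="double"/>'
                       '<FIELD name="dec" datatype="double"/>'
                       f'<DATA><TABLEDATA>{body}</TABLEDATA></DATA>'
                       '</TABLE></RESOURCE></VOTABLE>')
            return Response(votable, mimetype="text/xml")

        if __name__ == "__main__":
            app.run()  # then try e.g. /cone?RA=180.0&DEC=2.5&SR=0.1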

  11. A portable database driven control system for SPEAR

    International Nuclear Information System (INIS)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-01-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database, and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system.

  12. 11 CFR 100.19 - File, filed or filing (2 U.S.C. 434(a)).

    Science.gov (United States)

    2010-01-01

    ... a facsimile machine or by electronic mail if the reporting entity is not required to file..., including electronic reporting entities, may use the Commission's website's on-line program to file 48-hour... the reporting entity is not required to file electronically in accordance with 11 CFR 104.18. [67 FR...

  13. Storage of sparse files using parallel log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-11-07

    A sparse file is stored without holes by storing a data portion of the sparse file using a parallel log-structured file system; and generating an index entry for the data portion, the index entry comprising a logical offset, physical offset and length of the data portion. The holes can be restored to the sparse file upon a reading of the sparse file. The data portion can be stored at a logical end of the sparse file. Additional storage efficiency can optionally be achieved by (i) detecting a write pattern for a plurality of the data portions and generating a single patterned index entry for the plurality of the patterned data portions; and/or (ii) storing the patterned index entries for a plurality of the sparse files in a single directory, wherein each entry in the single directory comprises an identifier of a corresponding sparse file.
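
    The index-entry mechanism is easy to picture in code. The following Python sketch (an illustration of the idea, not the patented implementation) stores two data portions back to back in a packed log and restores the intervening hole as zero bytes on read.

        from dataclasses import dataclass

        @dataclass
        class IndexEntry:
            """One index record as described in the text above."""
            logical_offset: int   # where the data belongs in the sparse file
            physical_offset: int  # where it was packed in the log store
            length: int

        # Hypothetical packed log: data portions back to back, holes omitted.
        log = b"HELLOWORLD"
        index = [IndexEntry(0, 0, 5), IndexEntry(100, 5, 5)]  # 95-byte hole

        def restore(index, log, total_size):
            """Rebuild the sparse file, reintroducing holes as zero bytes."""
            out = bytearray(total_size)  # zero-filled, i.e. the holes
            for e in index:
                out[e.logical_offset:e.logical_offset + e.length] = \
                    log[e.physical_offset:e.physical_offset + e.length]
            return bytes(out)

        restored = restore(index, log, 105)
        assert restored[:5] == b"HELLO" and restored[100:] == b"WORLD"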

  14. ZZ ELAST2, Database of Cross Sections for the Elastic Scattering of Electrons and Positrons by Atoms

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Historical background and information: This database is an extension of the earlier database, 'Elastic Scattering of Electrons and Positrons by Atoms: Database ELAST', Report NISTIR 5188, 1993. Cross sections for the elastic scattering of electrons and positrons by atoms were calculated at energies from 1 keV to 100 MeV. Up to 10 MeV the RELEL code of Riley was used. Above 10 MeV the ELSCAT code was used, which calculates the factored cross sections and evaluates the screening factor Kscr in the WKB approximation. 2 - Application of the data: This database was developed to provide input for transport codes, such as ETRAN, and includes differential cross sections, the total cross section, and the transport cross sections. In addition, a code TRANSX is provided that generates transport cross sections of arbitrary order, needed as input for the calculation of the Goudsmit-Saunderson multiple-scattering angular distribution. 3 - Source and scope of data: The database includes cross sections at 61 energies for electrons and 41 energies for positrons, covering the energy region from 1 keV to 100 MeV. The number of deflection angles included in the database is 314. Total and transport cross sections are also included in this package. The data files have an extension (jjj) that represents the atomic number of the target atom. The database includes auxiliary data files that enable the ELASTIC code to include the following optional modifications: (i) the inclusion of the exchange correction for electron scattering; (ii) the conversion of the cross sections for scattering by free atoms to cross sections for scattering by atoms in solids; (iii) the reduction of the cross sections at large angles and at high energies when the nucleus is treated as an extended rather than a point charge.
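
    The transport cross sections mentioned above are defined by sigma_tr,n = 2*pi * integral of (1 - P_n(cos theta)) (d sigma/d Omega) sin theta d theta over the scattering angle theta, where P_n is the Legendre polynomial of order n. The following Python sketch evaluates this numerically for a hypothetical tabulated differential cross section; a real calculation would read the 314-angle tables from the database instead.

        import numpy as np
        from scipy.special import eval_legendre  # P_n(x)

        # Hypothetical differential cross section dsigma/dOmega(theta) on a
        # grid; the shape below is a screened-Rutherford-like placeholder.
        theta = np.linspace(1e-4, np.pi, 2000)        # radians
        dcs = 1.0 / (1.0 - 0.99 * np.cos(theta))**2

        def transport_xs(n, theta, dcs):
            """n-th order transport cross section (arbitrary units)."""
            mu = np.cos(theta)
            integrand = (1.0 - eval_legendre(n, mu)) * dcs * np.sin(theta)
            return 2.0 * np.pi * np.trapz(integrand, theta)

        total = 2.0 * np.pi * np.trapz(dcs * np.sin(theta), theta)
        print("total:", total, "sigma_tr,1:", transport_xs(1, theta, dcs))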

  15. File Type Identification of File Fragments using Longest Common Subsequence (LCS)

    Science.gov (United States)

    Rahmat, R. F.; Nicholas, F.; Purnamawati, S.; Sitompul, O. S.

    2017-01-01

    A computer forensic analyst is a person in charge of investigation and evidence tracking. In certain cases, a file needed to be presented as digital evidence has been deleted. It is difficult to reconstruct such a file, because it often loses its header and cannot be identified while being restored. Therefore, a method is required for identifying the file type of file fragments. In this research, we propose a Longest Common Subsequence (LCS) method that consists of three steps, namely training, testing and validation, to identify the file type of file fragments. From all testing results we can conclude that our proposed method works well, achieving an accuracy of 92.91% in identifying the file type of file fragments for three data types.
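
    A dynamic-programming LCS length function is the core of such a method. The sketch below is a toy illustration only: the paper's training, testing and validation pipeline is more involved, and the per-type reference fragments here are invented.

        def lcs_length(a: bytes, b: bytes) -> int:
            """Length of the longest common subsequence, O(len(a)*len(b))."""
            prev = [0] * (len(b) + 1)
            for x in a:
                curr = [0]
                for j, y in enumerate(b, 1):
                    curr.append(prev[j - 1] + 1 if x == y
                                else max(prev[j], curr[j - 1]))
                prev = curr
            return prev[-1]

        # Toy classification: score an unknown fragment against one trained
        # representative per file type and pick the best match.
        training = {
            "html": b"<html><head><title>",
            "pdf":  b"%PDF-1.4",
            "jpg":  b"\xff\xd8\xff\xe0\x00\x10JFIF",
        }
        fragment = b"<head><title>records</title>"
        best = max(training, key=lambda t: lcs_length(fragment, training[t]))
        print(best)  # -> 'html'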

  16. 49 CFR 564.5 - Information filing; agency processing of filings.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Information filing; agency processing of filings... HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION REPLACEABLE LIGHT SOURCE INFORMATION (Eff. until 12-01-12) § 564.5 Information filing; agency processing of filings. (a) Each manufacturer...

  17. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of distributed, platform-independent applications, providing a robust set of methods to access databases and to create software components on the server side as well as on the client side. Analyzing the evolution of Java tools for data access, we notice that these tools evolved from simple methods permitting queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow the call of SQL queries to any DBMS (Database Management System). For JDBC, the native driver and the ODBC (Open Database Connectivity)-JDBC bridge, together with the classes and interfaces of the JDBC API, will be described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing the way each step has to be accomplished and the expected results. In each step there are evaluations of the characteristics of the database systems and the way the JDBC programming interface adapts to each one. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them and the SQL types, but also the methods that allow conversion between different types of data through the methods of the ResultSet object. Next, starting from the role of metadata and studying the Java programming interfaces that allow the query of result sets, we will describe the advanced features of data mining with JDBC. As an alternative to result sets, RowSets add new functionalities that
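
    The four steps are easiest to see in running code. As a language-neutral illustration (the article itself targets Java and JDBC), the sketch below walks the same sequence (load a driver module, open a connection, execute SQL, iterate the result set) using Python's built-in sqlite3 module; the correspondence to DriverManager, Statement and ResultSet is noted in comments.

        import sqlite3  # step 1: the driver module

        # Step 2: open the connection (DriverManager.getConnection analogue).
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO t (name) VALUES (?)",
                         [("ada",), ("grace",)])

        # Step 3: execute SQL through a cursor (Statement/ResultSet analogue).
        cur = conn.execute("SELECT id, name FROM t ORDER BY id")

        # Step 4: walk the result set and release resources.
        for row in cur:
            print(row)
        conn.close()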

  18. Cut-and-Paste file-systems: integrating simulators and file systems

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1995-01-01

    We have implemented an integrated and configurable file system called the Pegasus file system (PFS) and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in

  19. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    Science.gov (United States)

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differ in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and the Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distributions were clinically similar. Overall, there was variation in the prevalence of comorbidities and rates of postoperative complications between databases. As an example, NSQIP reported more than twice the obesity prevalence of NIS, and HAC and MED reported more than twice the diabetes prevalence of NSQIP. Rates of deep infection and stroke 30 days after THA differed more than 2-fold between databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. De-identifying a public use microdata file from the Canadian national discharge abstract database

    Directory of Open Access Journals (Sweden)

    Paton David

    2011-08-01

    Full Text Available Abstract Background The Canadian Institute for Health Information (CIHI) collects hospital discharge abstract data (DAD) from Canadian provinces and territories. There are many demands for the disclosure of this data for research and analysis to inform policy making. To expedite the disclosure of data for some of these purposes, the construction of a DAD public use microdata file (PUMF) was considered. Such purposes include: confirming some published results, providing broader feedback to CIHI to improve data quality, training students and fellows, providing an easily accessible data set for researchers to prepare for analyses on the full DAD data set, and serving as a large health data set for computer scientists and statisticians to evaluate analysis and data mining techniques. The objective of this study was to measure the probability of re-identification for records in a PUMF, and to de-identify a national DAD PUMF consisting of 10% of records. Methods Plausible attacks on a PUMF were evaluated. Based on these attacks, the 2008-2009 national DAD was de-identified. A new algorithm was developed to minimize the amount of suppression while maximizing the precision of the data. The acceptable threshold for the probability of correct re-identification of a record was set at between 0.04 and 0.05. Information loss was measured in terms of the extent of suppression and entropy. Results Two different PUMF files were produced, one with geographic information, and one with no geographic information but more clinical information. At a threshold of 0.05, the maximum proportion of records with the diagnosis code suppressed was 20%, but these suppressions represented only 8-9% of all values in the DAD. Our suppression algorithm has less information loss than a more traditional approach to suppression. Smaller regions, patients with longer stays, and age groups that are infrequently admitted to hospitals tend to be the ones with the highest rates of suppression.
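
    The risk model is simple to sketch: under a prosecutor-style attack, the probability of correctly re-identifying a record is 1 over the size of its equivalence class on the quasi-identifiers. The toy Python example below uses invented data; only the idea and the 0.05 threshold come from the paper, and the real algorithm minimizes suppression far more carefully.

        from collections import Counter

        # Toy records: (region, age_group, diagnosis).
        records = ([("east", "30-39", "724.2")] * 25
                   + [("north", "80+", "724.02")] * 3)
        THRESHOLD = 0.05  # max acceptable re-identification probability

        # Equivalence classes over the full set of quasi-identifiers.
        sizes = Counter(records)

        released = []
        for rec in records:
            risk = 1.0 / sizes[rec]           # prosecutor risk: 1/class size
            if risk > THRESHOLD:
                rec = (rec[0], rec[1], None)  # suppress the diagnosis code
            released.append(rec)

        suppressed = sum(1 for r in released if r[2] is None)
        print(f"{suppressed}/{len(released)} records had diagnosis suppressed")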

  1. TaxMan: a taxonomic database manager

    Directory of Open Access Journals (Sweden)

    Blaxter Mark

    2006-12-01

    Full Text Available Abstract Background Phylogenetic analysis of large, multiple-gene datasets, assembled from public sequence databases, is rapidly becoming a popular way to approach difficult phylogenetic problems. Supermatrices (concatenated multiple sequence alignments of multiple genes) can yield more phylogenetic signal than individual genes. However, manually assembling such datasets for a large taxonomic group is time-consuming and error-prone. Additionally, sequence curation, alignment and assessment of the results of phylogenetic analysis are made particularly difficult by the potential for a given gene in a given species to be unrepresented, or to be represented by multiple or partial sequences. We have developed a software package, TaxMan, that largely automates the processes of sequence acquisition, consensus building, alignment and taxon selection to facilitate this type of phylogenetic study. Results TaxMan uses freely available tools to allow rapid assembly, storage and analysis of large, aligned DNA and protein sequence datasets for user-defined sets of species and genes. The user provides GenBank format files and a list of gene names and synonyms for the loci to analyse. Sequences are extracted from the GenBank files on the basis of annotation and sequence similarity. Consensus sequences are built automatically. Alignment is carried out (where possible, at the protein level) and aligned sequences are stored in a database. TaxMan can automatically determine the best subset of taxa to examine phylogeny at a given taxonomic level. By using the stored aligned sequences, large concatenated multiple sequence alignments can be generated rapidly for a subset and output in analysis-ready file formats. Trees resulting from phylogenetic analysis can be stored and compared with a reference taxonomy. Conclusion TaxMan allows rapid automated assembly of a multigene datasets of aligned sequences for large taxonomic groups. By extracting sequences on the basis of
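
    The extraction step that TaxMan automates can be approximated in a few lines with Biopython. The sketch below (hypothetical file name and synonym list) pulls CDS features whose gene or product annotation matches a user-defined locus out of a GenBank-format file.

        from Bio import SeqIO  # Biopython; pip install biopython

        # One locus under several annotation spellings (invented example).
        synonyms = {"COI", "COX1", "CO1"}

        for record in SeqIO.parse("sequences.gb", "genbank"):
            for feature in record.features:
                if feature.type != "CDS":
                    continue
                names = (set(feature.qualifiers.get("gene", []))
                         | set(feature.qualifiers.get("product", [])))
                if names & synonyms:
                    seq = feature.extract(record.seq)
                    print(record.annotations.get("organism"), len(seq))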

  2. Analysis of Landslide Hazard Impact Using the Landslide Database for Germany

    Science.gov (United States)

    Klose, M.; Damm, B.

    2014-12-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventorying of landslide data has a considerable research history in Germany, but one focused on the development of databases with only local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at the national level. The present contribution reports on this project, which is based on a landslide database that has evolved over the last 15 years into a database covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with a broad potential of application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes, but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this contribution, the goals and the research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, followed by the results of different case studies in the German Central Uplands. The case study results exemplify database application in the analysis of vulnerability to landslides, impact statistics, and hazard or cost modeling.

  3. Map Database for Surficial Materials in the Conterminous United States

    Science.gov (United States)

    Soller, David R.; Reheis, Marith C.; Garrity, Christopher P.; Van Sistine, D. R.

    2009-01-01

    The Earth's bedrock is overlain in many places by a loosely compacted and mostly unconsolidated blanket of sediments in which soils commonly are developed. These sediments generally were eroded from underlying rock, and then were transported and deposited. In places, they exceed 1000 ft (300 m) in thickness. Where the sediment blanket is absent, bedrock is either exposed or has been weathered to produce a residual soil. For the conterminous United States, a map by Soller and Reheis (2004, scale 1:5,000,000; http://pubs.usgs.gov/of/2003/of03-275/) shows these sediments and the weathered, residual material; for ease of discussion, these are referred to as 'surficial materials'. That map was produced as a PDF file from an Adobe Illustrator-formatted version of the provisional GIS database. The provisional GIS files were further processed without modifying the content of the published map, and are published here.

  4. Rapid storage and retrieval of genomic intervals from a relational database system using nested containment lists.

    Science.gov (United States)

    Wiley, Laura K; Sivley, R Michael; Bush, William S

    2013-01-01

    Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist.
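
    For readers unfamiliar with the data structure, the following self-contained Python sketch builds a nested containment list from (start, end) intervals and answers an overlap query; it illustrates the NCList idea only and is not the MyNCList implementation, which lives inside MySQL.

        import bisect

        def build_nclist(intervals):
            """Build an NCList from (start, end) pairs. Each node is
            [start, end, children]; containment is made explicit, so within
            any sublist both starts and ends are strictly increasing."""
            order = sorted(intervals, key=lambda iv: (iv[0], -iv[1]))
            top, stack = [], []
            for s, e in order:
                node = [s, e, []]
                while stack and stack[-1][1] < e:  # not inside the stack top
                    stack.pop()
                (stack[-1][2] if stack else top).append(node)
                stack.append(node)
            return top

        def query(nodes, qs, qe, out):
            """Collect all intervals overlapping [qs, qe]."""
            ends = [n[1] for n in nodes]
            i = bisect.bisect_left(ends, qs)  # first node whose end >= qs
            while i < len(nodes) and nodes[i][0] <= qe:
                out.append((nodes[i][0], nodes[i][1]))
                query(nodes[i][2], qs, qe, out)  # recurse into contained ivs
                i += 1
            return out

        top = build_nclist([(1, 10), (2, 5), (6, 9), (8, 20), (15, 18)])
        print(query(top, 7, 9, []))  # -> [(1, 10), (6, 9), (8, 20)]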

  5. Protein - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Data file File name: trypanosome.zip File URL: ftp://ftp.biosciencedbc.jp/archive/trypanosome/LATEST/trypanosome.zip File size: 1.4 KB Simple search URL http://togodb.biosciencedbc.jp/togodb/view/trypanosome#en Data acquisition method - Data analysis method - Number of data entries - Column descriptions (truncated in the source) include the inhibitor of the protein, an OMIM ID (Online Mendelian Inheritance in Man), map (location of the gene on a chromosome or its chromosome number) and pdb (PDB ID)

  6. A thermodynamic reference database for nuclear waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Brendler, V. [Helmholtz-Zentrum Dresden-Rossendorf, Dresden (Germany); Altmaier, M. [Karlsruhe Institute of Technology, Karlsruhe (Germany); Moog, H. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH, Braunschweig (Germany); Voigt, W. [TU Bergakademie Freiberg (Germany); Wilhelm, S. [AF Consult Switzerland AG, Baden (Switzerland)

    2015-07-01

    Safety analysis for a geological repository for radioactive waste, as well as remediation measures for uranium mining and processing legacies, share an essential requirement: a reliable, traceable and accurate assessment of the potential migration of toxic constituents into the biosphere. The respective computational codes require site-independent thermodynamic data concerning aqueous speciation, solubility-limiting solid phases and ion-interaction parameters. Such databases, however, show several constraints: - Incompleteness in terms of major and trace elements - Inconsistencies between species considered and corresponding formation constants - Restricted variation ranges of intensive parameters (temperature, density, pressure) - Limitations with respect to solution compositions (ionic strength). To overcome these limitations to a significant degree, an ambitious database project - THEREDA - was launched in 2006 by institutions leading in the field of safety research for nuclear waste disposal in Germany. The main objective is a centrally administrated and maintained database of verified thermodynamic parameters for environmental applications in general and radiochemical issues in particular. During the last year, the most important milestone was the official release of four more datasets (adding carbonate, An(III), Np(V) and Cs to the hexary system of oceanic salts), all based on the Pitzer model describing the ion-ion interactions. They can all be downloaded as separate files from the project web site www.thereda.de (navigation menu: THEREDA Data Query → Tailored Databases) as generic ASCII type, and in formats specific to the geochemical speciation codes PhreeqC, EQ3/6, ChemApp and Geochemist's Workbench. Moreover, access to data records is now also possible through interactive forms (menu: THEREDA Data Query → Single Data Query // Complex Systems), both with export options as CSV or MS Excel files. Additional releases of thermodynamic data for Th(IV), U(IV) and

  7. A thermodynamic reference database for nuclear waste disposal

    International Nuclear Information System (INIS)

    Brendler, V.; Altmaier, M.; Moog, H.; Voigt, W.; Wilhelm, S.

    2015-01-01

    Safety analysis for a geological repository for radioactive waste, as well as remediation measures for uranium mining and processing legacies, share an essential requirement: a reliable, traceable and accurate assessment of the potential migration of toxic constituents into the biosphere. The respective computational codes require site-independent thermodynamic data concerning aqueous speciation, solubility-limiting solid phases and ion-interaction parameters. Such databases, however, show several constraints: - Incompleteness in terms of major and trace elements - Inconsistencies between species considered and corresponding formation constants - Restricted variation ranges of intensive parameters (temperature, density, pressure) - Limitations with respect to solution compositions (ionic strength). To overcome these limitations to a significant degree, an ambitious database project - THEREDA - was launched in 2006 by institutions leading in the field of safety research for nuclear waste disposal in Germany. The main objective is a centrally administrated and maintained database of verified thermodynamic parameters for environmental applications in general and radiochemical issues in particular. During the last year, the most important milestone was the official release of four more datasets (adding carbonate, An(III), Np(V) and Cs to the hexary system of oceanic salts), all based on the Pitzer model describing the ion-ion interactions. They can all be downloaded as separate files from the project web site www.thereda.de (navigation menu: THEREDA Data Query → Tailored Databases) as generic ASCII type, and in formats specific to the geochemical speciation codes PhreeqC, EQ3/6, ChemApp and Geochemist's Workbench. Moreover, access to data records is now also possible through interactive forms (menu: THEREDA Data Query → Single Data Query // Complex Systems), both with export options as CSV or MS Excel files. Additional releases of thermodynamic data for Th(IV), U(IV) and

  8. Use of the Social Security Administration Death Master File for ascertainment of mortality status

    Directory of Open Access Journals (Sweden)

    Whitcomb Brian W

    2004-03-01

    Full Text Available Abstract Objectives Internet sources that use the Social Security Administration's (SSA) Death Master File have demonstrated high sensitivity among males for detection of mortality status in comparisons to the National Death Index, but the sensitivity has not been investigated for other demographic groups. Methods The authors used the SSA Death Master File to determine the mortality status of 374 decedents from the ongoing Patient Outcomes Study at Cedars-Sinai Medical Center whose deaths were confirmed by physicians using hospital records. Results Decedents identified by the SSA Death Master File were significantly older than those not identified. Foreign-born decedents were significantly less likely to be identified as dead than American-born decedents. Gender and marital status were not significant factors for identification by the SSA Death Master File. Conclusion The results of this study suggest that Internet sources may be used as an inexpensive and effective tool for determination of mortality status. However, among certain populations use of these databases alone may provide incomplete information.

  9. Cut-and-Paste file-systems : integrating simulators and file systems

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.

    1996-01-01

    We have implemented an integrated and configurable file system called the Pegasus file system (PFS) and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in Patsy and when we are

  10. 'Isotopo' a database application for facile analysis and management of mass isotopomer data.

    Science.gov (United States)

    Ahmed, Zeeshan; Zeeshan, Saman; Huber, Claudia; Hensel, Michael; Schomburg, Dietmar; Münch, Richard; Eylert, Eva; Eisenreich, Wolfgang; Dandekar, Thomas

    2014-01-01

    The composition of stable-isotope labelled isotopologues/isotopomers in metabolic products can be measured by mass spectrometry and supports the analysis of pathways and fluxes. As a prerequisite, the original mass spectra have to be processed, managed and stored to rapidly calculate, analyse and compare isotopomer enrichments to study, for instance, bacterial metabolism in infection. For such applications, we provide here the database application 'Isotopo'. This software package includes (i) a database to store and process isotopomer data, (ii) a parser to upload and translate different data formats for such data and (iii) an improved application to process and convert signal intensities from mass spectra of (13)C-labelled metabolites such as tert-butyldimethylsilyl derivatives of amino acids. Relative mass intensities and isotopomer distributions are calculated applying a partial least squares method with iterative refinement for high-precision data. The data output includes formats such as graphs for overall enrichments in amino acids. The package is user-friendly, allowing easy and robust data management of multiple experiments. The 'Isotopo' software is available at the following web link (section Download): http://spp1316.uni-wuerzburg.de/bioinformatics/isotopo/. The package contains three additional files: a software executable setup (installer), one data set file (discussed in this article) and one Excel file (which can be used to convert data from Excel to '.iso' format). The 'Isotopo' software is compatible only with the Microsoft Windows operating system. http://spp1316.uni-wuerzburg.de/bioinformatics/isotopo/. © The Author(s) 2014. Published by Oxford University Press.

  11. Current Challenges in Development of a Database of Three-Dimensional Chemical Structures

    Science.gov (United States)

    Maeda, Miki H.

    2015-01-01

    We are developing a database named 3DMET, a three-dimensional structure database of natural metabolites. There are two major impediments to the creation of 3D chemical structures from a set of planar structure drawings: the limited accuracy of computer programs and insufficient human resources for manual curation. We have tested some 2D–3D converters to convert 2D structure files from external databases. These automatic conversion processes yielded an excessive number of improper conversions. To ascertain the quality of the conversions, we compared IUPAC International Chemical Identifier (InChI) and canonical SMILES notations before and after conversion. Structures whose notations correspond to each other were regarded as correct conversions in our present work. We found that chiral inversion is the most serious problem arising during improper conversion. At the current stage of our database construction, published books and articles have been the resources for additions to our database. Chemicals are usually drawn as pictures on paper. To save human resources, an optical structure reader was introduced. The program was quite useful, but some particular errors were observed during our operation. We hope our trials at producing correct 3D structures will help other developers of chemical programs and curators of chemical databases. PMID:26075200
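
    The curation check described above (comparing canonical identifiers before and after conversion) can be reproduced with RDKit, as in the hedged Python sketch below; RDKit stands in for the 2D-3D converters the authors tested, and the test molecule is an arbitrary chirality-sensitive example.

        from rdkit import Chem
        from rdkit.Chem import AllChem  # pip install rdkit

        def embed_and_check(smiles):
            """Generate a 3D conformer, then verify that a canonical
            identifier computed from the 3D structure matches the original;
            a mismatch flags an improper conversion such as chiral
            inversion, the most serious problem the paper reports."""
            ref = Chem.MolToSmiles(Chem.MolFromSmiles(smiles))  # canonical
            mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
            AllChem.EmbedMolecule(mol, randomSeed=7)   # 2D -> 3D coordinates
            AllChem.MMFFOptimizeMolecule(mol)          # light geometry clean-up
            Chem.AssignStereochemistryFrom3D(mol)      # re-perceive chirality
            got = Chem.MolToSmiles(Chem.RemoveHs(mol))
            return got == ref, mol

        # (R)-limonene as a chirality-sensitive test case.
        ok, mol3d = embed_and_check("CC(=C)[C@@H]1CCC(C)=CC1")
        print("conversion preserved the structure:", ok)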

  12. Development of a quality assurance safety assessment database for near surface radioactive waste disposal

    International Nuclear Information System (INIS)

    Park, J. W.; Kim, C. L.; Park, J. B.; Lee, E. Y.; Lee, Y. M.; Kang, C. H.; Zhou, W.; Kozak, M. W.

    2003-01-01

    A quality assurance safety assessment database, called QUARK (QUality Assurance program for Radioactive waste management in Korea), has been developed to manage both analysis information and a parameter database for safety assessment of the Low- and Intermediate-Level radioactive Waste (LILW) disposal facility in Korea. QUARK is a tool that serves QA purposes by managing safety assessment information properly and securely. In QUARK, the information is organized and linked to maximize information integrity and traceability. QUARK provides guidance for conducting safety assessment analysis, from scenario generation to result analysis, and provides a window to inspect and trace previous safety assessment analyses and parameter values. QUARK also provides a default database for safety assessment staff who construct input data files using SAGE (Safety Assessment Groundwater Evaluation), a safety assessment computer code.

  13. E-FUSRAP: AUTOMATING THE CASE FILE FOR THE FORMERLY UTILIZED SITES REMEDIAL ACTION PROGRAM

    Energy Technology Data Exchange (ETDEWEB)

    Mackenzie, D.; Marshall, K.

    2003-02-27

    The Department of Energy's (DOE) Office of Site Closure, EM-30, houses the document library pertaining to sites that are related to the Formerly Utilized Sites Remedial Action Program (FUSRAP) and regularly addresses ongoing information demands, primarily from Freedom of Information Act (FOIA) requests, interested members of the public, the DOE, and other Federal Agencies. To address these demands more efficiently, DOE has begun to implement a new multi-phase, information management process known as e-FUSRAP. The first phase of e-FUSRAP, the development of the Considered Sites Database, summarizes and allows public access to complex information on over 600 sites considered as candidates for FUSRAP. The second phase of e-FUSRAP, the development of the Document Indexing Database, will create an internal index of more than 10,000 documents in the FUSRAP library's case file, allowing more effective management and retrieval of case file documents. Together, the phases of e-FUSRAP will allow EM-30 to become an innovative leader in enhancing public information sources.

  14. A human friendly reporting and database system for brain PET analysis

    International Nuclear Information System (INIS)

    Jamzad, M.; Ishii, Kenji; Toyama, Hinako; Senda, Michio

    1996-01-01

    We have developed a human friendly reporting and database system for clinical brain PET (Positron Emission Tomography) scans, which enables statistical data analysis on qualitative information obtained from image interpretation. Our system consists of a Brain PET Data (Input) Tool and a Report Writing Tool. In the Brain PET Data Tool, findings and interpretations are input by selecting menu icons in a window panel instead of writing free text. This method of input enables on-line data entry into and update of the database by means of pre-defined consistent words, which facilitates statistical data analysis. The Report Writing Tool generates a one-page report of natural English sentences semi-automatically by using the above input information and the patient information obtained from our PET center's main database. It also has a keyword selection function from the report text so that we can save a set of keywords in the database for further analysis. By means of this system, we can store the data related to patient information and visual interpretation of the PET examination while writing clinical reports in daily work. The database files in our system can be accessed by means of commercially available databases. We have used the 4th Dimension database that runs on a Macintosh computer and analyzed 95 cases of 18F-FDG brain PET studies. The results showed high specificity of parietal hypometabolism for Alzheimer's patients. (author)

  15. CQL: a database in smart card for health care applications.

    Science.gov (United States)

    Paradinas, P C; Dufresnes, E; Vandewalle, J J

    1995-01-01

    The CQL-Card is the first smart card in the world to use Database Management System (DBMS) concepts. The CQL-Card is particularly suited to a portable file in health applications, where the information is required by many different partners, such as health insurance organizations, emergency services, and General Practitioners. All the information required by these different partners can be shared with independent security mechanisms. Database engine functions are carried out by the card, which manages tables, views, and dictionaries. Medical information is stored in tables, and views are logical and dynamic subsets of tables. For owner-partners like an MIS (Medical Information System), it is possible to grant privileges (select, insert, update, and delete on a table or view) to other partners. Furthermore, dictionaries are structures that contain the required descriptions and allow adaptation to computer environments. Health information held in the CQL-Card is accessed using CQL (Card Query Language), a high-level database query language which is a subset of the standard SQL (Structured Query Language). With this language, the CQL-Card can be easily integrated into Medical Information Systems.

  16. File sharing

    NARCIS (Netherlands)

    van Eijk, N.

    2011-01-01

    'File sharing' has become generally accepted on the Internet. Users share files for downloading music, films, games, software, etc. In this note, we have a closer look at the definition of file sharing, the legal and policy-based context as well as enforcement issues. The economic and cultural

  17. Report from the 6th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Daniel Liwei Wang

    2013-05-01

    Full Text Available Petascale data management and analysis remain among the main unresolved challenges in today's computing. The 6th Extremely Large Databases workshop was convened alongside the XLDB conference to discuss these challenges in the health care, biology, and natural resources communities. The role of cloud computing, the dominance of file-based solutions in science applications, in-situ and predictive analysis, and commercial software use in academic environments were also discussed in depth. This paper summarizes the discussions of this workshop.

  18. TIGER/Line Shapefile, 2013, Series Information File for the 2010 Census 5-Digit ZIP Code Tabulation Area (ZCTA5) National Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  19. TIGER/Line Shapefile, 2012, Series Information File for the nation, Current American Indian/Alaska Native/Native Hawaiian Areas (AIANNH) National Shapefile

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  20. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL.

    Science.gov (United States)

    Laukens, Kris; Hollunder, Jens; Dang, Thanh Hai; De Jaeger, Geert; Kuiper, Martin; Witters, Erwin; Verschoren, Alain; Van Leemput, Koenraad

    2010-07-01

    Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges and their attributes can be imported in Cytoscape in various file formats, or directly from external databases through specialized third party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. A new Cytoscape plugin 'CytoSQL' is developed to connect Cytoscape to any relational database. It allows users to launch SQL ('Structured Query Language') queries from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data to Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and the power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. CytoSQL offers a unified approach to let Cytoscape interact with relational databases. Thanks to the power of the SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL.
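    A minimal sketch of the core idea, using Python's sqlite3 in place of the Java plugin and a hypothetical interaction table: a feature of a selected node is injected as a parameterized SQL argument, and the returned rows can become new network edges.

```python
import sqlite3

conn = sqlite3.connect("interactions.db")  # hypothetical local database
conn.execute("CREATE TABLE IF NOT EXISTS interaction "
             "(source TEXT, target TEXT, score REAL)")

def expand_node(node_id, min_score):
    """Inject a node feature as an SQL argument and fetch candidate
    edges, mimicking CytoSQL's query-driven network enrichment."""
    cur = conn.execute(
        "SELECT source, target, score FROM interaction "
        "WHERE (source = ? OR target = ?) AND score >= ?",
        (node_id, node_id, min_score))
    return cur.fetchall()  # each row can become an edge with attributes

for src, dst, score in expand_node("P38398", 0.7):
    print(src, dst, score)
```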

  1. The landslide database for Germany: Closing the gap at national level

    Science.gov (United States)

    Damm, Bodo; Klose, Martin

    2015-11-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventory of landslide data has a long research history in Germany, but one focussed on the development of databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at national level. The present paper reports on this project, which is based on a landslide database that evolved over the last 15 years into one covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with a broad potential of application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes, but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this paper, the goals and the research strategy of the database project are highlighted at first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, which is followed by the results of three case studies in the German Central Uplands. The case study results exemplify database application in the analysis of landslide frequency and causes, impact statistics, and landslide susceptibility modeling. Using the example of these case studies, strengths and weaknesses of the database are discussed in detail. The paper concludes with a summary of the database project with regard to previous

  2. Concierge: Personal database software for managing digital research resources

    Directory of Open Access Journals (Sweden)

    Hiroyuki Sakai

    2007-11-01

    Full Text Available This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp).

  3. A comparison of different database technologies for the CMS AsyncStageOut transfer database

    Science.gov (United States)

    Ciangottini, D.; Balcas, J.; Mascheroni, M.; Rupeika, E. A.; Vaandering, E.; Riahi, H.; Silva, J. M. D.; Hernandez, J. M.; Belforte, S.; Ivanov, T. T.

    2017-10-01

    AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) for internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers has constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system's scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.

  4. A Comparison of Different Database Technologies for the CMS AsyncStageOut Transfer Database

    Energy Technology Data Exchange (ETDEWEB)

    Ciangottini, D. [INFN, Perugia; Balcas, J. [Caltech; Mascheroni, M. [Fermilab; Rupeika, E. A. [Vilnius U.; Vaandering, E. [Fermilab; Riahi, H. [CERN; Silva, J. M.D. [Sao Paulo, IFT; Hernandez, J. M. [Madrid, CIEMAT; Belforte, S. [INFN, Trieste; Ivanov, T. T. [Sofiya U.

    2017-11-22

    AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) for internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers has constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system's scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.

  5. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: database name, Trypanosomes Database; maintained by the National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organism coverage: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). External links include PDB (Protein Data Bank), the KEGG PATHWAY Database and DrugPort. An entry list and query search are available.

  6. The consequences of the Chernobyl accident: REDAC, the radioecological database of the French-German Initiative

    Energy Technology Data Exchange (ETDEWEB)

    Deville-Cavelin, G. [Institut de Radioprotection et de Surete Nucleaire, IRSN, BP 17, 92262 Fontenay-aux-Roses Cedex (France); Biesold, H. [Gesellschaft fuer Anlagen- und Reaktorsicherheit, GRS, mbH, Schwertnergasse 1, 50667 Koeln (Germany); Chabanyuk, V. [Intelligence Systems GEO, Chernobyl Centre for Nuclear Safety, Radioactive Wastes and Radioecology (Ukraine)

    2005-07-01

    methodology made use of the following main portlets and DDB: GlobalFunctions - interconnection between portlets; ContentTree - access to the content of REDAC; LibraryLocator - shows the location in the library; Index - search of documents by keywords; Search - search of documents according to chosen properties; Favorites - generation of sets of the most often used files; Briefcase - for downloading documents to the user's computer; MetaView - shows the metadata characterizing the files; DocView - displays the file loaded from the web server; ProductRelations, ActiveRelations, AllRelations - show relations between the selected document and other associated documents; Glossary - global project glossary based on thematic ones. The following conclusions are highlighted: REDAC is a powerful and useful radioecological tool: - All elements are easily accessible through the original tool, ProSF, developed by IS Geo; - Relations are constructed between the documents (files, databases, documentation, reports, etc.); - All elements are structured by meta-information; - Mechanisms of search; - A global radioecological glossary; - Spatial data are geo-coded; - Processes, tools and methodology are suitable for similar projects; - Data are useful for scientific studies, modelling, operational purposes, and communication with mass media. As prospects, the addition of functionality, support and maintenance are pointed out, as well as a strong integration implying thematic integration (merging of all databases into a unique one) and information integration ('strong integration' and information support)

  7. Image storage, cataloguing and retrieval using a personal computer database software application

    International Nuclear Information System (INIS)

    Lewis, G.; Howman-Giles, R.

    1999-01-01

    Full text: Interesting images and cases are collected and collated by most nuclear medicine practitioners throughout the world. Changing imaging technology has altered the way in which images may be presented and are reported, with less reliance on 'hard copy' for both reporting and archiving purposes. Digital image generation and storage is rapidly replacing film in both radiological and nuclear medicine practice. An interesting-case filing system based on personal computer database software is described and demonstrated. The digital image storage format allows instant access to both case information (e.g. history and examination, scan report or teaching point) and the relevant images. The database design allows rapid selection of cases and images appropriate to a particular diagnosis, scan type, age or other search criteria. Correlative X-ray, CT, MRI and ultrasound images can also be stored and accessed. The application is in use at The New Children's Hospital as an aid to postgraduate medical education, with new cases being regularly added to the database.

  8. The SAMGrid database server component: its upgraded infrastructure and future development path

    International Nuclear Information System (INIS)

    Loebel-Carpenter, L.; White, S.; Baranovski, A.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; Burgon-Lyon, M.; St Denis, R.; Belforte, S.; Kerzel, U.; Bartsch, V.; Leslie, M.

    2004-01-01

    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes required for the unified metadata catalog has warranted a complete redesign of the DB Server. We describe here the architecture and features of the new server. In particular, we discuss the new CORBA infrastructure that utilizes python wrapper classes around IDL structs and exceptions. Such infrastructure allows us to use the same code on both server and client sides, which in turn results in significantly improved code maintainability and easier development. We also discuss future integration of the new server with an SBIR II project which is directed toward allowing the DB Server to access distributed databases, implemented in different DB systems and possibly using different schema

  9. RADARS, a bioinformatics solution that automates proteome mass spectral analysis, optimises protein identification, and archives data in a relational database.

    Science.gov (United States)

    Field, Helen I; Fenyö, David; Beavis, Ronald C

    2002-01-01

    RADARS, a rapid, automated, data archiving and retrieval software system for high-throughput proteomic mass spectral data processing and storage, is described. The majority of mass spectrometer data files are compatible with RADARS, for consistent processing. The system automatically takes unprocessed data files, identifies proteins via in silico database searching, then stores the processed data and search results in a relational database suitable for customized reporting. The system is robust, used in 24/7 operation, accessible to multiple users of an intranet through a web browser, may be monitored by Virtual Private Network, and is secure. RADARS is scalable for use on one or many computers, and is suited to multiple-processor systems. It can incorporate any local database in FASTA format, and can search protein and DNA databases online. A key feature is a suite of visualisation tools (many available gratis), allowing facile manipulation of spectra by hand annotation, reanalysis, and access to all procedures. We also describe the use of Sonar MS/MS, a novel, rapid search engine requiring 40 MB RAM per process for searches against a genomic or EST database translated in all six reading frames. RADARS reduces the cost of analysis through its efficient algorithms: Sonar MS/MS can identify proteins without accurate knowledge of the parent ion mass and without protein tags. Statistical scoring methods provide close-to-expert accuracy and bring robust data analysis to the non-expert user.

  10. Access to digital library databases in higher education: design problems and infrastructural gaps.

    Science.gov (United States)

    Oswal, Sushil K

    2014-01-01

    After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases which have thus far primarily depended on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low vision devices. This article aims at producing a detailed description of the difficulties confronted by blind screen reader users with online library databases which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual, blind screen reader users employing both qualitative and computerized research tools can yield meaningful data for the designers and developers to improve these databases to a level that they begin to provide an equal access to the blind.

  11. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: database name, SKIP Stemcell Database; maintained by the Center for Medical Genetics, School of Medicine, Keio University. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism: Homo sapiens (Taxonomy ID: 9606). Web services and user registration are not available.

  12. Renewal-anomalous-heterogeneous files

    International Nuclear Information System (INIS)

    Flomenbom, Ophir

    2010-01-01

    Renewal-anomalous-heterogeneous files are solved. A simple file is made of Brownian hard spheres that diffuse stochastically in an effective 1D channel. Generally, Brownian files are heterogeneous: the spheres' diffusion coefficients are distributed and the initial spheres' density is non-uniform. In renewal-anomalous files, the distribution of waiting times for individual jumps is not exponential as in Brownian files, yet obeys $\psi_\alpha(t) \sim t^{-1-\alpha}$, $0 < \alpha < 1$. The mean square displacement (MSD) of a tagged particle, $\langle r^2 \rangle$, obeys $\langle r^2 \rangle \sim \left(\langle r^2 \rangle_{\mathrm{nrml}}\right)^{\alpha}$, where $\langle r^2 \rangle_{\mathrm{nrml}}$ is the MSD in the corresponding Brownian file. This scaling is an outcome of an exact relation (derived here) connecting probability density functions of Brownian files and renewal-anomalous files. It is also shown that non-renewal-anomalous files are slower than the corresponding renewal ones.

  13. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: database name, Arabidopsis Phenome Database; maintained by the RIKEN BioResource Center (Hiroshi Masuya). Database classification: plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). The new Arabidopsis Phenome Database integrates two novel databases covering the Arabidopsis thaliana phenome and its effective application: one offering useful materials for experimental research, the other the "Database of Curated Plant Phenome".

  14. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    Science.gov (United States)

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

    In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair that does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations. The system was written in the PHP language as an extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise and structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
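    A toy sketch of the principle, assuming a hypothetical field layout: the free-text part of a page is left alone, while structured 'field: value' lines are string-searched like a relational selection.

```python
import re

# A half-formatted wiki page: free text plus structured "field: value" lines.
page = """
'''Quercetin''' is a flavonol found in many plants.
| formula: C15H10O7
| species: Allium cepa
| species: Camellia sinensis
"""

def select(text, field, pattern):
    """String-search command acting like a relational selection:
    return values of `field` whose content matches `pattern`."""
    values = re.findall(r"^\|\s*%s:\s*(.+)$" % re.escape(field),
                        text, flags=re.MULTILINE)
    return [v for v in values if re.search(pattern, v)]

print(select(page, "species", "Allium"))  # ['Allium cepa']
```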

  15. PC Graphic file programing

    International Nuclear Information System (INIS)

    Yang, Jin Seok

    1993-04-01

    This book provides basic graphics knowledge and describes the understanding and implementation of graphic file formats. The first part deals with graphic data, storage and compression of graphic data, and programming topics such as assembly, the stack, compiling and linking of programs, and practice and debugging. The next part covers graphic file formats such as the MacPaint, GEM/IMG, PCX, GIF, and TIFF files, hardware considerations such as monochrome and high-speed color screen drivers, the basic concept of dithering, and conversion between formats.

  16. The WONP-NURT corpus as nuclear knowledge base for text mining in the INIS database

    International Nuclear Information System (INIS)

    Guerra Valdes, R.

    2011-01-01

    In the present work the WONP-NURT corpus is taken as a knowledge base for text mining in the INIS database. Main components of the information processing system, as well as computational methods for content analysis of INIS database record files, are described. Results of the content analysis of the WONP-NURT corpus are reported. Furthermore, results of two comparative text mining studies in the INIS database are also shown. The first one explores 10 research areas in the more familiar nearest range of the WONP-NURT corpus, while the second one surveys 15 regions in the more exotic far range. The results provide new elements to assess the significance of the WONP-NURT corpus in the context of the current state of nuclear science and technology research areas. (Author)

  17. TIGER/Line Shapefile, 2012, Series Information File for the 2010 nation, 2010 Census 5-Digit ZIP Code Tabulation Area (ZCTA5) National

    Data.gov (United States)

    US Census Bureau, Department of Commerce — The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master...

  18. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.

    2006-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they

  19. Experiences on File Systems: Which is the best file system for you?

    CERN Document Server

    Blomer, J

    2015-01-01

    The distributed file system landscape is scattered. Besides a plethora of research file systems, there is also a large number of production grade file systems with various strengths and weaknesses. The file system, as an abstraction of permanent storage, is appealing because it provides application portability and integration with legacy and third-party applications, including UNIX utilities. On the other hand, the general and simple file system interface makes it notoriously difficult for a distributed file system to perform well under a variety of different workloads. This contribution provides a taxonomy of commonly used distributed file systems and points out areas of research and development that are particularly important for high-energy physics.

  20. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database.

    Science.gov (United States)

    Scofield, Patricia A; Smith, Linda L; Johnson, David N

    2017-07-01

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. This database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.

  1. Monitoring of small laboratory animal experiments by a designated web-based database.

    Science.gov (United States)

    Frenzel, T; Grohmann, C; Schumacher, U; Krüll, A

    2015-10-01

    Multiple-parametric small animal experiments require, by their very nature, a sufficient number of animals, which may need to be large to obtain statistically significant results. For this reason database-related systems are required to collect the experimental data as well as to support the later (re-)analysis of the information gained during the experiments. In particular, the monitoring of animal welfare is simplified by the inclusion of warning signals (for instance, loss in body weight >20%). Digital patient charts have been developed for human patients but are usually not able to fulfill the specific needs of animal experimentation. To address this problem a unique web-based monitoring system using standard MySQL, PHP, and nginx has been created. PHP was used to create the HTML-based user interface and outputs in a variety of proprietary file formats, namely portable document format (PDF) or spreadsheet files. This article demonstrates its fundamental features and the easy and secure access it offers to the data from any place using a web browser. This information will help other researchers create their own individual databases in a similar way. The use of QR codes plays an important role for stress-free use of the database. We demonstrate a way to easily identify all animals and samples and data collected during the experiments. Specific ways to record animal irradiations and chemotherapy applications are shown. This new analysis tool allows the effective and detailed analysis of huge amounts of data collected through small animal experiments. It supports proper statistical evaluation of the data and provides excellent retrievable data storage. © The Author(s) 2015.
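    The body-weight warning mentioned above (loss >20%) reduces to a one-line rule; a minimal sketch, with gram units and the threshold parameter assumed:

```python
def weight_alarm(baseline_g, current_g, threshold=0.20):
    """Return True when relative body-weight loss exceeds the threshold
    (the >20% welfare criterion mentioned above)."""
    loss = (baseline_g - current_g) / baseline_g
    return loss > threshold

assert weight_alarm(25.0, 19.5)      # 22% loss -> warn
assert not weight_alarm(25.0, 21.0)  # 16% loss -> ok
```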

  2. Storing files in a parallel computing system using list-based index to identify replica files

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Zhang, Zhenhua; Grider, Gary

    2015-07-21

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
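    A compact sketch of such a list-based index in Python (hashlib's SHA-256 standing in for whatever checksum the patent envisions; all names and paths are illustrative):

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ReplicaIndex:
    """List-based index for one file: storage locations of the file and
    its replicas (primary first) plus a checksum over the content."""
    locations: list = field(default_factory=list)
    checksum: str = ""

def sha256_of(path):
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

def valid_copies(index):
    """Evaluate the checksum against every stored copy and keep only
    the locations whose content still matches."""
    return [p for p in index.locations if sha256_of(p) == index.checksum]

# Hypothetical usage on a parallel file system:
# idx = ReplicaIndex(["/pfs/n1/data.bin", "/pfs/n2/data.bin"],
#                    checksum=sha256_of("/pfs/n1/data.bin"))
# print(valid_copies(idx))
```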

  3. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Full Text Available Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This process imposes massive and unnecessary communication and workload on the service: large rasters are downloaded from flat-file raster data sources each time a request is made, and a heavy integration and geo-processing workload falls on the service middleware that could actually be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework or model for integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level, and we show how this can be integrated in the Sensor Observation Service (SOS) to reduce communication and massive workload on the Geospatial Web Services, as well as to make query requests from the user end a lot more flexible.
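    A sketch of what 'fusion at the database level' can look like with psycopg2 and PostGIS raster functions; the connection string, table and column names are assumptions:

```python
import psycopg2

# Illustrative schema: a raster table of remote observations and a
# point table of in-situ sensors; both names are assumptions.
conn = psycopg2.connect("dbname=sensors")

def fused_readings(parameter):
    """Sample the raster at each in-situ sensor location inside the
    database, so only the fused result crosses the network."""
    with conn.cursor() as cur:
        cur.execute("""
            SELECT s.sensor_id, s.value,
                   ST_Value(r.rast, s.geom) AS remote_value
            FROM insitu_obs s
            JOIN remote_rasters r ON ST_Intersects(r.rast, s.geom)
            WHERE s.parameter = %s
        """, (parameter,))
        return cur.fetchall()
```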

  4. TRANSNET -- access to radioactive and hazardous materials transportation codes and databases

    International Nuclear Information System (INIS)

    Cashwell, J.W.

    1992-01-01

    TRANSNET has been developed and maintained by Sandia National Laboratories under the sponsorship of the United States Department of Energy (DOE) Office of Environmental Restoration and Waste Management to permit outside access to computerized routing, risk and systems analysis models, and associated databases. The goal of the TRANSNET system is to enable transfer of transportation analytical methods and data to qualified users by permitting direct, timely access to the up-to-date versions of the codes and data. The TRANSNET facility comprises a dedicated computer with telephone ports on which these codes and databases are adapted, modified, and maintained. To permit the widest spectrum of outside users, TRANSNET is designed to minimize hardware and documentation requirements. The user is thus required to have an IBM-compatible personal computer, Hayes-compatible modem with communications software, and a telephone. Maintenance and operation of the TRANSNET facility are underwritten by the program sponsor(s) as are updates to the respective models and data, thus the only charges to the user of the system are telephone hookup charges. TRANSNET provides access to the most recent versions of the models and data developed by or for Sandia National Laboratories. Code modifications that have been made since the last published documentation are noted to the user on the introductory screens. User friendly interfaces have been developed for each of the codes and databases on TRANSNET. In addition, users are provided with default input data sets for typical problems which can either be used directly or edited. Direct transfers of analytical or data files between codes are provided to permit the user to perform complex analyses with a minimum of input. Recent developments to the TRANSNET system include use of the system to directly pass data files between both national and international users as well as development and integration of graphical depiction techniques

  5. The Ruby UCSC API: accessing the UCSC genome database using Ruby.

    Science.gov (United States)

    Mishima, Hiroyuki; Aerts, Jan; Katayama, Toshiaki; Bonnal, Raoul J P; Yoshiura, Koh-ichiro

    2012-09-21

    The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index, if available, when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will make it easy for biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/.
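    The same style of interval query can be illustrated in Python against UCSC's public MySQL mirror (the host name, credentials, and the BRCA1 coordinates below are quoted from memory of UCSC's documentation and should be double-checked); unlike the Ruby API, this naive form does not exploit the bin index:

```python
import pymysql

# Public read-only mirror as documented by UCSC (assumption: defaults
# unchanged); hg38 is the human genome database.
conn = pymysql.connect(host="genome-mysql.soe.ucsc.edu",
                       user="genome", database="hg38")

def genes_in_interval(chrom, start, end):
    """Naive overlap query on the refGene track; the Ruby API would
    additionally use the bin index to avoid a full-range scan."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT name2, txStart, txEnd FROM refGene "
            "WHERE chrom = %s AND txStart < %s AND txEnd > %s",
            (chrom, end, start))
        return cur.fetchall()

print(genes_in_interval("chr17", 43_044_295, 43_125_483))  # BRCA1 locus (hg38)
```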

  6. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    Science.gov (United States)

    2012-01-01

    Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index, if available, when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will make it easy for biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/. PMID:22994508

  7. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    Directory of Open Access Journals (Sweden)

    Mishima Hiroyuki

    2012-09-01

    Full Text Available Abstract Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index, if available, when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will make it easy for biologists to query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/.

  8. The Master Lens Database and The Orphan Lenses Project

    Science.gov (United States)

    Moustakas, Leonidas

    2012-10-01

    Strong gravitational lenses are uniquely suited for the study of dark matter structure and substructure within massive halos of many scales, act as gravitational telescopes for distant faint objects, and can give powerful and competitive cosmological constraints. While hundreds of strong lenses are known to date, spanning five orders of magnitude in mass scale, thousands will be identified this decade. To fully exploit the power of these objects presently, and in the near future, we are creating the Master Lens Database. This is a clearinghouse of all known strong lens systems, with a sophisticated and modern database of uniformly measured and derived observational and lens-model derived quantities, using archival Hubble data across several instruments. This Database enables new science that can be done with a comprehensive sample of strong lenses. The operational goal of this proposal is to develop the process and the code to semi-automatically stage Hubble data of each system, create appropriate masks of the lensing objects and lensing features, and derive gravitational lens models, to provide a uniform and fairly comprehensive information set that is ingested into the Database. The scientific goal for this team is to use the properties of the ensemble of lenses to make a new study of the internal structure of lensing galaxies, and to identify new objects that show evidence of strong substructure lensing, for follow-up study. All data, scripts, masks, model setup files, and derived parameters, will be public, and free. The Database will be accessible online and through a sophisticated smartphone application, which will also be free.

  9. The ChArMEx database

    Science.gov (United States)

    Ferré, Hélène; Descloitres, Jacques; Fleury, Laurence; Boichard, Jean-Luc; Brissebrat, Guillaume; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2013-04-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, the distribution system and services such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between the OMP and ICARE data centres and falls within the scope of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - Forms to document observations or products that will be provided to the database in compliance with international metadata standards (ISO 19115-19139; INSPIRE; Global Change Master Directory Thesaurus). - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - A shopping-cart web interface to order in situ data files. At present datasets from the background monitoring station of Ersa, Cape Corsica and from the 2012 ChArMEx pre-campaign are available. - A user-friendly access to satellite products

  10. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node in the hexahedra stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
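    A minimal incidence-graph sketch in Python of the operations listed above (iterate over cells, reach incident cells, attach coordinates and data fields); this is an illustration of the general idea, not the ImG-Complexes model:

```python
from collections import defaultdict

class Mesh:
    """Minimal incidence-graph mesh: cells and vertices are glued by
    explicit incidence relationships."""
    def __init__(self):
        self.coords = {}                  # vertex id -> (x, y, z)
        self.fields = defaultdict(dict)   # cell/vertex id -> {name: value}
        self.incident = defaultdict(set)  # id -> incident ids

    def add_vertex(self, v, xyz):
        self.coords[v] = xyz

    def add_cell(self, c, vertices):
        for v in vertices:                # glue the cell to its vertices
            self.incident[c].add(v)
            self.incident[v].add(c)

    def neighbours(self, c):
        """Cells incident to cell c through a shared vertex."""
        return {n for v in self.incident[c]
                  for n in self.incident[v] if n != c}

m = Mesh()
for v, p in enumerate([(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 0, 0)]):
    m.add_vertex(v, p)
m.add_cell("t0", [0, 1, 2])
m.add_cell("t1", [1, 3, 2])
m.fields["t0"]["velocity"] = 3.2
print(m.neighbours("t0"))                 # {'t1'}
```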

  11. Global Ocean Surface Water Partial Pressure of CO2 Database: Measurements Performed During 1968-2007 (Version 2007)

    Energy Technology Data Exchange (ETDEWEB)

    Kozyr, Alex [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Carbon Dioxide Information Analysis Center

    2008-09-30

    More than 4.1 million measurements of surface water partial pressure of CO2 obtained over the global oceans during 1968-2007 are listed in the Lamont-Doherty Earth Observatory (LDEO) database, which includes open ocean and coastal water measurements. The data assembled include only those measured by equilibrator-CO2 analyzer systems and have been quality-controlled based on the stability of the system performance, the reliability of calibrations for CO2 analysis, and the internal consistency of data. To allow re-examination of the data in the future, a number of measured parameters relevant to pCO2 measurements are listed. The overall uncertainty for the pCO2 values listed is estimated to be ± 2.5 µatm on average. For simplicity and for ease of reference, this version is referred to as 2007, meaning that data collected through 31 December 2007 have been included. It is our intention to update this database annually. There are 37 new cruise/ship files in this update. In addition, some editing has been performed on existing files, so this should be considered a V2007 file. We have also added a column reporting the partial pressure of CO2 in seawater in units of pascals. The data presented in this database include the analyses of partial pressure of CO2 (pCO2), sea surface temperature (SST), sea surface salinity (SSS), pressure of the equilibration, and barometric pressure in the outside air from the ship's observation system. The global pCO2 data set is available free of charge as a numeric data package (NDP) from the Carbon Dioxide Information Analysis Center (CDIAC). The NDP consists of the oceanographic data files and this printed documentation, which describes the procedures and methods used to obtain the data.
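    The added pascal column is a fixed unit conversion; a one-line sketch (using the exact definition 1 atm = 101325 Pa):

```python
ATM_PA = 101325.0  # 1 atm in pascals (exact by definition)

def uatm_to_pa(p_uatm):
    """Convert a pCO2 value from microatmospheres to pascals,
    matching the extra Pa column added in V2007."""
    return p_uatm * ATM_PA * 1e-6

print(uatm_to_pa(380.0))  # ~38.5 Pa for a typical surface-water pCO2
```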

  12. The new NIST atomic spectra database

    International Nuclear Information System (INIS)

    Kelleher, D.E.; Martin, W.C.; Wiese, W.L.; Sugar, J.; Fuhr, J.R.; Olsen, K.; Musgrove, A.; Mohr, P.J.; Reader, J.; Dalton, G.R.

    1999-01-01

    The new atomic spectra database (ASD), Version 2.0, of the National Institute of Standards and Technology (NIST) contains significantly more data and covers a wider range of atomic and ionic transitions and energy levels than earlier versions. All data are integrated. It also has a new user interface and search engine. ASD contains spectral reference data which have been critically evaluated and compiled by NIST. Version 2.0 contains data on 900 spectra, with about 70000 energy levels and 91000 lines ranging from about 1 Å to 200 micrometers, roughly half of which have transition probabilities with estimated uncertainties. References to the NIST compilations and original data sources are listed in the ASD bibliography. A detailed 'Help' file serves as a user's manual, and full search and filter capabilities are provided. (orig.)

  13. PCF File Format.

    Energy Technology Data Exchange (ETDEWEB)

    Thoreson, Gregory G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. It is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. It can contain multiple spectra and information about each spectrum such as energy calibration. This document outlines the format of the file that would allow one to write a computer program to parse and write such files.

  14. Toward public volume database management: a case study of NOVA, the National Online Volumetric Archive

    Science.gov (United States)

    Fletcher, Alex; Yoo, Terry S.

    2004-04-01

    Public databases today can be constructed with a wide variety of authoring and management structures. The widespread appeal of Internet search engines suggests that public information be made open and available to common search strategies, making accessible information that would otherwise be hidden by the infrastructure and software interfaces of a traditional database management system. We present the construction and organizational details for managing NOVA, the National Online Volumetric Archive. As an archival effort of the Visible Human Project for supporting medical visualization research, archiving 3D multimodal radiological teaching files, and enhancing medical education with volumetric data, our overall database structure is simplified; archives grow by accruing information but seldom have to modify, delete, or overwrite stored records. NOVA is being constructed and populated so that it is transparent to the Internet; that is, much of its internal structure is mirrored in HTML, allowing Internet search engines to investigate, catalog, and link directly to the deep relational structure of the collection index. The key organizational concept for NOVA is the Image Content Group (ICG), an indexing strategy for cataloging incoming data as a set structure rather than by keyword management. These groups are managed through a series of XML files and authoring scripts. We cover the motivation for Image Content Groups, their overall construction, authorship, and management in XML, and the pilot results for creating public data repositories using this strategy.

  15. The ChArMEx database

    Science.gov (United States)

    Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2014-05-01

    observations or products that will be provided to the database.
    - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria.
    - A shopping-cart web interface to order in situ data files.
    - A web interface to select and access homogenized datasets.
    Interoperability between the two data centres is being set up using the OPeNDAP protocol. The data portal will soon propose user-friendly access to satellite products managed by the ICARE data centre (SEVIRI, TRMM, PARASOL...). In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 and 2013 campaigns, a day-to-day chart and report display website has also been developed: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.

  16. The HyMeX database

    Science.gov (United States)

    Brissebrat, Guillaume; Mastrorillo, Laurence; Ramage, Karim; Boichard, Jean-Luc; Cloché, Sophie; Fleury, Laurence; Klenov, Ludmila; Labatut, Laurent; Mière, Arnaud

    2013-04-01

    measured parameters, by instruments or by platform type.
    - Forms to document observations or products that will be provided to the database.
    - A shopping-cart web interface to order in situ data files.
    - Ftp facilities to access gridded data.
    The website will soon propose new facilities. Many in situ datasets have already been homogenized and inserted into a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. Interoperability between the two data centres will be enhanced by the OpenDAP communication protocol associated with the Thredds catalogue software, which may also be implemented in other data centres that manage data of interest for the HyMeX project. In order to meet the operational needs for the HyMeX 2012 campaigns, a day-to-day quick look and report display website has also been developed: http://sop.hymex.org. It offers a convenient way to browse meteorological conditions and data during the campaign periods.

  17. In-database processing of a large collection of remote sensing data: applications and implementation

    Science.gov (United States)

    Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina

    2016-04-01

    Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and inspired several information systems. Some of them, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to Earth Sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability and provide high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating the files into a relational database as foreign data sources and performing analytical processing inside the database engine. Thereby a higher-level query language can efficiently address problems of arbitrary size: from accessing the data associated with a specific pixel or a grid cell to complex aggregation over spatial or temporal extents over a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL - a widely used higher-level declarative query language - simplifies interoperability
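    A minimal sketch of how a client might use such an engine, assuming a PostgreSQL server where product files are exposed as a foreign table named scene_pixels (the table, column names and coordinates are hypothetical):

        import psycopg2

        conn = psycopg2.connect("dbname=rs_archive user=reader")
        with conn, conn.cursor() as cur:
            # The aggregation over many underlying files is expressed declaratively;
            # the database engine touches the data, not the client.
            cur.execute("""
                SELECT date_trunc('month', acquired) AS month, avg(value)
                FROM scene_pixels
                WHERE lat BETWEEN %s AND %s AND lon BETWEEN %s AND %s
                GROUP BY 1 ORDER BY 1
            """, (54.0, 56.0, 82.0, 84.0))
            for month, mean_value in cur.fetchall():
                print(month, mean_value)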

  18. Experience and Lessons learnt from running High Availability Databases on Network Attached Storage

    CERN Document Server

    Guijarro, Manuel

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been set up. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over.

  19. Hierarchical Clustering of Large Databases and Classification of Antibiotics at High Noise Levels

    Directory of Open Access Journals (Sweden)

    Alexander V. Yarkov

    2008-12-01

    A new algorithm for divisive hierarchical clustering of chemical compounds based on 2D structural fragments is suggested. The algorithm is deterministic: given any ordering of the input it will always produce the same clustering, and it can process a database of up to 2 million records on a standard PC. The algorithm was used for classification of 1,183 antibiotics mixed with 999,994 random chemical structures. The similarity threshold at which the best separation of active and non-active compounds took place was estimated as 0.6. 85.7% of the antibiotics were successfully classified at this threshold, with 0.4% misclassified compounds. A .sdf file was created with the probe molecules for clustering of external databases.
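    The paper's own implementation is not shown here, but the underlying test (2D-fragment fingerprint similarity against a 0.6 threshold) can be illustrated with the open-source RDKit toolkit; the two molecules are arbitrary examples:

        from rdkit import Chem, DataStructs

        probe = Chem.RDKFingerprint(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))
        candidate = Chem.RDKFingerprint(Chem.MolFromSmiles("OC(=O)c1ccccc1O"))

        similarity = DataStructs.FingerprintSimilarity(probe, candidate)  # Tanimoto by default
        print(similarity >= 0.6)  # membership test at the reported threshold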

  20. Experience and lessons learnt from running high availability databases on network attached storage

    International Nuclear Information System (INIS)

    Guijarro, M; Gaspar, R

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been set up. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over

  1. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...

  2. The Ensembl genome database project.

    Science.gov (United States)

    Hubbard, T; Barker, D; Birney, E; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Huminiecki, L; Kasprzyk, A; Lehvaslaiho, H; Lijnzaad, P; Melsopp, C; Mongin, E; Pettett, R; Pocock, M; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Clamp, M

    2002-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of the human genome sequence, with confirmed gene predictions that have been integrated with external data sources, and is available as either an interactive web site or as flat files. It is also an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements from sequence analysis to data storage and visualisation. The Ensembl site is one of the leading sources of human genome sequence annotation and provided much of the analysis for publication by the international human genome project of the draft genome. The Ensembl system is being installed around the world in both companies and academic sites on machines ranging from supercomputers to laptops.

  3. Text File Comparator

    Science.gov (United States)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.
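    Python's standard library offers the same core functionality; the sketch below prints a difference listing for two text files (in unified-diff form rather than IFCOMP's pseudo-update form):

        import difflib

        def compare(path_a, path_b):
            with open(path_a) as a, open(path_b) as b:
                for line in difflib.unified_diff(a.readlines(), b.readlines(),
                                                 fromfile=path_a, tofile=path_b):
                    print(line, end="")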

  4. Distribution of immunodeficiency fact files with XML – from Web to WAP

    Directory of Open Access Journals (Sweden)

    Riikonen Pentti

    2005-06-01

    Background: Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increased need for services capable of integrating and validating information as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery across different platforms.
    Methods: Here we present a new data model called the fact file and an XML-based specification, Inherited Disease Markup Language (IDML), that were developed to facilitate disease information integration, storage and exchange. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases.
    Results: IDML and fact files were used to build a comprehensive Web and WAP accessible knowledge base, ImmunoDeficiency Resource (IDR), available at http://bioinf.uta.fi/idr/. A fact file is a user-oriented interface, which serves as a starting point to explore information on hereditary diseases.
    Conclusion: The IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at http://bioinf.uta.fi/idml/.
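    A sketch of how a fact file could be consumed programmatically; the element and attribute names below are invented for illustration, as the real IDML schema is defined by the specification at the project site:

        import xml.etree.ElementTree as ET

        fact = ET.fromstring("""
        <factfile disease="X-linked agammaglobulinemia">
          <gene symbol="BTK"/>
          <inheritance>X-linked recessive</inheritance>
        </factfile>
        """)
        print(fact.get("disease"), fact.find("gene").get("symbol"))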

  5. National Library of Norway's new database of 22 manuscript maps concerning the Swedish King Charles XII's campaign in Norway in 1716 and 1718

    Directory of Open Access Journals (Sweden)

    Benedicte Gamborg Brisa

    2003-03-01

    The National Library of Norway is planning to digitise approximately 1,500 manuscript maps. Two years ago we started working on a pilot project, and for this purpose we chose 22 maps small enough to be photographed in one piece. We made slides 6 x 7 cm in size, converted the slides into PhotoCDs and used four different resolutions for the JPEG files. To avoid large file sizes, we had to divide the version with the highest resolution into four pieces. The preliminary work was done in Photoshop; the database on the web is made in Oracle. You can click on the map to zoom. The 22 maps were drawn by Norwegians, and probably Swedes, during the Great Northern War, when the Swedish King Charles XII unsuccessfully attempted to conquer Norway in 1716 and 1718. The database is now accessible on the National Library of Norway's web site. The database is in Norwegian, but we are working on an English version as well. The maps are searchable by different topics, countries, counties, geographical names, shelfmarks or a combination of these. We are planning to expand the database to other manuscript maps later. This is the reason why it is possible to search for such obvious subjects as Charles XII and the Great Northern War.

  6. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of a sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The standardization step provides the ability to perform operations, such as queries and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the loader layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series on a specified time range, or to follow signal acquisition in real time, subject to a per-user data access policy.
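    The synchronization step can be pictured with a few lines of pandas (not TSDSystem code; the series and the 5-second common scale are made up for illustration):

        import pandas as pd

        a = pd.Series([1.0, 2.0, 3.0],
                      index=pd.date_range("2013-01-01", periods=3, freq="10s"))
        b = pd.Series([10.0, 20.0],
                      index=pd.date_range("2013-01-01 00:00:05", periods=2, freq="15s"))

        # Align both measures on one shared time scale, as the query layer does.
        common = pd.concat({"a": a, "b": b}, axis=1).resample("5s").mean()
        print(common)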

  7. Status and evaluation methods of JENDL fusion file and JENDL PKA/KERMA file

    International Nuclear Information System (INIS)

    Chiba, S.; Fukahori, T.; Shibata, K.; Yu Baosheng; Kosako, K.

    1997-01-01

    The status of evaluated nuclear data in the JENDL fusion file and PKA/KERMA file is presented. The JENDL fusion file was prepared in order to improve the quality of the JENDL-3.1 data especially on the double-differential cross sections (DDXs) of secondary neutrons and gamma-ray production cross sections, and to provide DDXs of secondary charged particles (p, d, t, 3He and α-particle) for the calculation of PKA and KERMA factors. The JENDL fusion file contains evaluated data of 26 elements ranging from Li to Bi. The data in JENDL fusion file reproduce the measured data on neutron and charged-particle DDXs and also on gamma-ray production cross sections. Recoil spectra in PKA/KERMA file were calculated from secondary neutron and charged-particle DDXs contained in the fusion file with two-body reaction kinematics. The data in the JENDL fusion file and PKA/KERMA file were compiled in ENDF-6 format with an MF=6 option to store the DDX data. (orig.)
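    For the elastic case, the two-body kinematics used for such recoil spectra reduce to a closed formula; the sketch below illustrates that special case only (it is not code from the evaluation), giving the recoil energy of a nucleus with nucleus-to-neutron mass ratio A (approximately the mass number) struck by a neutron of energy E_n:

        import math

        def elastic_recoil_energy(e_n, a, theta_cm):
            """E_R = E_n * 2A/(1+A)^2 * (1 - cos(theta_CM)) for elastic scattering."""
            return e_n * 2.0 * a / (1.0 + a) ** 2 * (1.0 - math.cos(theta_cm))

        # A 14 MeV neutron backscattering off 56Fe gives the maximum recoil:
        print(elastic_recoil_energy(14.0, 56.0, math.pi))  # ~0.97 MeV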

  8. A comparison of three design tree based search algorithms for the detection of engineering parts constructed with CATIA V5 in large databases

    Directory of Open Access Journals (Sweden)

    Robin Roj

    2014-07-01

    This paper presents three different search engines for the detection of CAD parts in large databases. The analysis of the contained information is performed by exporting the data that is stored in the structure trees of the CAD models. A preparation program generates one XML file for every model which, in addition to the data of the structure tree, also contains certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses certain user input as search parameters, and therefore has the ability to perform personalized queries. The third one compares one given reference part with all parts in the database, and locates files that are identical, or similar, to the reference part. All approaches run automatically, and have the analysis of the structure tree in common. Files constructed with CATIA V5 and search engines written in Python were used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
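    In the spirit of the third engine, a comparison of exported structure trees can be sketched in Python (the tag layout of the XML exports is hypothetical here):

        import glob
        import xml.etree.ElementTree as ET

        def signature(path):
            # Flatten a structure-tree export into a comparable sequence of tags.
            return [elem.tag for elem in ET.parse(path).iter()]

        ref = signature("reference_part.xml")
        for candidate in glob.glob("database/*.xml"):
            if signature(candidate) == ref:
                print("identical structure tree:", candidate)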

  9. A cloud-based multimodality case file for mobile devices.

    Science.gov (United States)

    Balkman, Jason D; Loehfelm, Thomas W

    2014-01-01

    Recent improvements in Web and mobile technology, along with the widespread use of handheld devices in radiology education, provide unique opportunities for creating scalable, universally accessible, portable image-rich radiology case files. A cloud database and a Web-based application for radiologic images were developed to create a mobile case file with reasonable usability, download performance, and image quality for teaching purposes. A total of 75 radiology cases related to breast, thoracic, gastrointestinal, musculoskeletal, and neuroimaging subspecialties were included in the database. Breast imaging cases are the focus of this article, as they best demonstrate handheld display capabilities across a wide variety of modalities. This case subset also illustrates methods for adapting radiologic content to cloud platforms and mobile devices. Readers will gain practical knowledge about storage and retrieval of cloud-based imaging data, an awareness of techniques used to adapt scrollable and high-resolution imaging content for the Web, and an appreciation for optimizing images for handheld devices. The evaluation of this software demonstrates the feasibility of adapting images from most imaging modalities to mobile devices, even in cases of full-field digital mammograms, where high resolution is required to represent subtle pathologic features. The cloud platform allows cases to be added and modified in real time by using only a standard Web browser with no application-specific software. Challenges remain in developing efficient ways to generate, modify, and upload radiologic and supplementary teaching content to this cloud-based platform. Online supplemental material is available for this article. ©RSNA, 2014.

  10. Quebec Trophoblastic Disease Registry: how to make an easy-to-use dynamic database.

    Science.gov (United States)

    Sauthier, Philippe; Breguet, Magali; Rozenholc, Alexandre; Sauthier, Michaël

    2015-05-01

    To create an easy-to-use dynamic database designed specifically for the Quebec Trophoblastic Disease Registry (RMTQ). It is now well established that much of the success in managing trophoblastic diseases comes from the development of national and regional reference centers. Computerized databases allow the optimal use of data stored in these centers. We have created an electronic data registration system by producing a database using FileMaker Pro 12. It uses 11 external tables associated with a unique identification number for each patient. Each table allows specific data to be recorded, incorporating demographics, diagnosis, automated staging, laboratory values, pathological diagnosis, and imaging parameters. From January 1, 2009, to December 31, 2013, we used our database to register 311 patients with 380 diseases, and have seen a 39.2% increase in registrations each year between 2009 and 2012. This database allows the automatic generation of semilogarithmic curves, which plot β-hCG values as a function of time, complete with graphic markers for applied treatments (chemotherapy, radiotherapy, or surgery). It generates a summary sheet that provides a synthetic overview in real time. We have created, at a low cost, an easy-to-use database specific to trophoblastic diseases that dynamically integrates staging and monitoring. We propose a 10-step procedure for a successful trophoblastic database. It improves patient care, research, and education on trophoblastic diseases in Quebec and leads to an opportunity for collaboration on a national Canadian registry.
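    The semilogarithmic β-hCG curve is straightforward to reproduce outside FileMaker as well; a minimal sketch with matplotlib, using placeholder values rather than registry data:

        import matplotlib.pyplot as plt

        days = [0, 7, 14, 21, 28]
        beta_hcg = [120000, 15000, 2100, 300, 40]  # IU/L, illustrative only

        plt.semilogy(days, beta_hcg, marker="o")
        plt.axvline(14, linestyle="--", label="treatment marker (example)")
        plt.xlabel("days of follow-up")
        plt.ylabel("beta-hCG (IU/L, log scale)")
        plt.legend()
        plt.show()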

  11. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Managements

    Science.gov (United States)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model utilizing Structured Query Language (SQL) with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.

  12. Influence of File Motion on Shaping, Apical Debris Extrusion and Dentinal Defects: A Critical Review.

    Science.gov (United States)

    Pedrinha, Victor Feliz; Brandão, Juliana Melo da Silva; Pessoa, Oscar Faciola; Rodrigues, Patrícia de Almeida

    2018-01-01

    Advances in endodontics have enabled the evolution of file manufacturing processes, improving performance beyond that of conventional files. In the present study, systems manufactured using state-of-the-art methods and possessing special properties related to NiTi alloys (i.e., CM-Wire, M-Wire and R-Phase) were selected. The aim of this review was to provide a detailed analysis of the literature about the relationship between recently introduced NiTi files with different movement kinematics and shaping ability, apical extrusion of debris and dentin defects in root canal preparations. From March 2016 to January 2017, electronic searches were conducted in the PubMed and SCOPUS databases for articles published since January 2010. In vitro studies performed on extracted human teeth and published in English were considered for this review. Based on the inclusion criteria, 71 papers were selected for the analysis of full-text copies. Specific analysis was performed on 45 articles describing the effects of reciprocating, continuous and adaptive movements on the WaveOne Gold, Reciproc, HyFlex CM and Twisted File Adaptive systems. A wide range of testing conditions and methodologies have been used to compare the systems. Due to the controversies among the results, the characteristics of the files used, such as their design and alloys, appear too inconsistent to determine the best approach.

  13. Validation and application of a physics database for fast reactor fuel cycle analysis

    International Nuclear Information System (INIS)

    McKnight, R.D.; Stillman, J.A.; Toppel, B.J.; Khalil, H.S.

    1994-01-01

    An effort has been made to automate the execution of fast reactor fuel cycle analysis, using EBR-II as a demonstration vehicle, and to validate the analysis results for application to the IFR closed fuel cycle demonstration at EBR-II and its fuel cycle facility. This effort has included: (1) the application of the standard ANL depletion codes to perform core-follow analyses for an extensive series of EBR-II runs, (2) incorporation of the EBR-II data into a physics database, (3) development and verification of software to update, maintain and verify the database files, (4) development and validation of fuel cycle models and methodology, (5) development and verification of software which utilizes this physics database to automate the application of the ANL depletion codes, methods and models to perform the core-follow analysis, and (6) validation studies of the ANL depletion codes and of their application in support of anticipated near-term operations in EBR-II and the Fuel Cycle Facility. Results of the validation tests indicate the physics database and associated analysis codes and procedures are adequate to predict required quantities in support of early phases of FCF operations

  14. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information of this database. Database name: Yeast Interacting Proteins Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00742-000. Creator contact: [...]-ken 277-8561; Tel: +81-4-7136-3989; FAX: +81-4-7136-3979. Species: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information obtained [...]. Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13.

  15. Italian Present-day Stress Indicators: IPSI Database

    Science.gov (United States)

    Mariucci, M. T.; Montone, P.

    2017-12-01

    In Italy, since the 1990s, research concerning the contemporary stress field has been developing at Istituto Nazionale di Geofisica e Vulcanologia (INGV) with local and regional scale studies. Throughout the years many data have been analysed and collected: they are now organized and available for easy end use online. The IPSI (Italian Present-day Stress Indicators) database is the first geo-referenced repository of information on the crustal present-day stress field maintained at INGV, through a web application database and website developed by Gabriele Tarabusi. Data consist of horizontal stress orientations analysed and compiled in a standardized format and quality-ranked for reliability and comparability on a global scale with other databases. Our first database release includes 855 data records updated to December 2015. Here we present an updated version that will be released in 2018, after new earthquake data entry up to December 2017. The IPSI web site (http://ipsi.rm.ingv.it/) provides access to data on a standard map viewer and lets users easily choose which data (category and/or quality) to plot. The main information of each single element (type, quality, orientation) can be viewed simply by hovering over the related symbol; all the information appears by clicking the element. At the same time, simple basic information on the different data types, tectonic regime assignment and quality ranking method is available through pop-up windows. Data records can be downloaded in several common formats; moreover, it is possible to download a file directly usable with SHINE, a web-based application to interpolate stress orientations (http://shine.rm.ingv.it). IPSI is mainly conceived for those interested in studying the characters of the Italian peninsula and its surroundings, although the Italian data are part of the World Stress Map (http://www.world-stress-map.org/), as evidenced by many links that redirect to this database for more details on standard practices in this field.

  16. The Amma-Sat Database

    Science.gov (United States)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    a regular grid with a spatial resolution compatible with the spatial variability of the geophysical parameter. Data are stored in NetCDF files to facilitate their use. Satellite products can be selected using several spatial and temporal criteria and ordered through a web interface developed in PHP-MySQL. More common means of access are also available, such as direct FTP or NFS access for identified users. A Live Access Server allows quick visualization of the data. A meta-data catalogue based on the Directory Interchange Format manages the documentation of each satellite product. The database is currently under development, but some products are already available. The database will be complete by the end of 2005.
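    On the user side, an ordered NetCDF product can be read with the netCDF4 package; a small sketch, assuming a hypothetical file and variable name:

        from netCDF4 import Dataset

        with Dataset("amma_sat_product.nc") as nc:
            print(list(nc.variables))            # discover what the file carries
            grid = nc.variables["sst"][0, :, :]  # first time step of the grid
            print(grid.shape)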

  17. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Trypanosomes Database: update history of this database.
    2014/05/07 The contact information is corrected. The features and manner of utilization of the database are corrected.
    2014/02/04 Trypanosomes Database English archive site is opened.
    2011/04/04 Trypanosomes Database (http://www.tanpaku.org/tdb/) is opened.

  18. Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations

    International Nuclear Information System (INIS)

    Schreiner, Steffen; Banerjee, Subho Sankar; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Bagnasco, Stefano; Zhu Jianlin

    2011-01-01

    The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, is based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part of, and beyond, the development of AliEn version 2.19.

  19. Generic Natural Systems Evaluation - Thermodynamic Database Development and Data Management

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, T W; Sutton, M

    2011-09-19

    , meaning that they use a large body of thermodynamic data, generally from a supporting database file, to sort out the various important reactions from a wide spectrum of possibilities, given specified inputs. Usually codes of this kind are used to construct models of initial aqueous solutions that represent initial conditions for some process, although sometimes these calculations also represent a desired end point. Such a calculation might be used to determine the major chemical species of a dissolved component, the solubility of a mineral or mineral-like solid, or to quantify deviation from equilibrium in the form of saturation indices. Reactive transport codes such as TOUGHREACT and NUFT generally require the user to determine which chemical species and reactions are important, and to provide the requisite set of information including thermodynamic data in an input file. Usually this information is abstracted from the output of a geochemical modeling code and its supporting thermodynamic data file. The Yucca Mountain Project (YMP) developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only, the other (BSC, 2007b) for systems involving concentrated aqueous solutions and incorporating a model for such based on Pitzer's (1991) equations. A 25 C-only database with similarities to the latter was also developed for the Waste Isolation Pilot Plant (WIPP, cf. Xiong, 2005). The NAGRA/PSI database (Hummel et al., 2002) was developed to support repository studies in Europe. The YMP databases are often used in non-repository studies, including studies of geothermal systems (e.g., Wolery and Carroll, 2010) and CO2 sequestration (e.g., Aines et al., 2011).
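    The saturation index mentioned above has a compact definition: SI = log10(IAP/K), where IAP is the ion activity product and K the equilibrium constant, so SI = 0 at equilibrium. A minimal sketch with illustrative numbers (not values from the databases discussed):

        import math

        def saturation_index(ion_activity_product, equilibrium_constant):
            # SI > 0: oversaturated; SI < 0: undersaturated; SI = 0: equilibrium.
            return math.log10(ion_activity_product / equilibrium_constant)

        print(saturation_index(1.2e-9, 3.3e-9))  # ~ -0.44 -> undersaturated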

  20. Generic Natural Systems Evaluation - Thermodynamic Database Development and Data Management

    International Nuclear Information System (INIS)

    Wolery, T.W.; Sutton, M.

    2011-01-01

    they use a large body of thermodynamic data, generally from a supporting database file, to sort out the various important reactions from a wide spectrum of possibilities, given specified inputs. Usually codes of this kind are used to construct models of initial aqueous solutions that represent initial conditions for some process, although sometimes these calculations also represent a desired end point. Such a calculation might be used to determine the major chemical species of a dissolved component, the solubility of a mineral or mineral-like solid, or to quantify deviation from equilibrium in the form of saturation indices. Reactive transport codes such as TOUGHREACT and NUFT generally require the user to determine which chemical species and reactions are important, and to provide the requisite set of information including thermodynamic data in an input file. Usually this information is abstracted from the output of a geochemical modeling code and its supporting thermodynamic data file. The Yucca Mountain Project (YMP) developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only, the other (BSC, 2007b) for systems involving concentrated aqueous solutions and incorporating a model for such based on Pitzer's (1991) equations. A 25 C-only database with similarities to the latter was also developed for the Waste Isolation Pilot Plant (WIPP, cf. Xiong, 2005). The NAGRA/PSI database (Hummel et al., 2002) was developed to support repository studies in Europe. The YMP databases are often used in non-repository studies, including studies of geothermal systems (e.g., Wolery and Carroll, 2010) and CO2 sequestration (e.g., Aines et al., 2011).

  1. Development of a relational database to capture and merge clinical history with the quantitative results of radionuclide renography.

    Science.gov (United States)

    Folks, Russell D; Savir-Baruch, Bital; Garcia, Ernest V; Verdes, Liudmila; Taylor, Andrew T

    2012-12-01

    Our objective was to design and implement a clinical history database capable of linking to our database of quantitative results from 99mTc-mercaptoacetyltriglycine (MAG3) renal scans and exporting a data summary for physicians or our software decision support system. For database development, we used a commercial program. Additional software was developed in Interactive Data Language. MAG3 studies were processed using an in-house enhancement of a commercial program. The relational database has 3 parts: a list of all renal scans (the RENAL database), a set of patients with quantitative processing results (the Q2 database), and a subset of patients from Q2 containing clinical data manually transcribed from the hospital information system (the CLINICAL database). To test interobserver variability, a second physician transcriber reviewed 50 randomly selected patients in the hospital information system and tabulated 2 clinical data items: hydronephrosis and presence of a current stent. The CLINICAL database was developed in stages and contains 342 fields comprising demographic information, clinical history, and findings from up to 11 radiologic procedures. A scripted algorithm is used to reliably match records present in both Q2 and CLINICAL. An Interactive Data Language program then combines data from the 2 databases into an XML (extensible markup language) file for use by the decision support system. A text file is constructed and saved for review by physicians. RENAL contains 2,222 records, Q2 contains 456 records, and CLINICAL contains 152 records. The interobserver variability testing found a 95% match between the 2 observers for presence or absence of ureteral stent (κ = 0.52), a 75% match for hydronephrosis based on narrative summaries of hospitalizations and clinical visits (κ = 0.41), and a 92% match for hydronephrosis based on the imaging report (κ = 0.84). We have developed a relational database system to integrate the quantitative results of MAG3 image
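    The agreement figures quoted above are Cohen's kappa statistics; for reference, the same computation can be sketched with scikit-learn (the label vectors are placeholders, not study data):

        from sklearn.metrics import cohen_kappa_score

        observer_1 = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g. stent present / absent
        observer_2 = [1, 0, 0, 1, 0, 0, 1, 1]
        print(cohen_kappa_score(observer_1, observer_2))  # 0.5 for these toy labels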

  2. FishPathogens.eu/vhsv: a user-friendly viral haemorrhagic septicaemia virus isolate and sequence database

    DEFF Research Database (Denmark)

    Jonstrup, Søren Peter; Gray, Tanya; Kahns, Søren

    2009-01-01

    A database has been created, http://www.FishPathogens.eu, with the aim of providing a single repository for collating important information on significant pathogens of aquaculture, relevant to their control and management. This database will be developed, maintained and managed as part of the European Community Reference Laboratory for Fish Diseases function. This concept has been initially developed for viral haemorrhagic septicaemia virus and will be extended in future to include information on other significant aquaculture pathogens. Information included for each isolate comprises sequence [...] to obtain data from any selected part of the genome of interest. The output of the sequence search can be readily retrieved as a FASTA file ready to be imported into a sequence alignment tool of choice, facilitating further molecular epidemiological study.
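    Downstream of the database, a retrieved FASTA file can be read with Biopython before being passed to an alignment tool; the file name below is hypothetical:

        from Bio import SeqIO

        for record in SeqIO.parse("vhsv_isolates.fasta", "fasta"):
            print(record.id, len(record.seq))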

  3. Integrated olfactory receptor and microarray gene expression databases

    Directory of Open Access Journals (Sweden)

    Crasto Chiquito J

    2007-06-01

    Background: Gene expression patterns of olfactory receptors (ORs) are an important component of the signal encoding mechanism in the olfactory system, since they determine the interactions between odorant ligands and sensory neurons. We have developed the Olfactory Receptor Microarray Database (ORMD) to house OR gene expression data. ORMD is integrated with the Olfactory Receptor Database (ORDB), which is a key repository of OR gene information. Both databases aim to aid experimental research related to olfaction.
    Description: ORMD is a Web-accessible database that provides a secure data repository for OR microarray experiments. It contains both publicly available and private data; accessing the latter requires authenticated login. ORMD is designed to allow users not only to deposit gene expression data but also to manage their projects/experiments. For example, contributors can choose whether to make their datasets public. For each experiment, users can download the raw data files and view and export the gene expression data. For each OR gene being probed in a microarray experiment, a hyperlink to that gene in ORDB provides access to genomic and proteomic information related to the corresponding olfactory receptor. Individual ORs archived in ORDB are also linked to ORMD, allowing users access to the related microarray gene expression data.
    Conclusion: ORMD serves as a data repository and project management system. It facilitates the study of microarray experiments of gene expression in the olfactory system. In conjunction with ORDB, ORMD integrates gene expression data with the genomic and functional data of ORs, and is thus a useful resource for both olfactory researchers and the public.

  4. Integration of Narrative Processing, Data Fusion, and Database Updating Techniques in an Automated System.

    Science.gov (United States)

    1981-10-29

    ...are implemented, respectively, in the files "W-Update", "W-combine" and "RW-Copy", listed in the appendix. The appendix begins with a typescript of an experiment [...]; the copying process (steps 45 and 46) is shown as human actions in the typescript, but can be performed easily by a "master" [...] for Natural Language, M. Marcus, MIT Press, 1980. APPENDIX: Database Updating Experiment. Contents: Typescript of an experiment in Rosie

  5. HUD GIS Boundary Files

    Data.gov (United States)

    Department of Housing and Urban Development — The HUD GIS Boundary Files are intended to supplement boundary files available from the U.S. Census Bureau. The files are for community planners interested in...

  6. Bibliographical database of radiation biological dosimetry and risk assessment: Part 1, through June 1988

    Energy Technology Data Exchange (ETDEWEB)

    Straume, T.; Ricker, Y.; Thut, M.

    1988-08-29

    This database was constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and key-worded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on publication number, authors, key words, title, year, and journal name. Photocopies of all publications contained in the database are maintained in a file that is numerically arranged by citation number. This report of the database is provided as a useful reference and overview. It should be emphasized that the database will grow as new citations are added to it. With that in mind, we arranged this report in order of ascending citation number so that follow-up reports will simply extend this document. The database cites 1,212 publications. Publications are from 119 different scientific journals; 27 of these journals are cited at least 5 times. It also contains references to 42 books and published symposia, and 129 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed among the scientific literature, although a few journals clearly dominate. The four journals publishing the largest number of relevant papers are Health Physics, Mutation Research, Radiation Research, and International Journal of Radiation Biology. Publications in Health Physics make up almost 10% of the current database.

  7. Bibliographical database of radiation biological dosimetry and risk assessment: Part 1, through June 1988

    International Nuclear Information System (INIS)

    Straume, T.; Ricker, Y.; Thut, M.

    1988-01-01

    This database was constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and key-worded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on publication number, authors, key words, title, year, and journal name. Photocopies of all publications contained in the database are maintained in a file that is numerically arranged by citation number. This report of the database is provided as a useful reference and overview. It should be emphasized that the database will grow as new citations are added to it. With that in mind, we arranged this report in order of ascending citation number so that follow-up reports will simply extend this document. The database cites 1,212 publications. Publications are from 119 different scientific journals; 27 of these journals are cited at least 5 times. It also contains references to 42 books and published symposia, and 129 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed among the scientific literature, although a few journals clearly dominate. The four journals publishing the largest number of relevant papers are Health Physics, Mutation Research, Radiation Research, and International Journal of Radiation Biology. Publications in Health Physics make up almost 10% of the current database

  8. 33 CFR 148.246 - When is a document considered filed and where should I file it?

    Science.gov (United States)

    2010-07-01

    ... filed and where should I file it? 148.246 Section 148.246 Navigation and Navigable Waters COAST GUARD... Formal Hearings § 148.246 When is a document considered filed and where should I file it? (a) If a document to be filed is submitted by mail, it is considered filed on the date it is postmarked. If a...

  9. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Arabidopsis Phenome Database: update history of this database.
    2017/02/27 Arabidopsis Phenome Database English archive site is opened.
    - Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  10. An event database for rotational seismology

    Science.gov (United States)

    Salvermoser, Johannes; Hadziioannou, Celine; Hable, Sarah; Chow, Bryant; Krischer, Lion; Wassermann, Joachim; Igel, Heiner

    2016-04-01

    The ring laser sensor (G-ring) located at Wettzell, Germany, has routinely observed earthquake-induced rotational ground motions around a vertical axis since its installation in 2003. Here we present results from a recently installed event database, which is the first to provide ring laser event data in an open access format. Based on the GCMT event catalogue and some search criteria, seismograms from the ring laser and the collocated broadband seismometer are extracted and processed. The ObsPy-based processing scheme generates plots showing waveform fits between rotation rate and transverse acceleration and extracts characteristic wavefield parameters such as peak ground motions, noise levels, Love wave phase velocities and waveform coherence. For each event, these parameters are stored in a text file (a JSON dictionary) which is easily readable and accessible on the website. The database contains >10,000 events starting in 2007 (Mw>4.5). It is updated daily and therefore provides recent events at a time lag of at most 24 hours. The user interface allows filtering events by epoch, magnitude, and source area, whereupon the events are displayed on a zoomable world map. We investigate how well the rotational motions are compatible with the expectations from the surface wave magnitude scale. In addition, the website offers some Python source code examples for downloading and processing the openly accessible waveforms.
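    Consuming one event's parameter file is then a one-liner per field; a sketch assuming a flat JSON dictionary with hypothetical key names:

        import json

        with open("event_parameters.json") as f:
            params = json.load(f)

        print(params.get("peak_rotation_rate"),
              params.get("love_wave_phase_velocity"))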

  11. Protecting your files on the DFS file system

    CERN Multimedia

    Computer Security Team

    2011-01-01

    The Windows Distributed File System (DFS) hosts user directories for all NICE users and much more data.    Files can be accessed from anywhere, via a dedicated web portal (http://cern.ch/dfs). Due to the ease of access to DFS within CERN, it is of utmost importance to properly protect access to sensitive data. As the use of DFS access control mechanisms is not obvious to all users, passwords, certificates or sensitive files might get exposed. At least this happened in the past to the Andrew File System (AFS, the Linux equivalent to DFS) and led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed recently to apply more stringent protections to all DFS user folders. The goal of this data protection policy is to assist users in pro...

  12. Protecting your files on the AFS file system

    CERN Multimedia

    2011-01-01

    The Andrew File System is a world-wide distributed file system linking hundreds of universities and organizations, including CERN. Files can be accessed from anywhere, via dedicated AFS client programs or via web interfaces that export the file contents on the web. Due to the ease of access to AFS it is of utmost importance to properly protect access to sensitive data in AFS. As the use of AFS access control mechanisms is not obvious to all users, passwords, private SSH keys or certificates have been exposed in the past. In one specific instance, this also led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed in April 2010 to apply more stringent folder protections to all AFS user folders. The goal of this data protection policy is to assist users in...

  13. Zebra: A striped network file system

    Science.gov (United States)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity updates.
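    The parity scheme can be pictured with a toy XOR example (a sketch of the RAID-style idea, not Zebra code): the parity fragment is the XOR of the data fragments in a stripe, so any single lost fragment can be rebuilt from the others.

        def parity(fragments):
            # XOR equal-length fragments byte by byte.
            out = bytearray(len(fragments[0]))
            for frag in fragments:
                for i, byte in enumerate(frag):
                    out[i] ^= byte
            return bytes(out)

        stripe = [b"frag-one", b"frag-two", b"fragthre"]
        p = parity(stripe)
        # Rebuild fragment 1 after a "server failure":
        assert parity([stripe[0], stripe[2], p]) == stripe[1]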

  14. Shared Bioinformatics Databases within the Unipro UGENE Platform

    Directory of Open Access Journals (Sweden)

    Protsyuk Ivan V.

    2015-03-01

    Unipro UGENE is an open-source bioinformatics toolkit that integrates popular tools along with original instruments for molecular biologists within a unified user interface. Nowadays, most bioinformatics desktop applications, including UGENE, use a local data model when processing different types of data. Such an approach is inconvenient for scientists working cooperatively on the same data, since multiple copies of certain files must be made for every workplace and kept synchronized whenever modifications occur. Therefore, we focused on bringing collaborative work into the UGENE user experience. Currently, several UGENE installations can be connected to a designated shared database and users can interact with it simultaneously. Such databases can be created by UGENE users and used at their discretion. Objects of each data type supported by UGENE, such as sequences, annotations, multiple alignments, etc., can now be easily imported from or exported to a remote storage. One of the main advantages of this system, compared to existing ones, is the almost simultaneous access of client applications to shared data regardless of their volume. Moreover, the system is capable of storing millions of objects. The storage itself is a regular database server, so even an inexpert user is able to deploy it. Thus, UGENE may provide access to shared data for users located, for example, in the same laboratory or institution. UGENE is available at: http://ugene.net/download.html.

  15. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  16. THEREDA. Thermodynamic reference database. Summary of final report

    Energy Technology Data Exchange (ETDEWEB)

    Altmaier, Marcus; Bube, Christiane; Marquardt, Christian [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany). Institut fuer Nukleare Entsorgung; Brendler, Vinzenz; Richter, Anke [Helmholtz-Zentrum Dresden-Rossendorf (Germany). Inst. fuer Radiochemie; Moog, Helge C.; Scharge, Tina [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Voigt, Wolfgang [TU Bergakademie Freiberg (Germany). Inst. fuer Anorganische Chemie; Wilhelm, Stefan [AF-Colenco AG, Baden (Switzerland)

    2011-03-15

    A long-term safety assessment of a repository for radioactive waste requires evidence that all relevant processes which might have a significant positive or negative impact on its safety are known and understood. In 2002, a working group of five institutions was established to create a common thermodynamic database for nuclear waste disposal in deep geological formations. The common database was named THEREDA: Thermodynamic Reference Database. The following institutions are members of the working group: Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiochemistry; Karlsruhe Institute of Technology, Institute for Nuclear Waste Disposal; Technische Universitaet Bergakademie Freiberg, Institute of Inorganic Chemistry; AF-Colenco AG, Baden, Switzerland, Department of Groundwater Protection and Waste Disposal; Gesellschaft fuer Anlagen- und Reaktorsicherheit, Braunschweig. For the future, it is intended that its usage becomes mandatory for geochemical model calculations for nuclear waste disposal in Germany. Furthermore, it was agreed that the new database should be established in accordance with the following guidelines. Long-term usability: the disposal of radioactive waste is a task encompassing decades, and the database is projected to operate on a long-term basis; this has influenced the choice of software (which is open source), the documentation and the data structure. THEREDA is adapted to present-day necessities and computational codes but also leaves many degrees of freedom for varying demands in the future. Easy access: the database is accessible via the World Wide Web for free. Applicability: to promote the usage of the database in a wide community, THEREDA provides ready-to-use parameter files for the most common codes, at present PHREEQC, EQ3/6, Geochemist's Workbench, and CHEMAPP. Internal consistency: a distinction is made between dependent and independent data; to ensure the required internal consistency of THEREDA, the

  17. THEREDA. Thermodynamic reference database. Summary of final report

    International Nuclear Information System (INIS)

    Altmaier, Marcus; Bube, Christiane; Marquardt, Christian; Voigt, Wolfgang

    2011-03-01

    A long-term safety assessment of a repository for radioactive waste requires evidence that all relevant processes which might have a significant positive or negative impact on its safety are known and understood. In 2002, a working group of five institutions was established to create a common thermodynamic database for nuclear waste disposal in deep geological formations. The common database was named THEREDA: Thermodynamic Reference Database. The following institutions are members of the working group: Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiochemistry; Karlsruhe Institute of Technology, Institute for Nuclear Waste Disposal; Technische Universitaet Bergakademie Freiberg, Institute of Inorganic Chemistry; AF-Colenco AG, Baden, Switzerland, Department of Groundwater Protection and Waste Disposal; Gesellschaft fuer Anlagen- und Reaktorsicherheit, Braunschweig. For the future, it is intended that its usage becomes mandatory for geochemical model calculations for nuclear waste disposal in Germany. Furthermore, it was agreed that the new database should be established in accordance with the following guidelines. Long-term usability: the disposal of radioactive waste is a task encompassing decades, and the database is projected to operate on a long-term basis; this has influenced the choice of software (which is open source), the documentation and the data structure. THEREDA is adapted to present-day necessities and computational codes but also leaves many degrees of freedom for varying demands in the future. Easy access: the database is accessible via the World Wide Web for free. Applicability: to promote the usage of the database in a wide community, THEREDA provides ready-to-use parameter files for the most common codes, at present PHREEQC, EQ3/6, Geochemist's Workbench, and CHEMAPP. Internal consistency: a distinction is made between dependent and independent data; to ensure the required internal consistency of THEREDA, the

  18. KEGGtranslator: visualizing and converting the KEGG PATHWAY database to various formats.

    Science.gov (United States)

    Wrzodek, Clemens; Dräger, Andreas; Zell, Andreas

    2011-08-15

    The KEGG PATHWAY database provides a widely used service for metabolic and non-metabolic pathways. It contains manually drawn pathway maps with information about the genes, reactions and relations contained therein. To store these pathways, KEGG uses KGML, a proprietary XML format. Parsers and translators are needed to process the pathway maps for use in other applications and algorithms. We have developed KEGGtranslator, an easy-to-use stand-alone application that can visualize and convert KGML-formatted XML files into multiple output formats. Unlike other translators, KEGGtranslator supports a plethora of output formats, is able to augment the information in translated documents (e.g. MIRIAM annotations) beyond the scope of the KGML document, and adds missing components to fragmentary reactions within the pathway to allow simulations on them. KEGGtranslator is freely available as a Java™ Web Start application and for download at http://www.cogsys.cs.uni-tuebingen.de/software/KEGGtranslator/. KGML files can be downloaded from within the application. clemens.wrzodek@uni-tuebingen.de Supplementary data are available at Bioinformatics online.
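    For readers who want to process KGML themselves, a minimal Python sketch (not KEGGtranslator code) that reads the entries and relations of a locally downloaded KGML pathway file using only the standard library; the file name is hypothetical:

        import xml.etree.ElementTree as ET

        pathway = ET.parse("hsa00010.xml").getroot()   # hypothetical local KGML file

        # KGML stores nodes as <entry> elements and edges as <relation> elements.
        entries = {e.get("id"): e.get("name") for e in pathway.findall("entry")}

        for rel in pathway.findall("relation"):
            src = entries.get(rel.get("entry1"), "?")
            dst = entries.get(rel.get("entry2"), "?")
            print(f"{src} -[{rel.get('type')}]-> {dst}")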

  19. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database: Update History of This Database. 2017/03/13: the SKIP Stemcell Database English archive site is opened. 2013/03/29: the SKIP Stemcell Database ( https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en ) is opened.

  20. Current Comparative Table (CCT) automates customized searches of dynamic biological databases.

    Science.gov (United States)

    Landsteiner, Benjamin R; Olson, Michael R; Rutherford, Robert

    2005-07-01

    The Current Comparative Table (CCT) software program enables working biologists to automate customized bioinformatics searches, typically of remote sequence or HMM (hidden Markov model) databases. CCT currently supports BLAST, hmmpfam and other programs useful for gene and ortholog identification. The software is web based, has a BioPerl core and can be used remotely via a browser or locally on Mac OS X or Linux machines. CCT is particularly useful to scientists who study large sets of molecules in today's evolving information landscape because it color-codes all result files by age and highlights even tiny changes in sequence or annotation. By empowering non-bioinformaticians to automate custom searches and examine current results in context at a glance, CCT allows a remote database submission in the evening to influence the next morning's bench experiment. A demonstration of CCT is available at http://orb.public.stolaf.edu/CCTdemo and the open source software is freely available from http://sourceforge.net/projects/orb-cct.
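    CCT itself is web based with a BioPerl core; its colour-by-age idea can be pictured with a small Python sketch that buckets result files by modification age so fresh changes stand out (the directory name and age thresholds below are assumptions, not CCT's):

        import time
        from pathlib import Path

        AGE_BUCKETS = [(1, "red: new since yesterday"),
                       (7, "orange: this week"),
                       (30, "yellow: this month")]

        now = time.time()
        for path in sorted(Path("blast_results").glob("*.out")):   # assumed layout
            age_days = (now - path.stat().st_mtime) / 86400
            label = next((name for days, name in AGE_BUCKETS if age_days <= days),
                         "grey: older")
            print(f"{path.name:30s} {label}")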

  1. Development of an Internet-based data explorer for a samples databases: the example of the STRATFEED project

    Directory of Open Access Journals (Sweden)

    Dardenne P.

    2004-01-01

    Full Text Available A key aspect of the European STRATFEED project on developing and validating analytical methods to detect animal meal in feed was the creation of a samples bank. To manage the 2,500 samples that were stored in the samples bank, another important objective was to build a database and develop an Internet-based data explorer – the STRATFEED explorer – to enable all laboratories and manufacturers working in the feed sector to make use of the database. The concept developed for the STRATFEED project could be used for samples management in other projects and it is easily adapted to meet a variety of requirements. The STRATFEED explorer can now be run from the public website http://stratfeed.cra.wallonie.be. Each webpage of this application is described in a documentation file aimed at helping the user to explore the database.

  2. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids.

  3. A perspective for biomedical data integration: Design of databases for flow cytometry

    Directory of Open Access Journals (Sweden)

    Lakoumentas John

    2008-02-01

    Full Text Available Abstract. Background: The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that helps end users either doing research on FC or studying specific cases of patients who have undergone FC analysis. Results: The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than the conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by maintaining an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion: It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.
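    The flavour of such a translation can be pictured with a small illustrative sqlite3 schema; the authors' actual schema is not reproduced in the abstract, so the table and column names below are assumptions:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE sample (
                sample_id   INTEGER PRIMARY KEY,
                patient_id  TEXT,
                acquired_at TEXT,
                cytometer   TEXT
            );
            CREATE TABLE parameter (           -- one row per measured channel
                parameter_id INTEGER PRIMARY KEY,
                sample_id    INTEGER REFERENCES sample(sample_id),
                short_name   TEXT,             -- e.g. 'FL1-H'
                stain        TEXT
            );
            CREATE TABLE event_value (         -- one row per event x channel
                sample_id    INTEGER,
                event_no     INTEGER,
                parameter_id INTEGER,
                value        REAL
            );
            CREATE INDEX idx_ev ON event_value (sample_id, parameter_id);
        """)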

  4. JENDL Dosimetry File

    International Nuclear Information System (INIS)

    Nakazawa, Masaharu; Iguchi, Tetsuo; Kobayashi, Katsuhei; Iwasaki, Shin; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo.

    1992-03-01

    The JENDL Dosimetry File based on JENDL-3 was compiled and integral tests of the cross section data were performed by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. Data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d,n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, but there are some problems to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in a graphical form. (author) 76 refs

  5. JENDL Dosimetry File

    Energy Technology Data Exchange (ETDEWEB)

    Nakazawa, Masaharu; Iguchi, Tetsuo [Tokyo Univ. (Japan). Faculty of Engineering; Kobayashi, Katsuhei [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Iwasaki, Shin [Tohoku Univ., Sendai (Japan). Faculty of Engineering; Sakurai, Kiyoshi; Ikeda, Yujiro; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1992-03-15

    The JENDL Dosimetry File based on JENDL-3 was compiled and integral tests of the cross section data were performed by the Dosimetry Integral Test Working Group of the Japanese Nuclear Data Committee. Data stored in the JENDL Dosimetry File are the cross sections and their covariance data for 61 reactions. The cross sections were mainly taken from JENDL-3 and the covariances from IRDF-85. For some reactions, data were adopted from other evaluated data files. The data are given in the neutron energy region below 20 MeV in both point-wise and group-wise files in the ENDF-5 format. In order to confirm the reliability of the data, several integral tests were carried out: comparison with the data in IRDF-85 and with average cross sections measured in fission neutron fields, fast reactor spectra, DT neutron fields and Li(d,n) neutron fields. As a result, it has been found that the JENDL Dosimetry File gives better results than IRDF-85, but there are some problems to be improved in the future. The contents of the JENDL Dosimetry File and the results of the integral tests are described in this report. All of the dosimetry cross sections are shown in a graphical form.

  6. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of an access application developed for the purpose.

  7. APLIKASI ENKRIPSI DAN DEKRIPSI FILE DENGAN MENGGUNAKAN AES (ADVANCED ENCRYPTION STANDARD) ALGORITMA RIJNDAEL PADA SISTEM OPERASI ANDROID / File encryption and decryption application using the AES (Advanced Encryption Standard) Rijndael algorithm on the Android operating system

    Directory of Open Access Journals (Sweden)

    Langit Da Silva

    2015-04-01

    Full Text Available The Rijndael algorithm won the contest organized by NIST to replace the DES algorithm, whose weaknesses were well known, and was subsequently adopted as the AES (Advanced Encryption Standard). The algorithm has been widely used for the encryption of text, files, and databases. Android is an open-source operating system developed by Google; it has become the operating system most widely used on smartphones, which many people now own because of their reliability. In this final project, software was developed to secure files on devices running the Android operating system, using the AES (Advanced Encryption Standard) Rijndael algorithm. The method used in the design and construction of the software is GRAPPLE (Guidelines for Rapid APPlication Engineering), and the programming language used is Java. The application can produce an encrypted file that cannot be opened; to open the file, the application performs the decryption process. The analysis examines the behaviour of the algorithm when it is used for encryption and decryption.
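    The paper's implementation is Java on Android; the same encryption step can be pictured with a short Python sketch using the 'cryptography' package. Key handling is simplified here and the file names are placeholders; a real application would derive the key properly:

        import os
        from cryptography.hazmat.primitives import padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def encrypt_file(src: str, dst: str, key: bytes) -> None:
            iv = os.urandom(16)                      # fresh IV per file
            padder = padding.PKCS7(128).padder()     # pad to the AES block size
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            with open(src, "rb") as f_in, open(dst, "wb") as f_out:
                f_out.write(iv)                      # store the IV with the ciphertext
                data = padder.update(f_in.read()) + padder.finalize()
                f_out.write(enc.update(data) + enc.finalize())

        key = os.urandom(32)                         # demo AES-256 key, no key management
        # encrypt_file("notes.txt", "notes.txt.enc", key)   # hypothetical files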

  8. Program za vođenje personalne evidencije / Application program for personnel files

    Directory of Open Access Journals (Sweden)

    Miroslav Stojanović

    2002-03-01

    Full Text Available The paper presents an application program for keeping the personnel files of a unit. The application works in a multi-user, networked environment: it uses a Microsoft SQL Server database when the network is available, falls back to a local MS Access database when the network is not functional, and automatically synchronizes data from the local database into the main database. The first (alpha) version of the application has been developed and is currently in trial use.

  9. Development of a utility system for nuclear reaction data file: WinNRDF

    International Nuclear Information System (INIS)

    Aoyama, Shigeyoshi; Ohbayasi, Yosihide; Masui, Hiroshi; Chiba, Masaki; Kato, Kiyoshi; Ohnishi, Akira

    2000-01-01

    A utility system, WinNRDF, has been developed for the charged-particle nuclear reaction data of NRDF (Nuclear Reaction Data File) with a Windows interface. With this system, we can search the experimental data of a charged-particle nuclear reaction in NRDF more easily than with the old retrieval systems on the mainframe, and can also view the experimental data graphically through a GUI (Graphical User Interface). We adopted a mechanism for building a new index of keywords to make practical use of the time-dependent properties of the NRDF database. (author)

  10. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Open TG-GATEs Pathological Image Database: Database Description. Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... Contact: National Institute of Biomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan, TEL: 81-72-641-9826. Database classification: Toxicogenomics Database. Organism: Rattus norvegicus.

  11. An event-oriented database for continuous data flows in the TJ-II environment

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E. [Asociacion Euratom/CIEMAT para Fusion Madrid, 28040 Madrid (Spain)], E-mail: edi.sanchez@ciemat.es; Pena, A. de la; Portas, A.; Pereira, A.; Vega, J. [Asociacion Euratom/CIEMAT para Fusion Madrid, 28040 Madrid (Spain); Neto, A.; Fernandes, H. [Associacao Euratom/IST, Centro de Fusao Nuclear, Avenue Rovisco Pais P-1049-001 Lisboa (Portugal)

    2008-04-15

    A new database for storing data related to the TJ-II experiment has been designed and implemented. It allows the storage of raw data not acquired during plasma shots, i.e. data collected continuously or between plasma discharges while testing subsystems (e.g. during neutral beam test pulses). This new database complements the existing ones by permitting the storage of raw data that are not classified by shot number. Rather, these data are indexed according to a more general entity called an event. An event is defined as any occurrence relevant to the TJ-II environment. Such occurrences are registered, thus allowing relationships to be established between data acquisition, TJ-II control-system and diagnostic control-system actions. In the new database, raw data are stored in files on the TJ-II UNIX central server disks, while metadata are stored in Oracle tables, thereby permitting fast data searches according to different criteria. In addition, libraries for registering data/events in the database from different subsystems within the laboratory local area network have been developed. Finally, a Shared Data Access System has been implemented for external access to data. It permits both the new event-indexed data and the old data (indexed by shot number) to be read from a common event perspective.
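    The event-indexed layout (metadata rows in relational tables pointing at raw-data files kept on disk) can be pictured with a small sqlite3 sketch; this is an illustration, not the paper's actual Oracle schema:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE event (
                event_id   INTEGER PRIMARY KEY,
                kind       TEXT,    -- e.g. 'NBI test pulse', 'plasma shot'
                started_at TEXT
            );
            CREATE TABLE raw_signal (
                signal_id INTEGER PRIMARY KEY,
                event_id  INTEGER REFERENCES event(event_id),
                channel   TEXT,
                file_path TEXT      -- raw data stay in files on the server disks
            );
        """)
        con.execute("INSERT INTO event (kind, started_at) VALUES (?, ?)",
                    ("NBI test pulse", "2008-04-15T10:32:00"))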

  12. A File Archival System

    Science.gov (United States)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, a file archival system for the DEC VAX, provides for easy offline storage and retrieval of arbitrary files on a DEC VAX system. The system is designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of the same programs and associated files.

  13. TEDS-M 2008 User Guide for the International Database. Supplement 3: Variables Derived from the Educator and Future Teacher Data

    Science.gov (United States)

    Brese, Falk, Ed.

    2012-01-01

    This supplement contains documentation on all the derived variables contained in the TEDS-M educator and future teacher data files. These derived variables were used to report data in the TEDS-M international reports. The variables that constitute the scales and indices are made available as part of the TEDS-M International Database to be used in…

  14. Comparative evaluation of debris extruded apically by using, Protaper retreatment file, K3 file and H-file with solvent in endodontic retreatment

    Directory of Open Access Journals (Sweden)

    Chetna Arora

    2012-01-01

    Full Text Available Aim: The aim of this study was to evaluate the apical extrusion of debris, comparing two engine-driven systems and a hand instrumentation technique during root canal retreatment. Materials and Methods: Forty-five human permanent mandibular premolars were prepared using the step-back technique and obturated with gutta-percha/zinc oxide eugenol sealer and the cold lateral condensation technique. The teeth were divided into three groups: Group A: ProTaper retreatment file, Group B: K3 file, Group C: H-file with tetrachloroethylene. All canals were irrigated with 20 ml of distilled water during instrumentation. Debris extruded along with the irrigating solution during the retreatment procedure was carefully collected in pre-weighed Eppendorf tubes. The tubes were stored in an incubator for 5 days, placed in a desiccator and then re-weighed. The weight of dry debris was calculated by subtracting the weight of the tube before instrumentation from its weight after instrumentation. Data were analyzed using two-way ANOVA and a post hoc test. Results: There was a statistically significant difference in the apical extrusion of debris between hand instrumentation and both the ProTaper retreatment file and the K3 file. The difference between the amounts of debris extruded by the ProTaper retreatment file and the K3 file was not statistically significant. All three instrumentation techniques produced apically extruded debris and irrigant. Conclusion: The best way to minimize the extrusion of debris is to adopt a crown-down technique; therefore, the use of a rotary technique (ProTaper retreatment file, K3 file) is recommended.

  15. Computer Forensics Method in Analysis of Files Timestamps in Microsoft Windows Operating System and NTFS File System

    Directory of Open Access Journals (Sweden)

    Vesta Sergeevna Matveeva

    2013-02-01

    Full Text Available All existing file browsers display three timestamps for every file in the NTFS file system. Nowadays there are many utilities that can manipulate temporal attributes to conceal the traces of file use. However, every file in NTFS has eight timestamps, which are stored in the file record and can be used to detect attribute substitution. The authors suggest a method for revealing the original timestamps after replacement, and an automated variant of it for processing sets of files.
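    The three timestamps that ordinary tools display come from the NTFS $STANDARD_INFORMATION attribute; the additional copies useful for detecting substitution live in the $FILE_NAME attribute and require an MFT parser to read. As a loosely related illustration (an assumption-laden heuristic, not the authors' method): many timestamp-manipulation tools write whole-second values, while NTFS itself records 100 ns resolution, so an exactly zero sub-second part can be flagged for closer inspection:

        import os

        def suspicious_subseconds(path: str) -> bool:
            # True if the modification time has an exactly zero sub-second
            # part, a weak indicator of timestamp manipulation on NTFS.
            return os.stat(path).st_mtime_ns % 1_000_000_000 == 0

        # print(suspicious_subseconds("example.txt"))  # hypothetical file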

  16. Volcanogenic Massive Sulfide Deposits of the World - Database and Grade and Tonnage Models

    Science.gov (United States)

    Mosier, Dan L.; Berger, Vladimir I.; Singer, Donald A.

    2009-01-01

    information in a useful form to policy makers. The foundation of mineral-deposit models is information about known deposits. The purpose of this publication is to present the latest geologic information and newly developed grade and tonnage models for VMS deposits in digital form. This publication contains computer files with information on VMS deposits from around the world. It also presents new grade and tonnage models for three subtypes of VMS deposits and a text file allowing locations of all deposits to be plotted in geographic information system (GIS) programs. The data are presented in FileMaker Pro and text files to make the information available to a wider audience. The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules used in this compilation. Next, we provide new grade and tonnage models and analysis of the information in the file. Finally, the fields of the data file are explained. Appendix A gives the summary statistics for the new grade-tonnage models and Appendix B displays the country codes used in the database.

  17. Dereplication of plant phenolics using a mass-spectrometry database independent method.

    Science.gov (United States)

    Borges, Ricardo M; Taujale, Rahil; de Souza, Juliana Santana; de Andrade Bezerra, Thaís; Silva, Eder Lana E; Herzog, Ronny; Ponce, Francesca V; Wolfender, Jean-Luc; Edison, Arthur S

    2018-05-29

    Dereplication, an approach to sidestep the efforts involved in the isolation of known compounds, is generally accepted as being the first stage of novel discoveries in natural product research. It is based on metabolite profiling analysis of complex natural extracts. To present the application of LipidXplorer for automatic targeted dereplication of phenolics in plant crude extracts based on direct infusion high-resolution tandem mass spectrometry data. LipidXplorer uses a user-defined molecular fragmentation query language (MFQL) to search for specific characteristic fragmentation patterns in large data sets and highlight the corresponding metabolites. To this end, MFQL files were written to dereplicate common phenolics occurring in plant extracts. Complementary MFQL files were used for validation purposes. New MFQL files with molecular formula restrictions for common classes of phenolic natural products were generated for the metabolite profiling of different representative crude plant extracts. This method was evaluated against an open-source software for mass-spectrometry data processing (MZMine®) and against manual annotation based on published data. The targeted LipidXplorer method implemented using common phenolic fragmentation patterns, was found to be able to annotate more phenolics than MZMine® that is based on automated queries on the available databases. Additionally, screening for ascarosides, natural products with unrelated structures to plant phenolics collected from the nematode Caenorhabditis elegans, demonstrated the specificity of this method by cross-testing both groups of chemicals in both plants and nematodes. Copyright © 2018 John Wiley & Sons, Ltd.
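    The core idea, searching spectra for characteristic fragmentation patterns, can be pictured with a toy Python sketch; the masses and tolerance below are illustrative, and this is neither LipidXplorer nor MFQL code:

        NEUTRAL_LOSSES = {"hexose": 162.0528, "CO2": 43.9898}   # illustrative losses
        TOL = 0.005                                             # m/z tolerance (assumed)

        def matches(precursor_mz: float, fragment_mzs: list[float]) -> list[str]:
            hits = []
            for name, loss in NEUTRAL_LOSSES.items():
                target = precursor_mz - loss                    # expected fragment m/z
                if any(abs(mz - target) <= TOL for mz in fragment_mzs):
                    hits.append(name)
            return hits

        # quercetin-3-O-glucoside-like [M-H]- spectrum: 463.088 -> 301.035
        print(matches(463.0882, [301.0354, 300.0277]))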

  18. 76 FR 43679 - Filing via the Internet; Notice of Additional File Formats for efiling

    Science.gov (United States)

    2011-07-21

    ... list of acceptable file formats the four-character file extensions for Microsoft Office 2007/2010... files from Office 2007 or Office 2010 in an Office 2003 format prior to submission. Dated: July 15, 2011...

  19. UPIN Group File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Group Unique Physician Identifier Number (UPIN) File is the business entity file that contains the group practice UPIN and descriptive information. It does NOT...

  20. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    Science.gov (United States)

    Chao, Tian-Jy; Kim, Younghun

    2015-01-06

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
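    The extract-map-transform pipeline the abstract describes can be pictured structurally in Python; all names below are mine for illustration, not the patent's:

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class InteropObject:            # stand-in for an "interoperability data object"
            ifc_type: str
            attributes: dict

        # The Model View Definition is modelled as a mapping from IFC entity
        # type to a transformation function producing target-format records.
        def translate(objects: list[InteropObject],
                      mvd_map: dict[str, Callable[[InteropObject], dict]]) -> list[dict]:
            out = []
            for obj in objects:
                transform = mvd_map.get(obj.ifc_type)
                if transform:           # skip entities the MVD does not cover
                    out.append(transform(obj))
            return out

        mvd_map = {"IfcWall": lambda o: {"type": "wall", "area_m2": o.attributes["area"]}}
        print(translate([InteropObject("IfcWall", {"area": 12.5})], mvd_map))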

  1. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: Update History of This Database. 2010/03/29: the Yeast Interacting Proteins Database English archive site is opened. 2000/12/4: the Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released.

  2. 12 CFR 5.4 - Filing required.

    Science.gov (United States)

    2010-01-01

    ... CORPORATE ACTIVITIES Rules of General Applicability § 5.4 Filing required. (a) Filing. A depository institution shall file an application or notice with the OCC to engage in corporate activities and... advise an applicant through a pre-filing communication to send the filing or submission directly to the...

  3. Landslide databases for applied landslide impact research: the example of the landslide database for the Federal Republic of Germany

    Science.gov (United States)

    Damm, Bodo; Klose, Martin

    2014-05-01

    This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database, and outlines its major data sources and the strategy of information retrieval. Furthermore, the contribution exemplifies the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Thanks to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon also a focused web crawler, this enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This makes it possible to gather detailed landslide information from, among other sources, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system.
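    The first-tier collection step can be pictured with a stdlib-only Python sketch that scans an RSS feed for landslide-related items; the feed URL and keyword list are assumptions, since the project's actual tooling is not specified beyond "web and RSS feeds":

        import urllib.request
        import xml.etree.ElementTree as ET

        FEED_URL = "https://example.org/news/rss"          # hypothetical feed
        KEYWORDS = ("landslide", "rockfall", "hangrutschung")

        with urllib.request.urlopen(FEED_URL) as resp:     # fetch the feed
            channel = ET.parse(resp).getroot().find("channel")

        for item in channel.findall("item"):               # keep matching headlines
            title = item.findtext("title", default="")
            if any(k in title.lower() for k in KEYWORDS):
                print(title, item.findtext("link"))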

  4. Intelligent Information Retrieval for a Multimedia Database Using Captions

    Science.gov (United States)

    1992-07-23

    (The abstract in the source record is extraction residue: a listing of aircraft photograph "collecting files" such as F-14, F-4B, F-15, AV-8, and C-130, followed by a fragment noting that Jacobs and Zernik (1988) proposed a gradual method for defining new lexicon words by analyzing a sequence of example text.)

  5. Huygens file service and storage architecture

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.; Stabell-Kulo, Tage; Stabell-Kulo, Tage

    1993-01-01

    The Huygens file server is a high-performance file server which is able to deliver multi-media data in a timely manner while also providing clients with ordinary “Unix”-like file I/O. The file server integrates client machines, file servers and tertiary storage servers in the same storage architecture.

  7. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMOS: Database Description. Database name: RMOS. Contact: Shoshi Kikuchi. Database classification: Plant databases; Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  8. Development of a biomarkers database for the National Children's Study

    Energy Technology Data Exchange (ETDEWEB)

    Lobdell, Danelle T [US Environmental Protection Agency, Office of Research and Development, National Health and Environmental Effects Research Laboratory, Human Studies Division, Epidemiology and Biomarkers Branch, MD 58A, Research Triangle Park, NC 27711 (United States); Mendola, Pauline [US Environmental Protection Agency, Office of Research and Development, National Health and Environmental Effects Research Laboratory, Human Studies Division, Epidemiology and Biomarkers Branch, MD 58A, Research Triangle Park, NC 27711 (United States)

    2005-08-07

    The National Children's Study (NCS) is a federally-sponsored, longitudinal study of environmental influences on the health and development of children across the United States (www.nationalchildrensstudy.gov). Current plans are to study approximately 100,000 children and their families beginning before birth up to age 21 years. To explore potential biomarkers that could be important measurements in the NCS, we compiled the relevant scientific literature to identify both routine or standardized biological markers as well as new and emerging biological markers. Although the search criteria encouraged examination of factors that influence the breadth of child health and development, attention was primarily focused on exposure, susceptibility, and outcome biomarkers associated with four important child health outcomes: autism and neurobehavioral disorders, injury, cancer, and asthma. The Biomarkers Database was designed to allow users to: (1) search the biomarker records compiled by type of marker (susceptibility, exposure or effect), sampling media (e.g., blood, urine, etc.), and specific marker name; (2) search the citations file; and (3) read the abstract evaluations relative to our search criteria. A searchable, user-friendly database of over 2000 articles was created and is publicly available at: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=85844. PubMed was the primary source of references with some additional searches of Toxline, NTIS, and other reference databases. Our initial focus was on review articles, beginning as early as 1996, supplemented with searches of the recent primary research literature from 2001 to 2003. We anticipate this database will have applicability for the NCS as well as other studies of children's environmental health.

  9. Development of a biomarkers database for the National Children's Study

    International Nuclear Information System (INIS)

    Lobdell, Danelle T.; Mendola, Pauline

    2005-01-01

    The National Children's Study (NCS) is a federally-sponsored, longitudinal study of environmental influences on the health and development of children across the United States (www.nationalchildrensstudy.gov). Current plans are to study approximately 100,000 children and their families beginning before birth up to age 21 years. To explore potential biomarkers that could be important measurements in the NCS, we compiled the relevant scientific literature to identify both routine or standardized biological markers as well as new and emerging biological markers. Although the search criteria encouraged examination of factors that influence the breadth of child health and development, attention was primarily focused on exposure, susceptibility, and outcome biomarkers associated with four important child health outcomes: autism and neurobehavioral disorders, injury, cancer, and asthma. The Biomarkers Database was designed to allow users to: (1) search the biomarker records compiled by type of marker (susceptibility, exposure or effect), sampling media (e.g., blood, urine, etc.), and specific marker name; (2) search the citations file; and (3) read the abstract evaluations relative to our search criteria. A searchable, user-friendly database of over 2000 articles was created and is publicly available at: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=85844. PubMed was the primary source of references with some additional searches of Toxline, NTIS, and other reference databases. Our initial focus was on review articles, beginning as early as 1996, supplemented with searches of the recent primary research literature from 2001 to 2003. We anticipate this database will have applicability for the NCS as well as other studies of children's environmental health

  10. Personal Databases: Of Filing Cabinets and Idiosyncrasy [and] Library Automation: An Overview of the Market.

    Science.gov (United States)

    Molholt, Pat; McDonald, David R.

    1989-01-01

    The first of two articles describes how a team effort by computing centers and academic libraries could aid faculty in the organization of their personal databases. The second provides an overview of the academic library automation market, identifying vendors active in the market and trends of recent years. (CLB)

  11. Development of the Database for Environmental Sound Research and Application (DESRA: Design, Functionality, and Retrieval Considerations

    Directory of Open Access Journals (Sweden)

    Brian Gygi

    2010-01-01

    Full Text Available Theoretical and applied environmental sounds research is gaining prominence but progress has been hampered by the lack of a comprehensive, high quality, accessible database of environmental sounds. An ongoing project to develop such a resource is described, which is based upon experimental evidence as to the way we listen to sounds in the world. The database will include a large number of sounds produced by different sound sources, with a thorough background for each sound file, including experimentally obtained perceptual data. In this way DESRA can contain a wide variety of acoustic, contextual, semantic, and behavioral information related to an individual sound. It will be accessible on the Internet and will be useful to researchers, engineers, sound designers, and musicians.

  12. Apically extruded dentin debris by reciprocating single-file and multi-file rotary system.

    Science.gov (United States)

    De-Deus, Gustavo; Neves, Aline; Silva, Emmanuel João; Mendonça, Thais Accorsi; Lourenço, Caroline; Calixto, Camila; Lima, Edson Jorge Moreira

    2015-03-01

    This study aims to evaluate the apical extrusion of debris by two reciprocating single-file systems: WaveOne and Reciproc. A conventional multi-file rotary system was used as a reference for comparison. The hypotheses tested were (i) that the reciprocating single-file systems extrude more debris than the conventional multi-file rotary system and (ii) that the reciprocating single-file systems extrude similar amounts of dentin debris. After solid selection criteria were applied, 80 mesial roots of lower molars were included in the present study. The use of four different instrumentation techniques resulted in four groups (n = 20): G1 (hand-file technique), G2 (ProTaper), G3 (WaveOne), and G4 (Reciproc). The apparatus used to collect the apically extruded debris was a typical double-chamber collector. Statistical analysis was performed for multiple comparisons. No significant difference was found in the amount of debris extruded between the two reciprocating systems. In contrast, the conventional multi-file rotary system group extruded significantly more debris than both reciprocating groups. The hand instrumentation group extruded significantly more debris than all other groups. The present results yield favorable input for both reciprocating single-file systems, inasmuch as they showed improved control of apically extruded debris. Apical extrusion of debris has been studied extensively because of its clinical relevance, particularly since it may cause flare-ups, originated by the introduction of bacteria, pulpal tissue, and irrigating solutions into the periapical tissues.

  13. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding, we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and with reasonable fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.
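    A flavour of what such a simulator computes can be given in toy form (far simpler than the authors' simulator; all parameters are invented): requests arrive in virtual time, every striped write touches all servers, and the write completes when the slowest server finishes.

        import heapq
        import random

        random.seed(1)
        N_SERVERS, N_WRITES = 8, 100
        server_free = [0.0] * N_SERVERS                      # when each server next idles
        arrivals = [(i * 0.01, i) for i in range(N_WRITES)]  # (arrival time, write id)
        heapq.heapify(arrivals)

        latencies = []
        while arrivals:
            t, wid = heapq.heappop(arrivals)
            # A striped write touches every server and completes with the slowest.
            done = 0.0
            for s in range(N_SERVERS):
                start = max(t, server_free[s])
                server_free[s] = start + random.expovariate(50.0)  # ~20 ms service
                done = max(done, server_free[s])
            latencies.append(done - t)

        print(f"mean striped-write latency: {sum(latencies) / len(latencies):.4f} s")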

  14. 76 FR 52323 - Combined Notice of Filings; Filings Instituting Proceedings

    Science.gov (United States)

    2011-08-22

    .... Applicants: Young Gas Storage Company, Ltd. Description: Young Gas Storage Company, Ltd. submits tariff..., but intervention is necessary to become a party to the proceeding. The filings are accessible in the.... More detailed information relating to filing requirements, interventions, protests, and service can be...

  15. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER design database consists of a Results Database, an Inter-Office Communication (IOC) system, a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results of phase II of Liquid Metal Reactor Design Technology Development under the mid- and long-term nuclear R&D program. IOC is a linkage control system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD database gives a schematic design overview of KALIMER. The Team Cooperation system informs team members of research cooperation and meetings. Finally, the KALIMER Reserved Documents component was developed to manage the collected data and the various documents accumulated during the project. This report describes the hardware and software features and the database design methodology for KALIMER

  16. A list of image files of planarians analyzed by in situ hybridication and immunohistochemical staining - Plabrain DB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Plabrain DB: a list of image files of planarians analyzed by in situ hybridization and immunohistochemical staining. The data set covers signals detected by whole-mount in situ hybridization and protein distribution detected by immunohistochemical staining in intact planarians. Data acquisition method: whole-mount in situ hybridization, immunohistochemical staining.

  17. First progress report on the Japan Endoscopy Database project.

    Science.gov (United States)

    Kodashima, Shinya; Tanaka, Kiyohito; Matsuda, Koji; Fujishiro, Mitsuhiro; Saito, Yutaka; Ohtsuka, Kazuo; Oda, Ichiro; Katada, Chikatoshi; Kato, Masayuki; Kida, Mitsuhiro; Kobayashi, Kiyonori; Hoteya, Shu; Horimatsu, Takahiro; Matsuda, Takahisa; Muto, Manabu; Yamamoto, Hironori; Ryozawa, Shomei; Iwakiri, Ryuichi; Kutsumi, Hiromu; Miyata, Hiroaki; Kato, Mototsugu; Haruma, Ken; Fujimoto, Kazuma; Uemura, Naomi; Kaminishi, Michio; Tajiri, Hisao

    2018-01-01

    The Japan Endoscopy Database (JED) Project was started to develop the world's largest endoscopic database, capture the actual performance of endoscopic practice, and standardize the terminology and fundamental items needed for a clinical and research registry. This paper presents a progress report on the first phase of this project undertaken at eight endoscopic centers in Japan. The list of data items to be collected was drafted by the MSED-J (Minimal Standard Endoscopic Database) subcommittee. These items were aggregated offline by integrating data from two endoscopic filing systems between July 2015 and December 2015. The study population included all patients who underwent esophagogastroduodenoscopy or colonoscopy at all eight centers, patients who underwent enteroscopy at five of the eight centers, and patients who underwent endoscopic retrograde cholangiopancreatography (ERCP) at four of the eight centers. Data collected in this phase included 61 070 endoscopic procedures, of which 40 475 were esophagogastroduodenoscopies, 215 were enteroscopies, 19 204 were colonoscopies, and 1176 were ERCPs. Frequencies of complications were 0.68% for esophagogastroduodenoscopy, 0% for enteroscopy, 0.43% for colonoscopy, and 13.34% for ERCP. In addition, we obtained various data including Helicobacter pylori infection status, past history of endoscopy in patients who underwent enteroscopy or colonoscopy, and degree of difficulty of ERCP, although the frequencies of reporting were sometimes low, with some items <20%. Results of the first phase suggest that the JED project can provide vast quantities of useful data about endoscopic procedures. © 2017 Japan Gastroenterological Endoscopy Society.

  18. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SAHG: Database Description. Database name: SAHG. Contact: Chie Motono, Tel: +81-3-3599-8067. Database classification: Structure Databases; Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Molecular Profiling Research Center for Drug Discovery.

  19. Evaluated neutronic file for indium

    International Nuclear Information System (INIS)

    Smith, A.B.; Chiba, S.; Smith, D.L.; Meadows, J.W.; Guenther, P.T.; Lawson, R.D.; Howerton, R.J.

    1990-01-01

    A comprehensive evaluated neutronic data file for elemental indium is documented. This file, extending from 10^-5 eV to 20 MeV, is presented in the ENDF/B-VI format, and contains all neutron-induced processes necessary for the vast majority of neutronic applications. In addition, an evaluation of the 115In(n,n')116mIn dosimetry reaction is presented as a separate file. Attention is given to quantitative values, with corresponding uncertainty information. These files have been submitted for consideration as a part of the ENDF/B-VI national evaluated-file system. 144 refs., 10 figs., 4 tabs

  20. FHEO Filed Cases

    Data.gov (United States)

    Department of Housing and Urban Development — The dataset is a list of all the Title VIII fair housing cases filed by FHEO from 1/1/2007 - 12/31/2012 including the case number, case name, filing date, state and...