WorldWideScience

Sample records for subscription databases performed

  1. Further Analysis of Boiling Points of Small Molecules, CH[subscript w]F[subscript x]Cl[subscript y]Br[subscript z]

    Science.gov (United States)

    Beauchamp, Guy

    2005-01-01

    A study that presents specific hypotheses to satisfactorily explain the boiling points of a number of structurally similar molecules, CH[subscript w]F[subscript x]Cl[subscript y]Br[subscript z], and then analyzes the model with the help of multiple linear regression (MLR), a data analysis tool. The MLR analysis was useful in selecting the…
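The kind of MLR fit the abstract describes can be sketched with ordinary least squares. The substituent counts and boiling points below are illustrative placeholders, not data from the study:

```python
import numpy as np

# Hypothetical substituent counts (F, Cl, Br) for a few CHwFxClyBrz
# molecules, with illustrative boiling points in kelvin -- placeholder
# values, not the data analyzed in the study.
X = np.array([
    [1, 0, 0],   # CH3F
    [2, 1, 0],   # CHF2Cl
    [0, 2, 0],   # CH2Cl2
    [0, 3, 0],   # CHCl3
    [0, 0, 2],   # CH2Br2
])
y = np.array([195.0, 233.0, 313.0, 334.0, 370.0])

# Ordinary least squares: prepend an intercept column, solve A @ beta ~= y.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination of the fitted model.
pred = A @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

With an intercept in the design matrix, r2 is guaranteed to fall between 0 and 1; the fitted coefficients then indicate how much each halogen substituent contributes to the boiling point under this toy model.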

  2. Optimization Performance of a CO[subscript 2] Pulsed Tuneable Laser

    Science.gov (United States)

    Ribeiro, J. H. F.; Lobo, R. F. M.

    2009-01-01

    In this paper, a procedure is presented that will allow (i) the power and (ii) the energy of a pulsed and tuneable TEA CO[subscript 2] laser to be optimized. This type of laser represents a significant improvement in performance and portability. Combining a pulse mode with a grating tuning facility, it enables us to scan the working wavelength…

  3. Specialist Bibliographic Databases

    OpenAIRE

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A.; Trukhachev, Vladimir I.; Kostyukova, Elena I.; Gerasimov, Alexey N.; Kitas, George D.

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and d...

  4. Subscriptions

    African Journals Online (AJOL)

    £36.30 less 10% to Subscription Agent (£32.67 + £9.33 Forex charges = £42.00 to ISEA, Rhodes University). $57.48 less 10% to Subscription Agent ($51.73 + $18.27 Forex charges = $70.00 to ISEA, Rhodes University). Airmail: Please add a further £9/$16. Africa: Individuals: R230.00 less 10% to Agent - (R 207.00 incl.

  5. Vibrational Spectroscopy of the CCl[subscript 4] ν[subscript 1] Mode: Effect of Thermally Populated Vibrational States

    Science.gov (United States)

    Gaynor, James D.; Wetterer, Anna M.; Cochran, Rea M.; Valente, Edward J.; Mayer, Steven G.

    2015-01-01

    In our previous article on CCl[subscript 4] in this "Journal," we presented an investigation of the fine structure of the symmetric stretch of carbon tetrachloride (CCl[subscript 4]) due to isotopic variations of chlorine in C[superscript 35]Cl[subscript x][superscript 37]Cl[subscript 4-x]. In this paper, we present an investigation of…

  6. Sustainable funding for biocuration: The Arabidopsis Information Resource (TAIR) as a case study of a subscription-based funding model.

    Science.gov (United States)

    Reiser, Leonore; Berardini, Tanya Z; Li, Donghui; Muller, Robert; Strait, Emily M; Li, Qian; Mezheritsky, Yarik; Vetushko, Andrey; Huala, Eva

    2016-01-01

    Databases and data repositories provide essential functions for the research community by integrating, curating, archiving and otherwise packaging data to facilitate discovery and reuse. Despite their importance, funding for maintenance of these resources is increasingly hard to obtain. Fueled by a desire to find long-term, sustainable solutions to database funding, staff from the Arabidopsis Information Resource (TAIR) founded the nonprofit organization Phoenix Bioinformatics, using TAIR as a test case for user-based funding. Subscription-based funding has been proposed as an alternative to grant funding, but its application has been very limited within the nonprofit sector. Our testing of this model indicates that it is a viable option, at least for some databases, and that it is possible to strike a balance that maximizes access while still incentivizing subscriptions. One year after transitioning to subscription support, TAIR is self-sustaining and Phoenix is poised to expand and support additional resources that wish to incorporate user-based funding strategies. Database URL: www.arabidopsis.org. © The Author(s) 2016. Published by Oxford University Press.

  7. Power Subscription Strategy

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1998-12-21

    This document lays out the Bonneville Power Administration's Power Subscription Strategy, a process that will enable the people of the Pacific Northwest to share the benefits of the Federal Columbia River Power System after 2001 while retaining those benefits within the region for future generations. The strategy also addresses how those who receive the benefits of the region's low-cost federal power should share a corresponding measure of the risks. This strategy seeks to implement the subscription concept created by the Comprehensive Review in 1996 through contracts for the sale of power and the distribution of federal power benefits in the deregulated wholesale electricity market. The success of the subscription process is fundamental to BPA's overall business purpose to provide public benefits to the Northwest through commercially successful businesses.

  8. Nitration of Phenols Using Cu(NO[subscript 3])[subscript 2]: Green Chemistry Laboratory Experiment

    Science.gov (United States)

    Yadav, Urvashi; Mande, Hemant; Ghalsasi, Prasanna

    2012-01-01

    An easy-to-complete, microwave-assisted, green chemistry, electrophilic nitration method for phenol using Cu(NO[subscript 3])[subscript 2] in acetic acid is discussed. With this experiment, students clearly understand the mechanism underlying the nitration reaction in one laboratory session. (Contains 4 schemes.)

  9. Subscriptions

    African Journals Online (AJOL)

    Subscriptions Contact. Dr Basil C Ezeanolue Department of Otorhinolaryngology College of Medicine University of Nigeria Teaching Hospital Enugu 400 001. Nigeria Email: editornjorl@yahoo.com. Individuals Nigeria - 600.00 Naira Africa - US$15.00. Other Countries - US$25.00. Institutions Nigeria - 1200.00 Naira

  10. 47 CFR 73.641 - Subscription TV definitions.

    Science.gov (United States)

    2010-10-01

    Title 47 (Telecommunication), FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), BROADCAST RADIO SERVICES, RADIO BROADCAST SERVICES, Television Broadcast Stations. § 73.641 Subscription TV definitions. (a) Subscription…

  11. Subscriptions

    African Journals Online (AJOL)

    Subscriptions Contact. Professor Dele Braimoh P.O. Box 392. UNISA 0003. Pretoria South Africa Email: dbraimoh@yahoo.com. The AJCPSF requires a token payment of N5000 (excluding Review fee) from Nigerian and $150 ( International) contributors for maintaining the initial cost of editorial works on submitted articles ...

  12. Subscriptions

    African Journals Online (AJOL)

    US$80 per volume (Institutional) US$45 per volume (Individuals) Single copies: US$35. Southern African Subscriptions: R70 per volume (Institutional) R50 per volume (Individuals) Single copies: R30 European sales from Intervention Press, Castenschioldsvej 7, DK 8270 Hojbjerg, Denmark. E-mail: interven@inet.uni2.dk.

  13. subscription information

    Indian Academy of Sciences (India)

    Administrator

    Subscription information for the Academy's journals: Sadhana (Proceedings in Engineering Sciences), Journal of Science Education, Journal of Astrophysics and Astronomy, Journal of Biosciences, Journal of Chemical Sciences, Journal of Earth Sys…

  14. Google Scholar Out-Performs Many Subscription Databases when Keyword Searching. A Review of: Walters, W. H. (2009). Google Scholar search performance: Comparative recall and precision. portal: Libraries and the Academy, 9(1), 5-24.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2010-09-01

    title search in Google Scholar using the same keywords, elderly and migration. Compared to the standard search on the same topic, there was almost no difference in recall or precision when a title search was performed and the first 50 results were viewed. Conclusion – Database search performance differs significantly from one field to another, so a comparative study using a different search topic might produce different search results from those summarized above. Nevertheless, Google Scholar out-performs many subscription databases – in terms of recall and precision – when using keyword searches for some topics, as was the case for the multidisciplinary topic of later-life migration. Google Scholar's recall and precision rates were high within the first 10 to 100 search results examined. According to the author, "these findings suggest that a searcher who is unwilling to search multiple databases or to adopt a sophisticated search strategy is likely to achieve better than average recall and precision by using Google Scholar" (p. 16). The author concludes the paper by discussing the relevancy of search results obtained by undergraduate students. All of the 155 relevant journal articles on the topic of later-life migration were pre-selected based on an expert critique of the complete articles, rather than by looking at only the titles or abstracts of references as most searchers do. Instructors and librarians may wish to support the use of databases that increase students' contact with high-quality research documents (i.e., documents that are authoritative, well written, contain a strong analysis, or demonstrate quality in other ways). The study's findings indicate that Google Scholar is an example of one such database, since it obtained a large number of references to the relevant papers on the topic searched.
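The recall and precision figures this review reports are computed over the first k search results. A minimal sketch, with toy document IDs rather than the study's data:

```python
def recall_precision_at_k(ranked_ids, relevant_ids, k):
    """Recall and precision computed over the first k search results."""
    top = ranked_ids[:k]
    hits = sum(1 for doc in top if doc in relevant_ids)
    return hits / len(relevant_ids), hits / len(top)

# Toy data: 4 relevant documents, a ranked result list of 6.
relevant = {"a", "b", "c", "d"}
results = ["a", "x", "b", "y", "c", "z"]
recall, precision = recall_precision_at_k(results, relevant, 5)
# recall = 3/4 = 0.75, precision = 3/5 = 0.6
```

In the study, the denominator for recall was the pool of 155 expert-vetted relevant articles, and k ranged over the first 10 to 100 results.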

  15. Subscriptions

    African Journals Online (AJOL)

    Subscriptions Contact. Rachel Rege Kenya Agricultural Research Institute, P O Box 57811, Nairobi, KENYA Email: rrege@kari.org. East Africa Kssh 4000; Sterling 40; USA $62; including postage by surface mail. Postage by air mail can be arranged at cost on request. Payments to be made to the Editor, East African ...

  16. L[subscript 1] and L[subscript 2] Spoken Word Processing: Evidence from Divided Attention Paradigm

    Science.gov (United States)

    Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour

    2016-01-01

    The present study aims to reveal some facts concerning first language (L[subscript 1]) and second language (L[subscript 2]) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in first and second languages of…

  17. Improved Syntheses and Expanded Analyses of the Enantiomerically Enriched Chiral Cobalt Complexes Co(en)[subscript 3]I[subscript 3] and Co(diNOsar)Br[subscript 3]

    Science.gov (United States)

    McClellan, Michael J.; Cass, Marion E.

    2015-01-01

    This communication is a collection of additions and modifications to two previously published classic inorganic synthesis laboratory experiments. The experimental protocol for the synthesis and isolation of enantiomerically enriched Λ- (or Δ-)Co(en)[subscript 3]I[subscript 3] has been modified to increase reproducibility, yield, and enantiomeric…

  18. Evaluation of Maximal O[subscript 2] Uptake with Undergraduate Students at the University of La Reunion

    Science.gov (United States)

    Tarnus, Evelyne; Catan, Aurelie; Verkindt, Chantal; Bourdon, Emmanuel

    2011-01-01

    The maximal rate of O[subscript 2] consumption (VO[subscript 2max]) constitutes one of the oldest fitness indexes established for the measure of cardiorespiratory fitness and aerobic performance. Procedures have been developed in which VO[subscript 2max] is estimated from physiological responses during submaximal exercise. Generally, VO[subscript…

  19. Utilization of office computer for journal subscription

    International Nuclear Information System (INIS)

    Yonezawa, Minoru; Shimizu, Tokiyo

    1993-01-01

    An integrated library automation system has been operating in the Japan Atomic Energy Research Institute library, which consists of three subsystems for serials control, book acquisition and circulation control. Functions of the serials control subsystem have been improved to reduce the workload of journal subscription work. Subscription price (both in foreign currency and Japanese yen), conversion factor, and foreign currency exchange rate are newly introduced as data elements to a master file for automatic calculation and totalization in the system, e.g. conversion of subscription price from foreign currency into Japanese yen. Some kinds of journal lists are also printed out, such as a journal subscription list, a journal distribution list for each laboratory, etc. (author)
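The automatic yen conversion described above (price × conversion factor × exchange rate) can be sketched as follows; the function name and the sample figures are hypothetical, not taken from the JAERI system:

```python
def yen_price(foreign_price, exchange_rate, conversion_factor=1.0):
    """Convert a subscription price quoted in a foreign currency into
    Japanese yen, applying an agent/publisher conversion factor and the
    current exchange rate, rounded to the nearest yen.
    (Hypothetical signature, not the actual master-file layout.)"""
    return round(foreign_price * conversion_factor * exchange_rate)

# Hypothetical figures: a $450.00 journal, a 5% conversion surcharge,
# and an exchange rate of 110 yen per dollar.
total = yen_price(450.00, 110.0, 1.05)  # 51975 yen
```

Storing the three factors as separate data elements, as the abstract describes, lets the system recompute and totalize yen prices automatically whenever the exchange rate changes.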

  20. Database in Artificial Intelligence.

    Science.gov (United States)

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  1. Using a Subscription Agent for E-Journal Management

    Science.gov (United States)

    Grogg, Jill E.

    2010-01-01

    Subscription agents have had to reinvent themselves over the past 15 years as the numbers of print subscriptions have dramatically dwindled. Many libraries have chosen to bypass the subscription agent and its extra fees in favor of dealing directly with the publisher for e-journal and e-journal package procurement and management. Especially in…

  2. Managing Distributed Systems with Smart Subscriptions

    Science.gov (United States)

    Filman, Robert E.; Lee, Diana D.; Swanson, Keith (Technical Monitor)

    2000-01-01

    We describe an event-based, publish-and-subscribe mechanism based on using 'smart subscriptions' to recognize weakly-structured events. We present a hierarchy of subscription languages (propositional, predicate, temporal and agent) and algorithms for efficiently recognizing event matches. This mechanism has been applied to the management of distributed applications.
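A minimal sketch of predicate-based "smart subscriptions" in the spirit of this abstract, assuming events are plain dictionaries; it illustrates only the propositional/predicate level of the subscription-language hierarchy, not the temporal or agent levels:

```python
class EventBus:
    """Minimal publish-and-subscribe sketch: a 'smart subscription' is a
    predicate over an event dict, so subscribers match weakly structured
    events by content rather than by a fixed topic name."""

    def __init__(self):
        self._subs = []  # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self._subs.append((predicate, callback))

    def publish(self, event):
        # Dispatch the event to every subscription whose predicate matches.
        for predicate, callback in self._subs:
            if predicate(event):
                callback(event)

bus = EventBus()
alerts = []
# Predicate-style subscription: match any high-severity event.
bus.subscribe(lambda e: e.get("severity") == "high", alerts.append)
bus.publish({"type": "disk", "severity": "high"})
bus.publish({"type": "cpu", "severity": "low"})
# alerts now holds only the high-severity event
```

Efficient matching at scale (the algorithms the abstract mentions) would replace this linear scan with an index over subscription predicates.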

  3. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  4. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  5. OAS :: Email subscriptions

    Science.gov (United States)


  6. Internet Access from CERN GSM subscriptions

    CERN Multimedia

    IT Department

    2008-01-01

    The data service on GSM subscriptions has been improved, allowing CERN users to access the Internet directly. A CERN GSM subscription with data option now allows you to connect to the Internet from a mobile phone or a PC equipped with a GSM modem. The previous access (CERN intranet) still exists. To get access to the new service, you will find all the information on configurations at: http://cern.ch/gprs The use of this service on the Sunrise network is charged on a flat-rate basis (no extra charge related to the volume of downloaded data). Depending on your CERN subscription type (standard or master), you can also connect to foreign GSM data networks (roaming), but this is strongly discouraged, except where absolutely necessary, due to international roaming charges. Telecom Section, IT/CS

  7. K[subscript a] and K[subscript b] from pH and Conductivity Measurements: A General Chemistry Laboratory Exercise

    Science.gov (United States)

    Nyasulu, Frazier; Moehring, Michael; Arthasery, Phyllis; Barlag, Rebecca

    2011-01-01

    The acid ionization constant, K[subscript a], of acetic acid and the base ionization constant, K[subscript b], of ammonia are determined easily and rapidly using a datalogger, a pH sensor, and a conductivity sensor. To decrease sample preparation time and to minimize waste, sequential aliquots of a concentrated standard are added to a known volume…

  8. Open access versus subscription journals: a comparison of scientific impact.

    Science.gov (United States)

    Björk, Bo-Christer; Solomon, David

    2012-07-17

    In the past few years there has been an ongoing debate as to whether the proliferation of open access (OA) publishing would damage the peer review system and put the quality of scientific journal publishing at risk. Our aim was to inform this debate by comparing the scientific impact of OA journals with subscription journals, controlling for journal age, the country of the publisher, discipline and (for OA publishers) their business model. The 2-year impact factors (the average number of citations to the articles in a journal) were used as a proxy for scientific impact. The Directory of Open Access Journals (DOAJ) was used to identify OA journals as well as their business model. Journal age and discipline were obtained from the Ulrich's periodicals directory. Comparisons were performed on the journal level as well as on the article level where the results were weighted by the number of articles published in a journal. A total of 610 OA journals were compared with 7,609 subscription journals using Web of Science citation data while an overlapping set of 1,327 OA journals were compared with 11,124 subscription journals using Scopus data. Overall, average citation rates, both unweighted and weighted for the number of articles per journal, were about 30% higher for subscription journals. However, after controlling for discipline (medicine and health versus other), age of the journal (three time periods) and the location of the publisher (four largest publishing countries versus other countries) the differences largely disappeared in most subcategories except for journals that had been launched prior to 1996. OA journals that fund publishing with article processing charges (APCs) are on average cited more than other OA journals. In medicine and health, OA journals founded in the last 10 years are receiving about as many citations as subscription journals launched during the same period. Our results indicate that OA journals indexed in Web of Science and/or Scopus are
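The article-level weighting the study describes (citation rates weighted by the number of articles per journal) amounts to a weighted mean. A sketch with invented journal figures, purely for illustration:

```python
def weighted_citation_rate(journals):
    """Article-level mean citation rate: each journal's 2-year impact
    factor weighted by the number of articles it published."""
    total_citations = sum(impact * n for impact, n in journals)
    total_articles = sum(n for _, n in journals)
    return total_citations / total_articles

# Invented (impact factor, article count) pairs for illustration only.
subscription_journals = [(3.0, 200), (1.0, 50)]
open_access_journals = [(2.0, 100), (1.5, 100)]

sub_rate = weighted_citation_rate(subscription_journals)  # (600+50)/250 = 2.6
oa_rate = weighted_citation_rate(open_access_journals)    # (200+150)/200 = 1.75
```

Weighting by article count shifts the comparison from "the average journal" to "the average article", which is why the study reports both unweighted and weighted figures.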

  9. Power Subscription Strategy: Administrator's Record of Decision.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration

    1998-12-01

    The Bonneville Power Administration (BPA) has decided to adopt a Power Subscription Strategy for entering into new power sales contracts with its Pacific Northwest customers. The Strategy equitably distributes the electric power generated by the Federal Columbia River Power System (FCRPS) within the framework of existing law. The Power Subscription Strategy addresses the availability of power; describes power products; lays out strategies for pricing, including risk management; and discusses contract elements. In proceeding with this Subscription Strategy, BPA is guided by and committed to the Fish and Wildlife Funding Principles for the BPA announced by the Vice President of the US in September 1998. This Record of Decision (ROD) addresses the issues raised by commenters who responded to BPA's Power Subscription Strategy Proposal during and after the comment period that began with the release of the Proposal on September 18, 1998. The ROD is organized in approximately the same way as the Proposal and the Power Subscription Strategy that BPA developed based on the comments received. Abbreviations of party names used in citations appear in the section just preceding this introduction; a list of all the commenters follows the text of the ROD.

  10. 46 CFR 201.42 - Subscription, authentication of documents.

    Science.gov (United States)

    2010-10-01

    Title 46 (Shipping), MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION, POLICY, PRACTICE AND… § 201.42 Subscription, authentication of documents. (a) Documents filed shall be subscribed: (1) By the person or…

  11. 41 CFR 101-25.108 - Multiyear subscriptions for publications.

    Science.gov (United States)

    2010-07-01

    Title 41 (Public Contracts and Property Management), Federal…-GENERAL, 25.1 General Policies. § 101-25.108 Multiyear subscriptions for publications. Subscriptions for periodicals, newspapers, and other publications for which it is known in advance that a continuing requirement…

  12. Managing without a subscription agent: the experience of doing it yourself

    Directory of Open Access Journals (Sweden)

    Lisa Lovén

    2017-11-01

    In October 2015 Stockholm University Library (SUB) decided to no longer use the services of a subscription agent for managing individual journal subscriptions. Instead, SUB has taken a do-it-yourself (DIY) approach to subscriptions management and now renews and orders new journals directly from each publisher. In the light of two years of experience, this article discusses the key findings of this new way of working with subscriptions, the differences between the first and second year of renewing directly with publishers, and the pros and cons of not using an agent. The article ends with a few recommendations and things for other libraries to consider before making the decision to do without a subscription agent, and explains why SUB has decided to continue with the DIY approach. Based on a breakout session presented at the 40th UKSG Annual Conference, Harrogate, April 2017.

  13. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles, Programming, Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance. Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  14. Anon-Pass: Practical Anonymous Subscriptions.

    Science.gov (United States)

    Lee, Michael Z; Dunn, Alan M; Katz, Jonathan; Waters, Brent; Witchel, Emmett

    2013-12-31

    We present the design, security proof, and implementation of an anonymous subscription service. Users register for the service by providing some form of identity, which might or might not be linked to a real-world identity such as a credit card, a web login, or a public key. A user logs on to the system by presenting a credential derived from information received at registration. Each credential allows only a single login in any authentication window, or epoch. Logins are anonymous in the sense that the service cannot distinguish which user is logging in any better than random guessing. This implies unlinkability of a user across different logins. We find that a central tension in an anonymous subscription service is the service provider's desire for a long epoch (to reduce server-side computation) versus users' desire for a short epoch (so they can repeatedly "re-anonymize" their sessions). We balance this tension by having short epochs, but adding an efficient operation for clients who do not need unlinkability to cheaply re-authenticate themselves for the next time period. We measure performance of a research prototype of our protocol that allows an independent service to offer anonymous access to existing services. We implement a music service, an Android-based subway-pass application, and a web proxy, and show that adding anonymity adds minimal client latency and only requires 33 KB of server memory per active user.
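The one-login-per-epoch bookkeeping described above can be sketched as follows. This is a toy model of the epoch mechanics only: the epoch length, names, and HMAC-based token are assumptions, and the real protocol's anonymity rests on zero-knowledge machinery not reproduced here.

```python
import hashlib
import hmac

EPOCH_SECONDS = 900  # assumed authentication-window length (15 minutes)

def epoch_of(timestamp):
    """Map a Unix timestamp to its authentication window (epoch)."""
    return timestamp // EPOCH_SECONDS

def login_token(user_secret, epoch):
    """Per-epoch pseudorandom token: it changes every epoch, so tokens
    from different epochs cannot be linked by value alone. (The real
    protocol uses cryptographic credentials, not a bare HMAC.)"""
    return hmac.new(user_secret, str(epoch).encode(), hashlib.sha256).hexdigest()

class SubscriptionServer:
    """Enforces one login per credential per epoch."""

    def __init__(self):
        self.seen = {}  # epoch -> set of tokens already used this epoch

    def try_login(self, token, epoch):
        used = self.seen.setdefault(epoch, set())
        if token in used:
            return False  # a second login in the same epoch is refused
        used.add(token)
        return True

server = SubscriptionServer()
now = 1_000_000  # example timestamp
tok = login_token(b"client-secret", epoch_of(now))
first = server.try_login(tok, epoch_of(now))   # accepted
second = server.try_login(tok, epoch_of(now))  # refused: same epoch
```

The epoch-length tension the abstract describes shows up directly here: a longer EPOCH_SECONDS means fewer token derivations and smaller server state, while a shorter one lets clients re-anonymize more often.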

  15. Measurement of Levitation Forces of High-"T[subscript c]" Superconductors

    Science.gov (United States)

    Becker, M.; Koblischka, M. R.; Hartmann, U.

    2010-01-01

    We show the construction of a so-called levitation balance which is capable of measuring the levitation forces between a permanent magnet and a superconducting high-T[subscript c] thin film sample. The underlying theoretical basis is discussed in detail. The experiment is performed as an introductory physics experiment for school students as well…

  16. Preparing College Students To Search Full-Text Databases: Is Instruction Necessary?

    Science.gov (United States)

    Riley, Cheryl; Wales, Barbara

    Full-text databases allow Central Missouri State University's clients to access some of the serials that libraries have had to cancel due to escalating subscription costs; EbscoHost, the subject of this study, is one such database. The database is available free to all Missouri residents. A survey was designed consisting of 21 questions intended…

  17. Synthesis of "trans"-4,5-Bis-dibenzylaminocyclopent-2-Enone from Furfural Catalyzed by ErCl[subscript 3]·6H[subscript 2]O

    Science.gov (United States)

    Estevão, Mónica S.; Martins, Ricardo J. V.; Alfonso, Carlos A. M.

    2017-01-01

    An experiment exploring the chemistry of the carbonyl group for the one-step synthesis of "trans"-4,5-bis-dibenzylaminocyclopent-2-enone is described. The reaction of furfural and dibenzylamine in the environmentally friendly solvent ethanol and catalyzed by the Lewis acid ErCl[subscript 3]·6H[subscript 2]O afforded the product in high…

  18. Food advertising on children's popular subscription television channels in Australia.

    Science.gov (United States)

    Hebden, Lana; King, Lesley; Chau, Josephine; Kelly, Bridget

    2011-04-01

    Trends on Australian free-to-air television show children continue to be exposed to a disproportionate amount of unhealthy food advertising. This study describes the nature and extent of food marketing on the Australian subscription television channels most popular with children. Advertisements broadcast on the six subscription television channels most popular with children were recorded over four days in February 2009. Advertised foods were coded as core/healthy, non-core/unhealthy or miscellaneous/other, and for persuasive marketing techniques (promotional characters, premium offers and nutrition claims). The majority of foods advertised were non-core (72%), with a mean rate of 0.7 non-core food advertisements broadcast per hour, per channel. The frequency of non-core food advertisements differed significantly across channels. Persuasive techniques were used to advertise non-core foods less frequently than core and miscellaneous foods. Non-core foods make up the majority of foods advertised on children's popular subscription channels. However, Australian children currently view less non-core food advertising on subscription television compared with free-to-air. Unlike free-to-air television, subscription services have the unique opportunity to limit inappropriate food marketing to children, given they are less reliant on advertising revenue. © 2011 The Authors. ANZJPH © 2011 Public Health Association of Australia.

  19. Human Performance Event Database

    International Nuclear Information System (INIS)

    Trager, E. A.

    1998-01-01

    The purpose of this paper is to describe several aspects of a Human Performance Event Database (HPED) that is being developed by the Nuclear Regulatory Commission. These include the background, the database structure and basis for the structure, the process for coding and entering event records, the results of preliminary analyses of information in the database, and plans for the future. In 1992, the Office for Analysis and Evaluation of Operational Data (AEOD) within the NRC decided to develop a database for information on human performance during operating events. The database was needed to classify and categorize the information and to help feed back operating experience to licensees and others. An NRC interoffice working group prepared a list of human performance information that should be reported for events, and the list was based on the Human Performance Investigation Process (HPIP) that had been developed by the NRC as an aid in investigating events. The structure of the HPED was based on that list. The HPED currently includes data on events described in augmented inspection team (AIT) and incident investigation team (IIT) reports from 1990 through 1996, AEOD human performance studies from 1990 through 1993, recent NRR special team inspections, and licensee event reports (LERs) that were prepared for the events. (author)

  20. CERN GSM SUBSCRIPTIONS

    CERN Multimedia

    Labo Telecom

    2002-01-01

    AS Division has created a new EDH document for handling all GSM subscription requests and amendments. This procedure will enter into force immediately, and from now on the Labo Telecom stores will no longer be able to deal with requests submitted on paper forms. Detailed information on the subject can be found here, and the Labo Telecom stores will continue to open every day between 11.00 a.m. and 12.00 midday. IT-CS-TEL, Labo Telecom

  1. Regionally Selective Requirement for D[subscript 1]/D[subscript 5] Dopaminergic Neurotransmission in the Medial Prefrontal Cortex in Object-in-Place Associative Recognition Memory

    Science.gov (United States)

    Savalli, Giorgia; Bashir, Zafar I.; Warburton, E. Clea

    2015-01-01

    Object-in-place (OiP) memory is critical for remembering the location in which an object was last encountered and depends conjointly on the medial prefrontal cortex, perirhinal cortex, and hippocampus. Here we examined the role of dopamine D[subscript 1]/D[subscript 5] receptor neurotransmission within these brain regions for OiP memory. Bilateral…

  2. Open Access, Library Subscriptions, and Article Processing Charges

    KAUST Repository

    Vijayakumar, J.K.

    2016-05-01

    Hybrid journals contain articles behind a pay-wall, available by subscription, as well as papers made open access when the author pays an article processing charge (APC). In such cases, an institution can end up paying twice, and publishers tend to double-dip. Discussions and pilot models are emerging on pricing options, such as “offset pricing” [where APCs are adjusted or discounted against subscription costs, as vouchers or reductions in next year's subscriptions; APCs beyond the subscription costs are modestly capped, etc.], to reduce institutions' costs. This presentation will explain the different models available and how we can attain a transparent costing structure, where the scholarly community can feel the fairness of publishers' pricing mechanisms. Though most offset systems are developed through national-level or consortium-level negotiations, the experience of individual institutions such as KAUST, which subscribe to large e-journal collections, is important in making the right decisions on saving institutional costs and supporting openness in scholarly communications.

  3. Open Access, Library Subscriptions, and Article Processing Charges

    KAUST Repository

    Vijayakumar, J.K.; Tamarkin, Molly

    2016-01-01

    Hybrid journals contain articles behind a pay-wall, available by subscription, as well as papers made open access when the author pays an article processing charge (APC). In such cases, an institution can end up paying twice, and publishers tend to double-dip. Discussions and pilot models are emerging on pricing options, such as “offset pricing” [where APCs are adjusted or discounted against subscription costs, as vouchers or reductions in next year's subscriptions; APCs beyond the subscription costs are modestly capped, etc.], to reduce institutions' costs. This presentation will explain the different models available and how we can attain a transparent costing structure, where the scholarly community can feel the fairness of publishers' pricing mechanisms. Though most offset systems are developed through national-level or consortium-level negotiations, the experience of individual institutions such as KAUST, which subscribe to large e-journal collections, is important in making the right decisions on saving institutional costs and supporting openness in scholarly communications.

  4. OLIO+: an osteopathic medicine database.

    Science.gov (United States)

    Woods, S E

    1991-01-01

    OLIO+ is a bibliographic database designed to meet the information needs of the osteopathic medical community. Produced by the American Osteopathic Association (AOA), OLIO+ is devoted exclusively to the osteopathic literature. The database is available only by subscription through AOA and may be accessed from any data terminal with modem or IBM-compatible personal computer with telecommunications software that can emulate VT100 or VT220. Apple access is also available, but some assistance from OLIO+ support staff may be necessary to modify the Apple keyboard.

  5. Vibrational Spectroscopy of the CCl[subscript 4] v[subscript 1] Mode: Theoretical Prediction of Isotopic Effects

    Science.gov (United States)

    Gaynor, James D.; Wetterer, Anna M.; Cochran, Rea M.; Valente, Edward J.; Mayer, Steven G.

    2015-01-01

    Raman spectroscopy is a powerful experimental technique, yet it is often missing from the undergraduate physical chemistry laboratory curriculum. Tetrachloromethane (CCl[subscript 4]) is the ideal molecule for an introductory vibrational spectroscopy experiment and the symmetric stretch vibration contains fine structure due to isotopic variations…

  6. Patterns of Undergraduates' Use of Scholarly Databases in a Large Research University

    Science.gov (United States)

    Mbabu, Loyd Gitari; Bertram, Albert; Varnum, Ken

    2013-01-01

    Authentication data was utilized to explore undergraduate usage of subscription electronic databases. These usage patterns were linked to the information literacy curriculum of the library. The data showed that out of the 26,208 enrolled undergraduate students, 42% of them accessed a scholarly database at least once in the course of the entire…

  7. PostgreSQL database performance optimization

    OpenAIRE

    Wang, Qiang

    2011-01-01

    The thesis was requested by Marlevo software Oy as a general description of the PostgreSQL database and its performance optimization techniques. Its purpose was to help new PostgreSQL users quickly understand the system and to assist DBAs in improving database performance. The thesis was divided into two parts. The first part described PostgreSQL database optimization techniques in theory. In addition, popular tools were also introduced. This part was based on PostgreSQL documentation, r...
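
One of the optimization techniques a text like this typically covers is indexing, verified with the database's plan-inspection tool. A minimal, hypothetical sketch follows, using SQLite's EXPLAIN QUERY PLAN as a server-free stand-in for PostgreSQL's EXPLAIN (ANALYZE); all table and index names are invented:

```python
import sqlite3

# Build a small table so the planner has something to choose over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [(f"cust{i % 100}", float(i)) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); keep the detail text.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT total FROM orders WHERE customer = 'cust7'"
before = plan(query)   # without an index: a full table scan ("SCAN ...")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
after = plan(query)    # with the index: an index search ("SEARCH ... USING INDEX ...")
print(before, after)
```

The same before/after comparison, run against PostgreSQL's EXPLAIN output, is the basic workflow for checking that an index is actually being used.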

  8. Trials by Juries: Suggested Practices for Database Trials

    Science.gov (United States)

    Ritterbush, Jon

    2012-01-01

    Librarians frequently utilize product trials to assess the content and usability of a database prior to committing funds to a new subscription or purchase. At the 2012 Electronic Resources and Libraries Conference in Austin, Texas, three librarians presented a panel discussion on their institutions' policies and practices regarding database…

  9. Health sciences libraries' subscriptions to journals: expectations of general practice departments and collection-based analysis.

    Science.gov (United States)

    Barreau, David; Bouton, Céline; Renard, Vincent; Fournier, Jean-Pascal

    2018-04-01

    The aims of this study were to (i) assess the expectations of general practice departments regarding health sciences libraries' subscriptions to journals and (ii) describe the current general practice journal collections of health sciences libraries. A cross-sectional survey was distributed electronically to the thirty-five university general practice departments in France. General practice departments were asked to list ten journals to which they expected access via the subscriptions of their health sciences libraries. A ranked reference list of journals was then developed. Access to these journals was assessed through a survey sent to all health sciences libraries in France. Adequacy ratios (access/need) were calculated for each journal. All general practice departments completed the survey. The total reference list included 44 journals. This list was heterogeneous in terms of indexation/impact factor, language of publication, and scope (e.g., patient care, research, or medical education). Among the first 10 journals listed, La Revue Prescrire (96.6%), La Revue du Praticien-Médecine Générale (90.9%), the British Medical Journal (85.0%), Pédagogie Médicale (70.0%), Exercer (69.7%), and the Cochrane Database of Systematic Reviews (62.5%) had the highest adequacy ratios, whereas Family Practice (4.2%), the British Journal of General Practice (16.7%), Médecine (29.4%), and the European Journal of General Practice (33.3%) had the lowest adequacy ratios. General practice departments have heterogeneous expectations in terms of health sciences libraries' subscriptions to journals. It is important for librarians to understand the heterogeneity of these expectations, as well as local priorities, so that journal access meets users' needs.

  10. 11 CFR 100.111 - Gift, subscription, loan, advance or deposit of money.

    Science.gov (United States)

    2010-01-01

    ... 11 Federal Elections 1 2010-01-01 2010-01-01 false Gift, subscription, loan, advance or deposit of... DEFINITIONS (2 U.S.C. 431) Definition of Expenditure (2 U.S.C. 431(9)) § 100.111 Gift, subscription, loan, advance or deposit of money. (a) A purchase, payment, distribution, loan (except for a loan made in...

  11. Data-Driven Transition: Joint Reporting of Subscription Expenditure and Publication Costs

    Directory of Open Access Journals (Sweden)

    Irene Barbers

    2018-04-01

    Full Text Available The transition process from the subscription model to the open access model in the world of scholarly publishing brings a variety of challenges to libraries. Within this evolving landscape, the present article takes a focus on budget control for both subscription and publication expenditure with the opportunity to enable the shift from one to the other. To reach informed decisions with a solid base of data to be used in negotiations with publishers, the diverse already-existing systems for managing publications costs and for managing journal subscriptions have to be adapted to allow comprehensive reporting on publication expenditure and subscription expenditure. In the case presented here, two separate systems are described and the establishment of joint reporting covering both these systems is introduced. Some of the results of joint reporting are presented as an example of how such a comprehensive monitoring can support management decisions and negotiations. On a larger scale, the establishment of the National Open Access Monitor in Germany is introduced, bringing together a diverse range of data from several already-existing systems, including, among others, holdings information, usage data, and data on publication fees. This system will enable libraries to access all relevant data with a single user interface.

  12. Global and regional emissions estimates of 1,1-difluoroethane (HFC-152a, CH[subscript 3]CHF[subscript 2]) from in situ and air archive observations

    OpenAIRE

    Prinn, Ronald G.

    2015-01-01

    High frequency, in situ observations from 11 globally distributed sites for the period 1994–2014 and archived air measurements dating from 1978 onward have been used to determine the global growth rate of 1,1-difluoroethane (HFC-152a, CH[subscript 3]CHF[subscript 2]). These observations have been combined with a range of atmospheric transport models to derive global emission estimates in a top-down approach. HFC-152a is a greenhouse gas with a short atmospheric lifetime of about 1.5 years. Si...

  13. A performance evaluation of in-memory databases

    Directory of Open Access Journals (Sweden)

    Abdullah Talha Kabakus

    2017-10-01

    Full Text Available The popularity of NoSQL databases has increased due to the need for (1) processing vast amounts of data faster than relational database management systems by taking advantage of highly scalable architecture, (2) flexible (schema-free) data structures, and (3) low latency and high performance. Although memory usage is not a major criterion for evaluating the performance of algorithms, since these databases serve data from memory, their memory usage is also measured alongside the time taken to complete each operation, to reveal which one uses memory most efficiently. Currently there exist over 225 NoSQL databases that provide different features and characteristics, so it is necessary to reveal which one provides better performance for different data operations. In this paper, we test widely used in-memory databases to measure their performance in terms of (1) the time taken to complete operations and (2) how efficiently they use memory during operations. As per the results reported in this paper, there is no database that provides the best performance for all data operations. It is also shown that even though an RDBMS stores its data in memory, its overall performance is worse than that of NoSQL databases.
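
A micro-benchmark of the kind described (timing a batch of operations while recording peak memory) can be sketched with the Python standard library. This is an illustrative stand-in using an in-memory SQLite table, not the paper's actual harness or the NoSQL stores it tests:

```python
import sqlite3
import time
import tracemalloc

def benchmark(n=10_000):
    # In-memory database: all reads and writes are served from RAM.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

    tracemalloc.start()
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO kv VALUES (?, ?)",
                     ((str(i), "x" * 32) for i in range(n)))
    insert_s = time.perf_counter() - t0   # time for the insert batch

    t0 = time.perf_counter()
    rows = conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
    read_s = time.perf_counter() - t0     # time for a read
    _, peak = tracemalloc.get_traced_memory()  # peak Python-side allocation
    tracemalloc.stop()
    return insert_s, read_s, rows, peak

insert_s, read_s, rows, peak = benchmark()
print(f"insert: {insert_s:.4f}s  read: {read_s:.4f}s  rows: {rows}  peak: {peak} bytes")
```

Swapping the sqlite3 calls for another store's client yields the per-operation time and memory figures the comparison rests on.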

  14. 47 CFR 73.642 - Subscription TV service.

    Science.gov (United States)

    2010-10-01

    ... expressed or implied, that: (1) Prevents or hinders it from rejecting or refusing any subscription TV..., service may be terminated. (ii) Charges, terms and conditions of service to subscribers must be applied... impositions of different sets of terms and conditions may be applied to subscribers in different...

  15. Synthesis and Characterization of a Perovskite Barium Zirconate (BaZrO[subscript 3]): An Experiment for an Advanced Inorganic Chemistry Laboratory

    Science.gov (United States)

    Thananatthanachon, Todsapon

    2016-01-01

    In this experiment, the students explore the synthesis of a crystalline solid-state material, barium zirconate (BaZrO3) by two different synthetic methods: (a) the wet chemical method using BaCl[subscript 2]·2H[subscript 2]O and ZrOCl[subscript 2]·8H[subscript 2]O as the precursors, and (b) the solid-state reaction from BaCO[subscript 3] and…

  16. mRNA and Protein Levels for GABA[subscript A][alpha]4, [alpha]5, [beta]1 and GABA[subscript B]R1 Receptors are Altered in Brains from Subjects with Autism

    Science.gov (United States)

    Fatemi, S. Hossein; Reutiman, Teri J.; Folsom, Timothy D.; Rooney, Robert J.; Patel, Diven H.; Thuras, Paul D.

    2010-01-01

    We have shown altered expression of gamma-aminobutyric acid A (GABA[subscript A]) and gamma-aminobutyric acid B (GABA[subscript B]) receptors in the brains of subjects with autism. In the current study, we sought to verify our western blotting data for GABBR1 via qRT-PCR and to expand our previous work to measure mRNA and protein levels of 3…

  17. New types of subscriptions for CERN GSM

    CERN Multimedia

    IT Department

    2010-01-01

    A recent renegotiation of our commercial conditions with our mobile telephony operator allows us today to deploy new GSM mobile services, reduce communication costs, as well as put in place a new subscription system. First of all, the "email to SMS" service has already been extended to all Swiss numbers. This service allows you to send SMS messages (Short Message Service) to any Swiss mobile telephone from your CERN e-mail account. For further details, please refer to the web site http://cern.ch/sms. The sending of MMS messages (Multi-media Message Service) will be activated by default on all CERN subscriptions by the end of March 2010. This service allows users to attach to a text message an image, a video or an audio recording. All the necessary details for configuring this new service on CERN mobile phones will be published on the web site http://cern.ch/mms. Concerning mobile service costs, new rates have been put in place since 1st January 2010. All tariffs have dramatically decrea...

  18. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
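
The abstract does not publish RegMap's actual SQL, but the standard interval-overlap predicate such a query builds on (two regions overlap when they share a chromosome and each starts before the other ends) can be sketched as a hypothetical join; all table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tf_sites (chrom TEXT, start INT, stop INT, name TEXT);
    CREATE TABLE histone  (chrom TEXT, start INT, stop INT, mark TEXT);
""")
conn.executemany("INSERT INTO tf_sites VALUES (?, ?, ?, ?)",
                 [("chr1", 100, 200, "HNF4G"), ("chr1", 500, 600, "HNF4G")])
conn.executemany("INSERT INTO histone VALUES (?, ?, ?, ?)",
                 [("chr1", 150, 250, "H3K27ac"), ("chr2", 100, 200, "H3K27ac")])

# Overlap predicate: same chromosome, a.start <= b.stop AND b.start <= a.stop.
# MAX/MIN of the endpoints recover the shared interval itself.
overlaps = conn.execute("""
    SELECT a.name, b.mark, MAX(a.start, b.start), MIN(a.stop, b.stop)
    FROM tf_sites a
    JOIN histone b
      ON a.chrom = b.chrom
     AND a.start <= b.stop
     AND b.start <= a.stop
""").fetchall()
print(overlaps)   # [('HNF4G', 'H3K27ac', 150, 200)]
```

Run pair-wise across many such tables, counting joined rows like these is what a co-location analysis of binding-site datasets amounts to.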

  19. An Experiment Illustrating the Change in Ligand p"K"[subscript a] upon Protein Binding

    Science.gov (United States)

    Chenprakhon, Pirom; Panijpan, Bhinyo; Chaiyen, Pimchai

    2012-01-01

    The modulation of ligand p"K"[subscript a] due to its surrounding environment is a crucial feature that controls many biological phenomena. For example, the shift in the p"K"[subscript a] of substrates or catalytic residues at enzyme active sites upon substrate binding often triggers and controls enzymatic reactions. In this work, we developed an…

  20. Olfactory Bulb [alpha][subscript 2]-Adrenoceptor Activation Promotes Rat Pup Odor-Preference Learning via a cAMP-Independent Mechanism

    Science.gov (United States)

    Shakhawat, Amin MD.; Harley, Carolyn W.; Yuan, Qi

    2012-01-01

    In this study, three lines of evidence suggest a role for [alpha][subscript 2]-adrenoreceptors in rat pup odor-preference learning: olfactory bulb infusions of the [alpha][subscript 2]-antagonist, yohimbine, prevents learning; the [alpha][subscript 2]-agonist, clonidine, paired with odor, induces learning; and subthreshold clonidine paired with…

  1. Drug interaction databases in medical literature

    DEFF Research Database (Denmark)

    Kongsholm, Gertrud Gansmo; Nielsen, Anna Katrine Toft; Damkier, Per

    2015-01-01

    PURPOSE: It is well documented that drug-drug interaction databases (DIDs) differ substantially with respect to classification of drug-drug interactions (DDIs). The aim of this study was to assess the online available transparency of ownership, funding, information, classifications, staff training, and underlying documentation of open access DIDs and the three most commonly used subscription DIDs in the medical literature. The following parameters were assessed for each of the databases: ownership, classification of interactions, primary information sources, and staff qualification. We compared the overall proportion of yes/no answers from open access and subscription DIDs. Online available transparency of ownership, funding, information, classifications, staff training, and underlying documentation varies substantially among various DIDs. Open access DIDs had a statistically lower score on the parameters assessed.

  2. CERN Library - Reduction of subscriptions to scientific journals

    CERN Multimedia

    2005-01-01

    The Library Working Group for Acquisitions has identified some scientific journal subscriptions as candidates for cancellation. Although the 2005 budget is unchanged with respect to 2004 thanks to the efforts of the Management, it does not take account of inflation, which for many years has been much higher for scientific literature than the normal cost-of-living index. For 2006, the inflation rate is estimated to be 7-8%. Moreover, the Library does not only intend to compensate for the loss of purchasing power but also to make available some funds to promote new Open Access publishing models. (See Bulletin No.15/2005) The list of candidates can be found on the Library homepage (http://library.cern.ch/). In addition, some subscriptions will be converted to online-only, i.e. CERN will no longer order the print version of certain journals. We invite users to carefully check the list (http://library.cern.ch/). Comments on this proposal should be sent to the WGA Chairman, Rudiger Voss, with a copy to the Hea...

  3. Performance Enhancements for Advanced Database Management Systems

    OpenAIRE

    Helmer, Sven

    2000-01-01

    New applications have emerged, demanding database management systems with enhanced functionality. However, high performance is a necessary precondition for the acceptance of such systems by end users. In this context we developed, implemented, and tested algorithms and index structures for improving the performance of advanced database management systems. We focused on index structures and join algorithms for set-valued attributes.

  4. price list 2015.pdf | Subscription | Journals | Resources | public ...

    Indian Academy of Sciences (India)


  5. How Einstein Discovered "E[subscript 0] = mc[squared]"

    Science.gov (United States)

    Hecht, Eugene

    2012-01-01

    This paper traces Einstein's discovery of "the equivalence of mass [m] and energy ["E[subscript 0]"]." He came to that splendid insight in 1905 while employed by the Bern Patent Office, at which time he was not an especially ardent reader of physics journals. How then did the young savant, working outside of academia in semi-isolation, realize…

  6. 29 CFR 1905.7 - Form of documents; subscription; copies.

    Science.gov (United States)

    2010-07-01

    ... UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.7 Form of documents... 29 Labor 5 2010-07-01 2010-07-01 false Form of documents; subscription; copies. 1905.7 Section 1905.7 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION...

  7. Immunoassay for Visualization of Protein-Protein Interactions on Ni-Nitrilotriacetate Support: Example of a Laboratory Exercise with Recombinant Heterotrimeric G[alpha][subscript i2][beta][subscript 1[gamma]2] Tagged by Hexahistidine from sf9 Cells

    Science.gov (United States)

    Bavec, Aljosa

    2004-01-01

    We have developed an "in vitro assay" for following the interaction between the [alpha][subscript i2] subunit and the [beta][subscript 1[gamma]2] dimer from sf9 cells. This method is suitable for educational purposes because it is easy, reliable, and inexpensive, can be used with a large class of 20 students, and avoids the commonly used kinetic approach,…

  8. The Essence Of Government Shares Subscription A Review The Implementation Of State-Owned Enterprises

    Directory of Open Access Journals (Sweden)

    Urbanisasi

    2015-08-01

    Full Text Available The purpose of this study was to determine and explain the mechanism and implementation of government share subscription in state-owned enterprises (SOEs), the legal standing of government share subscription implemented with separated state budget funds, its legal implications as to whether a loss constitutes a state loss, and the legal accountability for losses arising out of share subscription in SOEs. In this study the authors use normative legal research. The data obtained in this study were analyzed using a qualitative normative method with inductive logic. The results of the study indicate that state share subscription in the establishment of an SOE or limited company draws on funds derived from the state budget that are then separated from it. Thus the government no longer has any authority in the field of civil law as a business entity. A clear separation of the state's status as a business actor and as the organizer of government carries consequences. With this separation there is clarity about the concept of state financial losses. An SOE, as a form of business entity that aims to make a profit, is a separate legal entity with separate responsibilities, even though it is formed with capital originating from state finances, and a loss on a transaction within the legal entity cannot be categorized as a state financial loss, because the state there functions as a private legal entity.

  9. GRID[subscript C] Renewable Energy Data Streaming into Classrooms

    Science.gov (United States)

    DeLuca, V. William; Carpenter, Pam; Lari, Nasim

    2010-01-01

    For years, researchers have shown the value of using real-world data to enhance instruction in mathematics, science, and social studies. In an effort to help develop students' higher-order thinking skills in a data-rich learning environment, Green Research for Incorporating Data in the Classroom (GRID[subscript C]), a National Science…

  10. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  11. Killeen's (2005) "p[subscript rep]" Coefficient: Logical and Mathematical Problems

    Science.gov (United States)

    Maraun, Michael; Gabriel, Stephanie

    2010-01-01

    In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of "p[subscript obs]"-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…

  12. A trending database for human performance events

    International Nuclear Information System (INIS)

    Harrison, D.

    1993-01-01

    An effective operations experience program includes a standardized methodology for the investigation of unplanned events and a tool capable of retaining investigation data for the purpose of trending analysis. A database used in conjunction with a formalized investigation procedure for the purpose of trending unplanned event data is described. The database follows the structure of INPO's Human Performance Enhancement System (HPES) for investigations. The database's entry screens duplicate the HPES evaluation forms on-line. All information pertaining to investigations is collected, retained and entered into the database using these forms. The database will be used for trending analysis to determine whether any significant patterns exist, for tracking progress over time both within AECL and against industry standards, and for evaluating the success of corrective actions. Trending information will be used to help prevent similar occurrences.

  13. Performance assessment of EMR systems based on post-relational database.

    Science.gov (United States)

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.

  14. Blockade of IP[subscript 3]-Mediated SK Channel Signaling in the Rat Medial Prefrontal Cortex Improves Spatial Working Memory

    Science.gov (United States)

    Brennan, Avis R.; Dolinsky, Beth; Vu, Mai-Anh T.; Stanley, Marion; Yeckel, Mark F.; Arnsten, Amy F. T.

    2008-01-01

    Planning and directing thought and behavior require the working memory (WM) functions of prefrontal cortex. WM is compromised by stress, which activates phosphatidylinositol (PI)-mediated IP[subscript 3]-PKC intracellular signaling. PKC overactivation impairs WM operations and in vitro studies indicate that IP[subscript 3] receptor (IP[subscript…

  15. Extending the Mertonian Norms: Scientists' Subscription to Norms of Research

    Science.gov (United States)

    Anderson, Melissa S.; Ronning, Emily A.; De Vries, Raymond; Martinson, Brian C.

    2010-01-01

    This analysis, based on focus groups and a national survey, assesses scientists' subscription to the Mertonian norms of science and associated counternorms. It also supports extension of these norms to governance (as opposed to administration), as a norm of decision-making, and quality (as opposed to quantity), as an evaluative norm. (Contains 1…

  16. Using a Spreadsheet to Solve the Schrödinger Equations for the Energies of the Ground Electronic State and the Two Lowest Excited States of H[subscript 2]

    Science.gov (United States)

    Ge, Yingbin; Rittenhouse, Robert C.; Buchanan, Jacob C.; Livingston, Benjamin

    2014-01-01

    We have designed an exercise suitable for a lab or project in an undergraduate physical chemistry course that creates a Microsoft Excel spreadsheet to calculate the energy of the S[subscript 0] ground electronic state and the S[subscript 1] and T[subscript 1] excited states of H[subscript 2]. The spreadsheet calculations circumvent the…

  17. Basic database performance tuning - developer's perspective

    CERN Document Server

    Kwiatek, Michal

    2008-01-01

    This lecture discusses selected database performance issues from the developer's point of view: connection overhead, bind variables and SQL injection, making most of the optimizer with up-to-date statistics, reading execution plans. Prior knowledge of SQL is expected.
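
The bind-variable point can be illustrated with a short sketch. Here sqlite3 stands in for the Oracle client the lecture targets, and the table and payload are invented; the principle is identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"   # a classic injection payload

# Unsafe: the payload becomes part of the SQL text, the OR clause is
# parsed as SQL, and every row matches.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: the bind variable (?) treats the payload as plain data, so no row
# matches. Bound statements also let the server parse once and reuse the
# execution plan across calls, which is the performance half of the argument.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe, safe)   # [('alice',)] []
```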

  18. Downregulation of GABA[subscript A] Receptor Protein Subunits [alpha]6, [beta]2, [delta], [epsilon], [gamma]2, [theta], and [rho]2 in Superior Frontal Cortex of Subjects with Autism

    Science.gov (United States)

    Fatemi, S. Hossein; Reutiman, Teri J.; Folsom, Timothy D.; Rustan, Oyvind G.; Rooney, Robert J.; Thuras, Paul D.

    2014-01-01

    We measured protein and mRNA levels for nine gamma-aminobutyric acid A (GABA[subscript A]) receptor subunits in three brain regions (cerebellum, superior frontal cortex, and parietal cortex) in subjects with autism versus matched controls. We observed changes in mRNA for a number of GABA[subscript A] and GABA[subscript B] subunits and overall…

  19. OPERA-a human performance database under simulated emergencies of nuclear power plants

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea

    2007-01-01

    In complex systems such as the nuclear and chemical industries, the importance of human performance-related problems is well recognized. Thus a lot of effort has been spent in this area, and one of the main approaches to unraveling human performance-related problems is the execution of human reliability analysis (HRA). Unfortunately, a lack of prerequisite information has been pointed out as the most critical problem in conducting HRA. From this necessity, the OPERA database, which can provide operators' performance data obtained under simulated emergencies, has been developed. In this study, typical operators' performance data that are available from the OPERA database are briefly explained. After that, in order to ensure the appropriateness of the OPERA database, operators' performance data from the OPERA database are compared with those of other studies and real events. As a result, it is believed that the operators' performance data in the OPERA database are fairly comparable to those of other studies and real events. Therefore it is meaningful to expect that the OPERA database can be used as a serviceable data source for scrutinizing human performance-related problems, including HRA.

  20. [Reading behavior and preferences regarding subscriptions to scientific journals : Results of a survey of members of the German Society for General and Visceral Surgery].

    Science.gov (United States)

    Ronellenfitsch, U; Klinger, C; Buhr, H J; Post, S

    2015-11-01

    The purpose of surgical literature is to publish the latest study results and to provide continuing medical education to readers. For optimal allocation of resources, institutional subscribers, professional societies and scientific publishers require structured data on the reading and subscription preferences of potential readers of surgical literature. The aim was to obtain representative data on the preferences of German general and visceral surgeons regarding reading of and subscription to scientific journals. All members of the German Society for General and Visceral Surgery (DGAV) were invited to participate in a web-based survey. Questions were asked on the affiliation and position of the member, individual journal subscriptions, institutional access to scientific journals, preferences regarding electronic or print articles, and special subscriptions for society members. Answers were descriptively analyzed. A total of 630 out of 4091 (15 %) members participated in the survey, and 73 % of the respondents had at least one individual subscription to a scientific journal. The most frequently subscribed journal was Der Chirurg (47 % of respondents). Institutional access to journals was deemed insufficient by 48 % of respondents, predominantly in primary care hospitals and outpatient clinics. Almost half of the respondents placed enough importance on reading printed versions of articles that they would pay extra fees for them. A group subscription for society members was perceived as advantageous as long as no relevant extra costs were incurred. This structured survey among members of the DGAV provides data on preferences regarding reading of and subscription to scientific journals. Individual subscriptions to journals are still common, possibly due to suboptimal institutional access, particularly at smaller non-academic institutions. In an age of online publications it seems surprising that many respondents place a high value on printed versions. The results are relevant for

  1. Improved Information Retrieval Performance on SQL Database Using Data Adapter

    Science.gov (United States)

    Husni, M.; Djanali, S.; Ciptaningtyas, H. T.; Wicaksana, I. G. N. A.

    2018-02-01

    NoSQL databases, short for Not Only SQL, are increasingly being used as the number of big data applications grows. Most systems still use relational databases (RDBs), but as data volumes increase each year, systems handle big data with NoSQL databases to analyze and access data more quickly. NoSQL emerged as a result of the exponential growth of the internet and the development of web applications. Because the query syntax of a NoSQL database differs from that of an SQL database, moving to NoSQL normally requires code changes in the application. A data adapter allows applications to keep their SQL query syntax unchanged: it provides methods that synchronize SQL databases with NoSQL databases, along with an interface through which the application can run SQL queries. Hence, this research applied a data adapter system to synchronize data between a MySQL database and Apache HBase using a direct access query approach, in which the system allows the application to accept queries while the synchronization process is in progress. Tests performed on the data adapter showed that it can synchronize between the SQL database, MySQL, and the NoSQL database, Apache HBase. The system's memory usage ranged from 40% to 60%, and its processor usage varied from 10% to 90%. The tests also showed better performance for the NoSQL database than for the SQL database.
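    The paper's adapter itself is not reproduced in this record; as a rough illustration of the pattern it describes, the sketch below (all names hypothetical, with Python's stdlib sqlite3 standing in for MySQL and a plain dict standing in for Apache HBase) mirrors every write into a key-value store while the application's SQL stays unchanged.

```python
import sqlite3

class DataAdapter:
    """Hypothetical sketch of the data-adapter idea: the application keeps
    its SQL unchanged, while the adapter mirrors writes into a key-value
    store (a plain dict standing in for a NoSQL backend such as HBase)."""

    def __init__(self):
        self.sql = sqlite3.connect(":memory:")  # stand-in for MySQL
        self.kv = {}                            # stand-in for Apache HBase

    def execute(self, query, params=()):
        # Unchanged SQL from the application runs here.
        cur = self.sql.execute(query, params)
        self.sql.commit()
        return cur.fetchall()

    def put(self, key, value):
        # A single write through the adapter keeps both stores in sync.
        self.execute("INSERT OR REPLACE INTO kvtab VALUES (?, ?)", (key, value))
        self.kv[key] = value

adapter = DataAdapter()
adapter.execute("CREATE TABLE kvtab (k TEXT PRIMARY KEY, v TEXT)")
adapter.put("user:1", "alice")
print(adapter.execute("SELECT v FROM kvtab WHERE k = ?", ("user:1",)))  # [('alice',)]
```

    A real implementation would also have to replay the mirrored writes asynchronously and translate between SQL rows and HBase column families, which is where the "direct access query" approach above does its work.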

  2. Important announcement to INSPEC database users

    CERN Multimedia

    DSU Department

    2008-01-01

    The Library is in the process of transferring CERN’s subscription to the online INSPEC database to a new provider, which involves a new access platform. Users who have saved searches or alerts on the present system should record the details of their search strings as soon as possible whilst the old platform is still available, and manually move them to the new platform which will become available very soon. Access to the older platform will shortly be switched off, after which it will not be possible to access any saved information stored there. Access to the new platform will be available soon from the Library’s INSPEC page: http://library.cern.ch/information_resources/inspec.html

  3. Discrete Optimization of Internal Part Structure via SLM Unit Structure-Performance Database

    Directory of Open Access Journals (Sweden)

    Li Tang

    2018-01-01

    Full Text Available The structural optimization of the internal structure of parts based on three-dimensional (3D) printing has been recognized as important in the field of mechanical design. The purpose of this paper is to present the creation of a unit structure-performance database based on selective laser melting (SLM), which contains various structural units with different functions and records their structure and performance characteristics, so that the internal structure of parts can be optimized directly according to the database. The method of creating the unit structure-performance database is introduced, and several of its structural units are presented. The bow structure unit is used as an example to show how the structure-performance data for a unit are created. Samples of the bow structure unit were designed and manufactured by SLM, then tested in a WDW-100 compression testing machine to obtain their performance characteristics. The collected data on unit structure parameters, weight, and performance characteristics were then assembled into a complete data set for the bow structure unit in the unit structure-performance database. Furthermore, an aircraft part was conveniently reconstructed to be more lightweight according to the unit structure-performance database: its weight was reduced by 36.8% compared with the original structure, while its strength far exceeded the requirements.

  4. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  5. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  6. High performance technique for database applications using a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.

    2012-07-28

    Many database applications, such as sequence comparison, sequence searching, and sequence matching, process large database sequences. We introduce a novel and efficient technique to improve the performance of database applications by using a hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm. The experimental results show that our hybrid GPU/CPU technique improves the average performance by a factor of 2.2, and improves the peak performance by a factor of 2.8 when compared to earlier implementations. Copyright © 2011 by ASME.
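    For readers unfamiliar with the benchmark, a plain CPU reference of the Smith-Waterman local-alignment score (the algorithm the hybrid platform accelerates) looks roughly like the sketch below; the scoring parameters are illustrative defaults, not the paper's.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local-alignment score between sequences a and b
    (Smith-Waterman dynamic programming, with the score floored at zero)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # H[i][j]: best alignment ending at a[:i], b[:j]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "TACGTA"))  # 8: the embedded "ACGT" aligns exactly
```

    The quadratic matrix fill is exactly the part that maps well onto a GPU for long sequences; for short sequences the kernel-launch overhead dominates, which is the inefficiency the hybrid GPU/CPU scheduling addresses.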

  7. Performance analysis of different database in new internet mapping system

    Science.gov (United States)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

    In the Mapping System of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. In order to cope with large volumes of mapping-entry update and query requests, the Mapping System of the New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL, and the results show that a Mapping System based on different databases can adapt to different needs according to the actual situation.
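    The record does not give the paper's benchmark harness; as a minimal sketch of the idea, the snippet below times bulk insert plus point lookups of AID→RID entries against stdlib sqlite3, with an in-process dict standing in for a cache-style store such as Redis (a real Redis or MySQL client is assumed unavailable here).

```python
import sqlite3, time

def bench_sqlite(entries):
    """Time bulk insert plus point lookups of AID->RID pairs in sqlite3."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE map (aid TEXT PRIMARY KEY, rid TEXT)")
    t0 = time.perf_counter()
    db.executemany("INSERT INTO map VALUES (?, ?)", entries)
    db.commit()
    for aid, rid in entries:
        assert db.execute("SELECT rid FROM map WHERE aid = ?", (aid,)).fetchone()[0] == rid
    return time.perf_counter() - t0

def bench_dict(entries):
    """Same workload against an in-process dict (cache-style stand-in)."""
    t0 = time.perf_counter()
    store = dict(entries)
    for aid, rid in entries:
        assert store[aid] == rid
    return time.perf_counter() - t0

entries = [(f"aid-{i}", f"rid-{i}") for i in range(10_000)]
print(f"sqlite3: {bench_sqlite(entries):.4f}s  dict: {bench_dict(entries):.4f}s")
```

    Even this toy harness shows the trade-off the paper measures: durable SQL storage pays per-query parsing and transaction costs that a memory-resident key-value store avoids.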

  8. How Often Is p[subscript rep] Close to the True Replication Probability?

    Science.gov (United States)

    Trafimow, David; MacDonald, Justin A.; Rice, Stephen; Clason, Dennis L.

    2010-01-01

    Largely due to dissatisfaction with the standard null hypothesis significance testing procedure, researchers have begun to consider alternatives. For example, Killeen (2005a) has argued that researchers should calculate p[subscript rep] that is purported to indicate the probability that, if the experiment in question were replicated, the obtained…

  9. Managing XML Data to optimize Performance into Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-06-01

    Full Text Available This paper proposes some possibilities for managing XML data in order to optimize performance in object-relational databases. It details the storage of XML data in such databases, using an Oracle database for exemplification, and tests several techniques for optimizing queries over XMLType tables, such as indexing and partitioning.

  10. 11 CFR 100.52 - Gift, subscription, loan, advance or deposit of money.

    Science.gov (United States)

    2010-01-01

    ... money. 100.52 Section 100.52 Federal Elections FEDERAL ELECTION COMMISSION GENERAL SCOPE AND DEFINITIONS..., advance or deposit of money. (a) A gift, subscription, loan (except for a loan made in accordance with 11 CFR 100.72 and 100.73), advance, or deposit of money or anything of value made by any person for the...

  11. Role of library's subscription licenses in promoting open access to scientific research

    KAUST Repository

    Buck, Stephen

    2018-01-01

    This presentation, based on KAUST's experience to date, will attempt to explain the different ways of bringing open access models to scientific publishers' licenses. Our dual approach with offset pricing is to redirect subscription money to publishing money and to embed green open access deposition terms, in understandable language, in our license agreements. Resolving the inherent complexities in open access publishing, repository depositions, and offsetting models will save libraries money as well as time wasted on tedious and unnecessary administration work. Researchers will also save time through overall clarity and transparency. This will build trust and, where mistakes are made, as there inevitably will be with untried models, we can learn from them and build better, more robust services with automatic deposition of our articles to our repository, fed by the publishers themselves. The plan is to cover all publishers with OA license terms for KAUST authors' rights while continuing our subscriptions to them. Marketing campaigns and awareness sessions are planned, in addition to establishing LibGuides to help researchers and managing offset pricing models.

  12. Role of library's subscription licenses in promoting open access to scientific research

    KAUST Repository

    Buck, Stephen

    2018-04-30

    This presentation, based on KAUST's experience to date, will attempt to explain the different ways of bringing open access models to scientific publishers' licenses. Our dual approach with offset pricing is to redirect subscription money to publishing money and to embed green open access deposition terms, in understandable language, in our license agreements. Resolving the inherent complexities in open access publishing, repository depositions, and offsetting models will save libraries money as well as time wasted on tedious and unnecessary administration work. Researchers will also save time through overall clarity and transparency. This will build trust and, where mistakes are made, as there inevitably will be with untried models, we can learn from them and build better, more robust services with automatic deposition of our articles to our repository, fed by the publishers themselves. The plan is to cover all publishers with OA license terms for KAUST authors' rights while continuing our subscriptions to them. Marketing campaigns and awareness sessions are planned, in addition to establishing LibGuides to help researchers and managing offset pricing models.

  13. Designing a database for performance assessment: Lessons learned from WIPP

    International Nuclear Information System (INIS)

    Martell, M.A.; Schenker, A.

    1997-01-01

    The Waste Isolation Pilot Plant (WIPP) Compliance Certification Application (CCA) Performance Assessment (PA) used a relational database that was originally designed only to supply the input parameters required for implementation of the PA codes. Reviewers used the database as a point of entry to audit quality assurance measures for control, traceability, and retrievability of input information used for analysis, and output/work products. During these audits it became apparent that modifications to the architecture and scope of the database would benefit the EPA regulator and other stakeholders when reviewing the recertification application. This paper contains a discussion of the WIPP PA CCA database and the lessons learned for designing such a database

  14. Does SDDS Subscription Reduce Borrowing Costs for Emerging Market Economies?

    OpenAIRE

    John Cady

    2005-01-01

    Does macroeconomic data transparency-as signaled by subscription to the IMF's Special Data Dissemination Standard (SDDS)-help reduce borrowing costs in international capital markets? This question is examined using data on new issues of sovereign foreign-currency-denominated (U.S. dollar, yen, and euro) bonds for several emerging market economies. Panel econometric estimates indicate that spreads on new bond issues declined on average by close to 20 percent, or by an average of about 55 basis...

  15. Data Preparation Process for the Buildings Performance Database

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Dunn, Laurel; Mercado, Andrea; Brown, Richard E.; Mathew, Paul

    2014-06-30

    The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. The data preparation process takes the greatest amount of effort and time; therefore, most of the cleansing process has been automated. The process also needs to adapt as more data are contributed to the BPD and as building technologies change over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
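    The BEDES mapping rules themselves are not reproduced in this record; purely as an illustration of the map-cleanse-derive pipeline described above (field names and the rejection rule are hypothetical, not BPD's), a sketch might look like this:

```python
# Hypothetical sketch of a map -> cleanse -> derive pipeline for building records.
FIELD_MAP = {"sqft": "floor_area", "elec_kwh": "electricity", "gas_kwh": "gas"}

def prepare(raw_records):
    prepared = []
    for raw in raw_records:
        # Map: rename source fields into a common schema.
        rec = {FIELD_MAP.get(k, k): v for k, v in raw.items()}
        # Cleanse: drop entries without enough data for meaningful analysis.
        if rec.get("floor_area", 0) <= 0:
            continue
        # Derive: energy total and energy-use intensity per unit floor area.
        rec["total_energy"] = rec.get("electricity", 0) + rec.get("gas", 0)
        rec["eui"] = rec["total_energy"] / rec["floor_area"]
        prepared.append(rec)
    return prepared

rows = prepare([
    {"sqft": 1000, "elec_kwh": 5000, "gas_kwh": 3000},
    {"sqft": 0, "elec_kwh": 100},   # rejected: no usable floor area
])
print(rows[0]["eui"])  # 8.0 kWh per square foot
```

    Automating exactly these kinds of rules is what lets the preparation step keep pace as new providers contribute data.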

  16. Sorption, Diffusion and Solubility Databases for Performance Assessment

    International Nuclear Information System (INIS)

    Garcia Gutierrez, M.

    2000-01-01

    This report presents deterministic and probabilistic databases for application in the performance assessment of a high-level radioactive waste disposal facility. The work includes a theoretical description of the sorption, diffusion, and solubility phenomena of radionuclides in geological media. The report presents and compares the databases of different nuclear waste management agencies, describes the materials in the Spanish reference system, and gives the results for sorption, diffusion, and solubility in this system, with both deterministic and probabilistic approximations. The probabilistic approximation is presented in the form of probability density functions (pdf). (Author) 52 refs

  17. Development of comprehensive material performance database for nuclear applications

    International Nuclear Information System (INIS)

    Tsuji, Hirokazu; Yokoyama, Norio; Tsukada, Takashi; Nakajima, Hajime

    1993-01-01

    This paper introduces the present status of the comprehensive material performance database for nuclear applications, which was named JAERI Material Performance Database (JMPD), and examples of its utilization. The JMPD has been developed since 1986 in JAERI with a view to utilizing various kinds of characteristics data of nuclear materials efficiently. Management system of relational database, PLANNER, was employed, and supporting systems for data retrieval and output were expanded. In order to improve user-friendliness of the retrieval system, the menu selection type procedures have been developed where knowledge of the system or the data structures are not required for end-users. As to utilization of the JMPD, two types of data analyses are mentioned as follows: (1) A series of statistical analyses was performed in order to estimate the design values both of the yield strength (Sy) and the tensile strength (Su) for aluminum alloys which are widely used as structural materials for research reactors. (2) Statistical analyses were accomplished by using the cyclic crack growth rate data for nuclear pressure vessel steels, and comparisons were made on variability and/or reproducibility of the data between obtained by ΔK-increasing and ΔK-constant type tests. (author)

  18. Comparison of Cloud backup performance and costs in Oracle database

    OpenAIRE

    Aljaž Zrnec; Dejan Lavbič

    2011-01-01

    Current practice of backing up data is based on using backup tapes and remote locations for storing data. Nowadays, with the advent of cloud computing a new concept of database backup emerges. The paper presents the possibility of making backup copies of data in the cloud. We are mainly focused on performance and economic issues of making backups in the cloud in comparison to traditional backups. We tested the performance and overall costs of making backup copies of data in Oracle database u...

  19. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  20. Performance Evaluation of Cloud Database and Traditional Database in terms of Response Time while Retrieving the Data

    OpenAIRE

    Donkena, Kaushik; Gannamani, Subbarayudu

    2012-01-01

    Context: There has been an exponential growth in the size of databases in recent times and the same amount of growth is expected in the future. There has been a firm drop in the storage cost followed by a rapid increase in the storage capacity. The entry of Cloud in recent times has changed the equations. The performance of the database plays a vital role in the competition. In this research, an attempt has been made to evaluate and compare the performance of the traditional data...

  1. A database for human performance under simulated emergencies of nuclear power plants

    International Nuclear Information System (INIS)

    Park, Jin Kyun; Jung, Won Dea

    2005-01-01

    Reliable human performance is a prerequisite in securing the safety of complicated process systems such as nuclear power plants. However, the amount of available knowledge that can explain why operators deviate from an expected performance level is so small because of the infrequency of real accidents. Therefore, in this study, a database that contains a set of useful information extracted from simulated emergencies was developed in order to provide important clues for understanding the change of operators' performance under stressful conditions (i.e., real accidents). The database was developed under the Microsoft Windows™ environment using Microsoft Access 97™ and Microsoft Visual Basic 6.0™. In the database, operators' performance data obtained from the analysis of over 100 audio-visual records for simulated emergencies were stored using twenty kinds of distinctive data fields. A total of ten kinds of operators' performance data are available from the developed database. Although it is still difficult to predict operators' performance under stressful conditions based on the results of simulated emergencies, simulation studies remain the most feasible way to scrutinize performance. Accordingly, it is expected that the performance data of this study will provide a concrete foundation for understanding the change of operators' performance in emergency situations

  2. The L-Type Voltage-Gated Calcium Channel Ca [subscript V] 1.2 Mediates Fear Extinction and Modulates Synaptic Tone in the Lateral Amygdala

    Science.gov (United States)

    Temme, Stephanie J.; Murphy, Geoffrey G.

    2017-01-01

    L-type voltage-gated calcium channels (LVGCCs) have been implicated in both the formation and the reduction of fear through Pavlovian fear conditioning and extinction. Despite the implication of LVGCCs in fear learning and extinction, studies of the individual LVGCC subtypes, Ca[subscript V]1.2 and Ca[subscript V] 1.3, using transgenic mice have…

  3. Operational Experience of an Open-Access, Subscription-Based Mass Spectrometry and Proteomics Facility

    Science.gov (United States)

    Williamson, Nicholas A.

    2018-03-01

    This paper discusses the successful adoption of a subscription-based, open-access model of service delivery for a mass spectrometry and proteomics facility. In 2009, the Mass Spectrometry and Proteomics Facility at the University of Melbourne (Australia) moved away from the standard fee for service model of service provision. Instead, the facility adopted a subscription- or membership-based, open-access model of service delivery. For a low fixed yearly cost, users could directly operate the instrumentation but, more importantly, there were no limits on usage other than the necessity to share available instrument time with all other users. All necessary training from platform staff and many of the base reagents were also provided as part of the membership cost. These changes proved to be very successful in terms of financial outcomes for the facility, instrument access and usage, and overall research output. This article describes the systems put in place as well as the overall successes and challenges associated with the operation of a mass spectrometry/proteomics core in this manner.

  4. 31 CFR 344.8 - What other provisions apply to subscriptions for Demand Deposit securities?

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What other provisions apply to subscriptions for Demand Deposit securities? 344.8 Section 344.8 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...

  5. GUC100 multisensor fingerprint database for in-house (semipublic) performance test

    OpenAIRE

    Gafurov D.; Bours P.; Yang B.; Busch C.

    2010-01-01

    For the evaluation of the biometric performance of biometric components and systems, the availability of independent databases and, desirably, independent evaluators is important. Databases of significant size and independent testing institutions both provide the precondition for fair and unbiased benchmarking. In order to show the generalization capabilities of the system under test, it is essential that algorithm developers do not have access to the testing database, and thus the risk of tuned algorithms...

  6. Comparison of Cloud backup performance and costs in Oracle database

    Directory of Open Access Journals (Sweden)

    Aljaž Zrnec

    2011-06-01

    Full Text Available Current practice of backing up data is based on using backup tapes and remote locations for storing data. Nowadays, with the advent of cloud computing, a new concept of database backup emerges. The paper presents the possibility of making backup copies of data in the cloud. We are mainly focused on performance and economic issues of making backups in the cloud in comparison to traditional backups. We tested the performance and overall costs of making backup copies of data in an Oracle database using Amazon S3 and EC2 cloud services. The costs estimation was performed on the basis of the prices published on the Amazon S3 and Amazon EC2 sites.
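    Amazon's actual tariffs vary by region and change over time, so the study's figures are not reproducible from this record; as a toy illustration of the kind of estimate the paper performs (all prices below are made-up placeholders, not Amazon's), a monthly backup-cost estimate reduces to simple arithmetic:

```python
def monthly_backup_cost(backup_gb, transfer_gb, storage_price=0.10,
                        transfer_price=0.09, compute_hours=0, hour_price=0.10):
    """Estimate a monthly cloud backup bill; all unit prices are illustrative."""
    return (backup_gb * storage_price        # keeping the copies in object storage
            + transfer_gb * transfer_price   # shipping new backup data over the network
            + compute_hours * hour_price)    # any VM time used to run the backup job

# 500 GB retained, 50 GB of new backup traffic per month:
print(monthly_backup_cost(500, 50))  # 54.5
```

    A tape-based estimate would swap the per-GB storage and transfer terms for media, courier, and off-site storage line items, which is the comparison the paper carries out.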

  7. Rethinking the Subscription Paradigm for Journals: Using Interlibrary Loan in Collection Development for Serials

    Science.gov (United States)

    Barton, Gail Perkins; Relyea, George E.; Knowlton, Steven A.

    2018-01-01

    Many librarians evaluate local Interlibrary Loan (ILL) statistics as part of collection development decisions concerning new subscriptions. In this study, the authors examine whether the number of ILL article requests received in one academic year can predict the use of those same journal titles once they are added as library resources. There is…

  8. Non-Exercise Estimation of VO[subscript 2]max Using the International Physical Activity Questionnaire

    Science.gov (United States)

    Schembre, Susan M.; Riebe, Deborah A.

    2011-01-01

    Non-exercise equations developed from self-reported physical activity can estimate maximal oxygen uptake (VO[subscript 2]max) as well as sub-maximal exercise testing. The International Physical Activity Questionnaire is the most widely used and validated self-report measure of physical activity. This study aimed to develop and test a VO[subscript…

  9. Performance Evaluation of a Database System in a Multiple Backend Configurations,

    Science.gov (United States)

    1984-10-01

    leaving a system process, the internal performance measurements of MMSD have been carried out. Methodologies for constructing test databases...access directory data via the AT, EDIT, and CDT. In designing the test database, one of the key concepts is the choice of the directory attributes in...internal timing. These requests are selected since they retrieve the smallest portion of the test database and the processing time for each request is

  10. A high performance, ad-hoc, fuzzy query processing system for relational databases

    Science.gov (United States)

    Mansfield, William H., Jr.; Fleischman, Robert M.

    1992-01-01

    Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
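    A fuzzy predicate of the kind described can be illustrated with an ad-hoc membership function applied during an exhaustive scan. The trapezoidal shape and the salary example below are assumptions for illustration, not the Datacycle prototype's actual implementation.

```python
# Sketch of an ad-hoc fuzzy membership function used as a query predicate.
# The function shape, field, and cutoff are illustrative assumptions.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Fuzzy predicate "salary is about 50k" evaluated against every record,
# as in an exhaustive filtering scan:
records = [38000, 47000, 50000, 55000, 72000]
matches = [(r, trapezoid(r, 40000, 45000, 55000, 60000)) for r in records]
# Keep records whose membership degree exceeds a cutoff, ranked by degree:
result = sorted((m for m in matches if m[1] > 0.5), key=lambda m: -m[1])
```

    Because the membership function is an ordinary function of the record's fields, no index structure is needed; this is the property the Datacycle filtering approach exploits.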

  11. Assessing availability of scientific journals, databases, and health library services in Canadian health ministries: a cross-sectional study.

    Science.gov (United States)

    Léon, Grégory; Ouimet, Mathieu; Lavis, John N; Grimshaw, Jeremy; Gagnon, Marie-Pierre

    2013-03-21

    Evidence-informed health policymaking logically depends on timely access to research evidence. To our knowledge, despite the substantial political and societal pressure to enhance the use of the best available research evidence in public health policy and program decision making, there is no study addressing availability of peer-reviewed research in Canadian health ministries. To assess availability of (1) a purposive sample of high-ranking scientific journals, (2) bibliographic databases, and (3) health library services in the fourteen Canadian health ministries. From May to October 2011, we conducted a cross-sectional survey among librarians employed by Canadian health ministries to collect information relative to availability of scientific journals, bibliographic databases, and health library services. Availability of scientific journals in each ministry was determined using a sample of 48 journals selected from the 2009 Journal Citation Reports (Sciences and Social Sciences Editions). Selection criteria were: relevance for health policy based on scope note information about subject categories and journal popularity based on impact factors. We found that the majority of Canadian health ministries did not have subscription access to key journals and relied heavily on interlibrary loans. Overall, based on a sample of high-ranking scientific journals, availability of journals through interlibrary loans, online and print-only subscriptions was estimated at 63%, 28% and 3%, respectively. Health Canada had a 2.3-fold higher number of journal subscriptions than that of the provincial ministries' average. Most of the organisations provided access to numerous discipline-specific and multidisciplinary databases. Many organisations provided access to the library resources described through library partnerships or consortia. No professionally led health library environment was found in four out of fourteen Canadian health ministries (i.e. Manitoba Health, Northwest

  12. Understanding, modeling, and improving main-memory database performance

    OpenAIRE

    Manegold, S.

    2002-01-01

    During the last two decades, computer hardware has experienced remarkable developments. Especially CPU (clock-)speed has been following Moore's Law, i.e., doubling every 18 months; and there is no indication that this trend will change in the foreseeable future. Recent research has revealed that database performance, even with main-memory based systems, can hardly benefit from the ever increasing CPU power. The reason for this is that the performance of other hardware components h...

  13. Magazine "Companion Websites" and the Demand for Newsstand Sales and Subscriptions

    DEFF Research Database (Denmark)

    Kaiser, Ulrich; Kongsted, H.C.

    2012-01-01

    analysis finds some support for the widespread belief that the Internet cannibalizes print media. On average, a 1% increase in companion website traffic is associated with a weakly significant decrease in total print circulation by 0.15%. This association is mainly driven by a statistically significant...... and negative mapping between website visits and kiosk sales, although they do not find any statistically significant relationship between website visits and subscriptions. The latter finding is reassuring for publishers because advertisers value a large subscriber base. Moreover, the authors show...

  14. Peer review quality and transparency of the peer-review process in open access and subscription journals

    NARCIS (Netherlands)

    Wicherts, J.M.

    2016-01-01

    Background Recent controversies highlighting substandard peer review in Open Access (OA) and traditional (subscription) journals have increased the need for authors, funders, publishers, and institutions to assure quality of peer-review in academic journals. I propose that transparency of the

  15. Oracle database 12c release 2 in-memory tips and techniques for maximum performance

    CERN Document Server

    Banerjee, Joyjeet

    2017-01-01

    This Oracle Press guide shows, step-by-step, how to optimize database performance and cut transaction processing time using Oracle Database 12c Release 2 In-Memory. Oracle Database 12c Release 2 In-Memory: Tips and Techniques for Maximum Performance features hands-on instructions, best practices, and expert tips from an Oracle enterprise architect. You will learn how to deploy the software, use In-Memory Advisor, build queries, and interoperate with Oracle RAC and Multitenant. A complete chapter of case studies illustrates real-world applications. • Configure Oracle Database 12c and construct In-Memory enabled databases • Edit and control In-Memory options from the graphical interface • Implement In-Memory with Oracle Real Application Clusters • Use the In-Memory Advisor to determine what objects to keep In-Memory • Optimize In-Memory queries using groups, expressions, and aggregations • Maximize performance using Oracle Exadata Database Machine and In-Memory option • Use Swingbench to create d...

  16. Piracy versus Netflix : Subscription Video on Demand Dissatisfaction as an Antecedent of Piracy

    OpenAIRE

    Riekkinen, Janne

    2018-01-01

    Drawing from cognitive dissonance and neutralization theories, this study seeks to improve the understanding on consumer decision-making between the current legal and illegal video consumption alternatives. We develop and test a research model featuring Subscription Video on Demand (SVOD) satisfaction and various dimensions of SVOD quality as antecedents of video piracy neutralizations and attitudes. Based on results from an online survey among Finnish SVOD users, SVOD satisfaction is primari...

  17. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users; a central cache of the data is required to improve performance.
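    The performance argument can be sketched with a back-of-envelope model: a federated query fanned out in parallel must wait for the slowest member database, while a cached search pays only the local lookup cost. The latencies below are invented for illustration, not measured GISIN figures.

```python
# Toy latency model: federated fan-out vs. central cache.
# All numbers are illustrative assumptions, not measurements.

remote_latencies_s = [0.4, 1.2, 0.8, 5.0, 0.6]  # five member databases
cache_lookup_s = 0.05                            # local cache query

federated_response = max(remote_latencies_s)  # parallel fan-out: slowest wins
cached_response = cache_lookup_s              # independent of remote sources

speedup = federated_response / cached_response
```

    The model also shows the flexibility point: the cached response is unaffected when one member database is slow or offline, whereas the federated response degrades with its weakest member.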

  18. Integral Representation of the Pictorial Proof of Sum of [superscript n][subscript k=1]k[superscript 2] = 1/6n(n+1)(2n+1)

    Science.gov (United States)

    Kobayashi, Yukio

    2011-01-01

    The pictorial proof of the sum of [superscript n][subscript k=1] k[superscript 2] = 1/6n(n+1)(2n+1) is represented in the form of an integral. The integral representations are also applicable to the sum of [superscript n][subscript k=1] k[superscript m] (m greater than or equal to 3). These representations reveal that the sum of [superscript…

  19. Database on Performance of Neutron Irradiated FeCrAl Alloys

    Energy Technology Data Exchange (ETDEWEB)

    Field, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Briggs, Samuel A. [Univ. of Wisconsin, Madison, WI (United States); Littrell, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Parish, Chad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yamamoto, Yukinori [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    The present report summarizes and discusses the database on radiation tolerance for Generation I, Generation II, and commercial FeCrAl alloys. This database has been built upon mechanical testing and microstructural characterization on selected alloys irradiated within the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) up to doses of 13.8 dpa at temperatures ranging from 200°C to 550°C. The structure and performance of these irradiated alloys were characterized using advanced microstructural characterization techniques and mechanical testing. The primary objective of developing this database is to enhance the rapid development of a mechanistic understanding on the radiation tolerance of FeCrAl alloys, thereby enabling informed decisions on the optimization of composition and microstructure of FeCrAl alloys for application as an accident tolerant fuel (ATF) cladding. This report is structured to provide a brief summary of critical results related to the database on radiation tolerance of FeCrAl alloys.

  20. Dissonance and Neutralization of Subscription Streaming Era Digital Music Piracy : An Initial Exploration

    OpenAIRE

    Riekkinen, Janne

    2016-01-01

    Both legal and illegal forms of digital music consumption continue to evolve with wider adoption of subscription streaming services. With this paper, we aim to extend theory on digital music piracy by showing that the rising controversy and diminishing acceptance of illegal forms of consumption call for new theoretical components and interactions. We introduce a model that integrates insights from neutralization and cognitive dissonance theories. As an initial empirical test of th...

  1. Nanosized TiO[subscript 2] for Photocatalytic Water Splitting Studied by Oxygen Sensor and Data Logger

    Science.gov (United States)

    Zhang, Ruinan; Liu, Song; Yuan, Hongyan; Xiao, Dan; Choi, Martin M. F.

    2012-01-01

    Photocatalytic water splitting by semiconductor photocatalysts has attracted considerable attention in the past few decades. In this experiment, nanosized titanium dioxide (nano-TiO[subscript 2]) particles are used to photocatalytically split water, which is then monitored by an oxygen sensor. Sacrificial reagents such as organics (EDTA) and metal…

  2. Compilation and comparison of radionuclide sorption databases used in recent performance assessments

    International Nuclear Information System (INIS)

    McKinley, I.G.; Scholtis, A.

    1992-01-01

    The aim of this paper is to review the radionuclide sorption databases which have been used in performance assessments published within the last decade. It was hoped that such a review would allow areas of consistency to be identified, possibly indicating nuclide/rock/water systems which are now well characterised. Inconsistencies, on the other hand, might indicate areas in which further work is required. This study followed on from a prior review of the various databases which had been used in Swiss performance assessments. The latter was, however, considerably simplified by the fact that the authors had been heavily involved in sorption database definition for these assessments. The first phase of the current study was based entirely on available literature and it was quickly evident that the analyses would be much more complex (and time consuming) than initially envisaged. While some assessments clearly list all sorption data used, others depend on secondary literature (which may or may not be clearly referenced) or present sorption data which has been transmogrified into another form (e.g. into a retardation factor; cf. the following section). This study focused on databases used (or intended for use) in performance assessments published within the last 10 years or so. 45 refs., 12 tabs., 1 fig
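    On the transformation the authors mention: sorption distribution coefficients are commonly folded into a retardation factor via the standard relation R = 1 + (ρ_b/θ)·K_d for saturated porous media. A minimal sketch, with illustrative values only:

```python
# Standard Kd-to-retardation-factor conversion for saturated porous media:
#   R = 1 + (bulk density / porosity) * Kd
# The parameter values below are illustrative, not from any assessment database.

def retardation_factor(kd_m3_per_kg, bulk_density_kg_m3, porosity):
    """R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density_kg_m3 / porosity) * kd_m3_per_kg

# Example: Kd = 1e-3 m^3/kg, bulk density 2000 kg/m^3, porosity 0.25
R = retardation_factor(1e-3, 2000.0, 0.25)
```

    This dependence on assumed density and porosity is precisely why retardation factors quoted in secondary literature cannot be compared directly with the underlying Kd values.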

  3. Quality Assurance in Individual Monitoring: 10 Years of Performance Monitoring of the TLD Based TNO Individual Monitoring Service (invited paper)

    Energy Technology Data Exchange (ETDEWEB)

    Dijk, J.W.E. van

    1998-07-01

    The QA subscription forms the nucleus of the Quality Assurance (QA) programme of the TLD-based Individual Monitoring Service of TNO-CSD. This QA subscription is the subscription of a dummy customer to the service. As this customer is treated exactly like a normal customer, all aspects of the service are monitored by the QA subscription. An overview is given of 10 years of monitoring the performance of the service. Various improvements over the past decade have resulted in a standard deviation in a low dose measurement of 0.01 mSv and a relative standard deviation at higher doses of 5%. These figures represent the performance under routine circumstances and thus include variations due to variations in the natural background from place to place and, for example, due to transport. (author)

  4. Quality Assurance in Individual Monitoring: 10 Years of Performance Monitoring of the TLD Based TNO Individual Monitoring Service (invited paper)

    International Nuclear Information System (INIS)

    Dijk, J.W.E. van

    1998-01-01

    The QA subscription forms the nucleus of the Quality Assurance (QA) programme of the TLD-based Individual Monitoring Service of TNO-CSD. This QA subscription is the subscription of a dummy customer to the service. As this customer is treated exactly like a normal customer, all aspects of the service are monitored by the QA subscription. An overview is given of 10 years of monitoring the performance of the service. Various improvements over the past decade have resulted in a standard deviation in a low dose measurement of 0.01 mSv and a relative standard deviation at higher doses of 5%. These figures represent the performance under routine circumstances and thus include variations due to variations in the natural background from place to place and, for example, due to transport. (author)

  5. JAERI Material Performance Database (JMPD); outline of the system

    International Nuclear Information System (INIS)

    Yokoyama, Norio; Tsukada, Takashi; Nakajima, Hajime.

    1991-01-01

    JAERI Material Performance Database (JMPD) has been developed since 1986 at JAERI with a view to utilizing the various kinds of characteristic data of nuclear materials efficiently. The relational database management system PLANNER was employed, and supporting systems for data retrieval and output were expanded. JMPD currently serves the following data: (1) data yielded from the research activities of JAERI, including fatigue crack growth data of LWR pressure vessel materials as well as creep and fatigue data of Hastelloy XR, the alloy developed for the High Temperature Gas-cooled Reactor (HTGR); (2) data on environmentally assisted cracking of LWR materials arranged by the Electric Power Research Institute (EPRI), including fatigue crack growth data (3000 tests), stress corrosion data (500 tests) and Slow Strain Rate Technique (SSRT) data (1000 tests). In order to improve the user-friendliness of the retrieval system, menu-selection procedures have been developed so that end-users need no knowledge of the system and data structures. In addition, retrieval via database commands in Structured Query Language (SQL) is supported by the relational database management system. In JMPD the retrieved data can be processed readily through supporting systems for graphical and statistical analyses. The present report outlines JMPD and describes procedures for data retrieval and analyses utilizing JMPD. (author)
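    The kind of SQL retrieval described might look like the following sketch. The table and column names are invented for illustration (and sqlite3 stands in for PLANNER); they are not JMPD's actual schema.

```python
# Hypothetical sketch of SQL retrieval over a materials-property table.
# Table, columns, and values are invented for illustration only.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE fatigue_crack_growth (
                 material TEXT, environment TEXT, delta_k REAL, da_dn REAL)""")
con.executemany("INSERT INTO fatigue_crack_growth VALUES (?, ?, ?, ?)",
                [("Hastelloy XR", "air", 20.0, 1.2e-8),
                 ("Hastelloy XR", "helium", 20.0, 0.9e-8),
                 ("A533B", "water", 20.0, 3.1e-8)])
# A menu-driven front end would generate a query like this on the user's behalf:
rows = con.execute("""SELECT material, da_dn FROM fatigue_crack_growth
                      WHERE environment = 'air'""").fetchall()
```

    The point of the menu-selection layer is that end-users never write such SQL themselves; the system composes it from their selections.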

  6. Bringing together the work of subscription and open access specialists: challenges and changes at the University of Sussex

    Directory of Open Access Journals (Sweden)

    Eleanor Craig

    2017-03-01

    Full Text Available The rise in open access (OA) publishing has required library staff across many UK academic institutions to take on new roles and responsibilities to support academics. At the same time, the long-established work of negotiating with publishers around journal subscriptions is changing as such deals now usually include OA payment or discount plans in many different forms that vary from publisher to publisher. This article outlines some of the issues we encountered at the University of Sussex Library whilst trying to pull together the newer strand of OA advocacy and funder compliance work with existing responsibilities for managing subscription deals. It considers the challenges faced in effectively bringing together Library staff with knowledge in these areas, and outlines the steps we have taken so far to ensure OA publishing is taken into account wherever appropriate.

  7. Effects of GABA[subscript A] Modulators on the Repeated Acquisition of Response Sequences in Squirrel Monkeys

    Science.gov (United States)

    Campbell, Una C.; Winsauer, Peter J.; Stevenson, Michael W.; Moerschbaecher, Joseph M.

    2004-01-01

    The present study investigated the effects of positive and negative GABA[subscript A] modulators under three different baselines of repeated acquisition in squirrel monkeys in which the monkeys acquired a three-response sequence on three keys under a second-order fixed-ratio (FR) schedule of food reinforcement. In two of these baselines, the…

  8. Interlocking Boards and Firm Performance: Evidence from a New Panel Database

    NARCIS (Netherlands)

    M.C. Non (Marielle); Ph.H.B.F. Franses (Philip Hans)

    2007-01-01

    An interlock between two firms occurs if the firms share one or more directors in their boards of directors. We explore the effect of interlocks on firm performance for 101 large Dutch firms using a large and new panel database. We use five different performance measures, and for each

  9. The A[subscript 1c] Blood Test: An Illustration of Principles from General and Organic Chemistry

    Science.gov (United States)

    Kerber, Robert C.

    2007-01-01

    The glycated hemoglobin blood test, usually designated as the A[subscript 1c] test, is a key measure of the effectiveness of glucose control in diabetics. The chemistry of glucose in the bloodstream, which underlies the test and its impact, provides an illustration of the importance of chemical equilibrium and kinetics to a major health problem.…

  10. Performance of popular open source databases for HEP related computing problems

    International Nuclear Information System (INIS)

    Kovalskyi, D; Sfiligoi, I; Wuerthwein, F; Yagil, A

    2014-01-01

    Databases are used in many software components of HEP computing, from monitoring and job scheduling to data storage and processing. It is not always clear at the beginning of a project if a problem can be handled by a single server, or if one needs to plan for a multi-server solution. Before a scalable solution is adopted, it helps to know how well it performs in a single server case to avoid situations when a multi-server solution is adopted mostly due to sub-optimal performance per node. This paper presents comparison benchmarks of popular open source database management systems. As a test application we use a user job monitoring system based on the Glidein workflow management system used in the CMS Collaboration.
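    A single-server benchmark of the kind described can be sketched as follows. Here sqlite3 stands in for the candidate DBMS, and the insert/select workload and sizes are arbitrary choices, not the paper's Glidein-based job-monitoring test application.

```python
# Minimal single-server DBMS benchmark sketch: time a bulk insert and an
# aggregate query. sqlite3 is a stand-in; workload sizes are arbitrary.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT)")

t0 = time.perf_counter()
con.executemany("INSERT INTO jobs (state) VALUES (?)",
                [("running" if i % 2 else "done",) for i in range(10_000)])
insert_s = time.perf_counter() - t0

t0 = time.perf_counter()
(count,) = con.execute(
    "SELECT COUNT(*) FROM jobs WHERE state = 'running'").fetchone()
query_s = time.perf_counter() - t0
```

    Repeating the same script against several database engines, as the paper does, gives a like-for-like view of single-server throughput before any multi-server design is considered.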

  11. An Integrated Database of Unit Training Performance: Description an Lessons Learned

    National Research Council Canada - National Science Library

    Leibrecht, Bruce

    1997-01-01

    The Army Research Institute (ARI) has developed a prototype relational database for processing and archiving unit performance data from home station, training area, simulation based, and Combat Training Center training exercises...

  12. A performance study on the synchronisation of heterogeneous Grid databases using CONStanza

    CERN Document Server

    Pucciani, G; Domenici, Andrea; Stockinger, Heinz

    2010-01-01

    In Grid environments, several heterogeneous database management systems are used in various administrative domains. However, data exchange and synchronisation need to be available across different sites and different database systems. In this article we present our data consistency service CONStanza and give details on how we achieve relaxed update synchronisation between different database implementations. The integration in existing Grid environments is one of the major goals of the system. Performance tests have been executed following a factorial approach. Detailed experimental results and a statistical analysis are presented to evaluate the system components and drive future developments. (C) 2010 Elsevier B.V. All rights reserved.

  13. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...

  14. Ground Reaction Force Differences in the Countermovement Jump in Girls with Different Levels of Performance

    Science.gov (United States)

    Floría, Pablo; Harrison, Andrew J.

    2013-01-01

    Purpose: The aim of this study was to ascertain the biomechanical differences between better and poorer performers of the vertical jump in a homogeneous group of children. Method: Twenty-four girls were divided into low-scoring (LOW; M [subscript age] = 6.3 ± 0.8 years) and high-scoring (HIGH; M [subscript age] = 6.6 ± 0.8 years) groups based on…

  15. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

    Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and

  16. Oracle Database 11gR2 Performance Tuning Cookbook

    CERN Document Server

    Fiorillo, Ciro

    2012-01-01

    In this book you will find both examples and theoretical concepts covered. Every recipe is based on a script/procedure explained step-by-step, with screenshots, while theoretical concepts are explained in the context of the recipe, to explain why a solution performs better than another. This book is aimed at software developers, software and data architects, and DBAs who are using or are planning to use the Oracle Database, who have some experience and want to solve performance problems faster and in a rigorous way. If you are an architect who wants to design better applications, a DBA who is

  17. Locational Prices in Capacity Subscription Market Considering Transmission Limitations

    Directory of Open Access Journals (Sweden)

    S. Babaeinejad Sarookolaee

    2013-06-01

    Full Text Available This study focuses on one of the most effective types of capacity markets, the Capacity Subscription (CS) market, which is predicted to be widely used in the upcoming smart grids. Although various studies have examined the mechanism and structure of capacity markets, their performance has rarely been tested in the presence of network constraints. Considering this gap, we propose a new method, named Local Capacity Prices (LP), to determine capacity prices in the network while accounting for transmission line flow limitations. This method has not been tried before in similar research. The philosophy of the proposed method is to determine capacity prices according to each consumer's share of total peak demand. The first advantage of LP is that consumers who benefit from the transmission facilities and are responsible for transmission congestion pay higher capacity prices than those whose electricity is supplied locally. The second advantage of LP is that consumers connected to the same bus do not have to pay the same capacity price, owing to their different shares of total peak demand. For further clarification, two other methods, named Branches Flow limit as a Global Limit (BFGL) and Locational Capacity Prices (LCP), are proposed and compared to the LP method in order to show the LP method's efficiency. The numerical results obtained from case studies show that the LP method yields a fairer market procedure and more efficient capacity prices than the BFGL and LCP methods.

  18. The performance of disk arrays in shared-memory database machines

    Science.gov (United States)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
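    The data temperature metric described above is commonly computed as sustainable accesses per second divided by gigabytes of capacity. The drive counts and I/O rates below are made up for illustration, not the paper's XPRS measurements.

```python
# Sketch of the "data temperature" storage metric:
#   temperature = sustainable I/Os per second / capacity in GB
# Drive counts, I/O rates, and capacities below are illustrative assumptions.

def data_temperature(ios_per_second, capacity_gb):
    return ios_per_second / capacity_gb

# Ten small-form-factor drives vs. a mirrored pair of large drives,
# both configurations holding the same 100 GB of data:
array_temp = data_temperature(ios_per_second=10 * 80, capacity_gb=100)
mirror_temp = data_temperature(ios_per_second=2 * 80, capacity_gb=100)
```

    With these assumed numbers the array sustains a far higher temperature at equal capacity, which is the intuition behind preferring many small drives for hot data.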

  19. The IPE Database: providing information on plant design, core damage frequency and containment performance

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Su, T.; Danziger, L.

    1996-01-01

    A database, called the IPE Database has been developed that stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to the Nuclear Regulatory Commission's (NRC) Generic Letter GL88-20. The IPE Database is a collection of linked files which store information about plant design, core damage frequency (CDF), and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Database can be manipulated so that queries regarding individual or groups of plants can be answered using the IPE Database

  20. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    International Nuclear Information System (INIS)

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid Proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan
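    The effect of the proxy caching tier can be illustrated with a toy in-process cache standing in for Squid. The class, key, and XML payload below are invented for illustration; the real system serves XML datagrams over HTTP.

```python
# Toy model of a caching tier in front of a database backend: repeated
# read-only requests (e.g. calibrations for the same run) hit the cache,
# so only the first request reaches the database. Names are invented.

class CachingTier:
    def __init__(self, backend):
        self.backend = backend   # callable that queries the database
        self.cache = {}
        self.db_hits = 0

    def get(self, key):
        if key not in self.cache:
            self.db_hits += 1    # only cache misses reach the database
            self.cache[key] = self.backend(key)
        return self.cache[key]

tier = CachingTier(backend=lambda run: f"<calibration run='{run}'/>")
for _ in range(1000):            # 1000 clients request the same run's data
    payload = tier.get(42)
```

    The scalable-deployment claim follows from this pattern: adding more clients adds cache hits, not database load, because read-only data for a given key never changes.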

  1. Downsizing a database platform for increased performance and decreased costs

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.M.; Tolendino, L.F.

    1993-06-01

    Technological advances in the world of microcomputers have brought forth affordable systems and powerful software that can compete with the more traditional world of minicomputers. This paper describes an effort at Sandia National Laboratories to decrease operational and maintenance costs and increase performance by moving a database system from a minicomputer to a microcomputer.

  2. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

    Full Text Available Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The

  3. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, aimed at estimating the number of task conductions. Estimating the total number of task conductions by direct counting is not easy to realize, nor is its data collection framework easy to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that enables estimation of the total number of conductions based on the instructions in the operating procedures of nuclear power plants. To reduce human errors, all information on the errors made by operators in the power plant should be systematically collected and examined. The Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could serve as a technical basis for generating human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimation of the number of task conductions based on operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the framework's applicability, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  4. Structural Design of HRA Database using generic task for Quantitative Analysis of Human Performance

    International Nuclear Information System (INIS)

    Kim, Seung Hwan; Kim, Yo Chan; Choi, Sun Yeong; Park, Jin Kyun; Jung Won Dea

    2016-01-01

    This paper describes the design of a generic-task-based HRA database for the quantitative analysis of human performance, aimed at estimating the number of task conductions. Estimating the total number of task conductions by direct counting is not easy to realize, nor is its data collection framework easy to maintain. To resolve this problem, this paper suggests an indirect method and a database structure using generic tasks that enables estimation of the total number of conductions based on the instructions in the operating procedures of nuclear power plants. To reduce human errors, all information on the errors made by operators in the power plant should be systematically collected and examined. The Korea Atomic Energy Research Institute (KAERI) is carrying out research to develop a data collection framework for a Human Reliability Analysis (HRA) database that could serve as a technical basis for generating human error probabilities (HEPs) and performance shaping factors (PSFs). As a result of the study, the essential table schema was designed for the generic task database, which stores generic tasks, procedure lists, task tree structures, and other supporting tables. Estimation of the number of task conductions based on operating procedures for HEP estimation was enabled through the generic task database and framework. To verify the framework's applicability, a case study of simulated experiments was performed and analyzed using graphical user interfaces developed in this study.

  5. Economics of Scholarly Publishing: Exploring the Causes of Subscription Price Variations of Scholarly Journals in Business Subject-Specific Areas

    Science.gov (United States)

    Liu, Lewis G.

    2011-01-01

    This empirical research investigates subscription price variations of scholarly journals in five business subject-specific areas using the semilogarithmic regression model. It has two main purposes. The first is to address the unsettled debate over whether or not and to what extent commercial publishers reap monopoly profits by overcharging…

  6. Pursuit of a scalable high performance multi-petabyte database

    CERN Document Server

    Hanushevsky, A

    1999-01-01

    When the BaBar experiment at the Stanford Linear Accelerator Center starts in April 1999, it will generate approximately 200 TB/year of data at a rate of 10 MB/sec for 10 years. A mere six years later, CERN, the European Laboratory for Particle Physics, will start an experiment whose data storage requirements are two orders of magnitude larger. In both experiments, all of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). The quantity and rate at which the data is produced requires the use of a high performance hierarchical mass storage system in place of a standard Unix file system. Furthermore, the distributed nature of the experiment, involving scientists from 80 Institutions in 10 countries, also requires an extended security infrastructure not commonly found in standard Unix file systems. The combination of challenges that must be overcome in order to effectively deal with a multi-petabyte object oriented database is substantial. Our particular approach...

  7. Sorption databases for increasing confidence in performance assessment - 16053

    International Nuclear Information System (INIS)

    Richter, Anke; Brendler, Vinzenz; Nebelung, Cordula; Payne, Timothy E.; Brasser, Thomas

    2009-01-01

    requires that all mineral constituents of the solid phase are characterized. Another issue is the large number of required parameters combined with time-consuming iterations. Addressing both approaches, we present two sorption databases, developed mainly by or under participation of the Forschungszentrum Dresden-Rossendorf (FZD). Both databases are implemented as relational databases, assist identification of critical data gaps and the evaluation of existing parameter sets, provide web-based data search and analyses, and permit the comparison of SCM predictions with K[subscript d] values. RES[superscript 3]T (Rossendorf Expert System for Surface and Sorption Thermodynamics) is a digitized thermodynamic sorption database (see www.fzd.de/db/RES3T.login) and is free of charge. It is mineral-specific and can therefore also be used for additive models of more complex solid phases. ISDA (Integrated Sorption Database System) connects SCM with the K[subscript d] concept but focuses on conventional K[subscript d]. The integrated datasets are accessible through a unified user interface. An application case, K[subscript d] values in Performance Assessment, is given. (authors)
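    For readers unfamiliar with the Kd concept that ISDA catalogues, a distribution coefficient relates the amount of a species sorbed on the solid phase to its equilibrium concentration in solution. A minimal sketch of how Kd is derived from a batch sorption experiment (the numbers are illustrative, not data from either database):

```python
# Sketch of the distribution coefficient (Kd) from a batch sorption
# experiment: Kd = (amount sorbed per kg of solid) / (equilibrium
# concentration in solution). All values below are hypothetical.

def kd(c_initial, c_equilibrium, volume_l, mass_kg):
    """Kd in L/kg from initial/equilibrium concentrations (per litre)."""
    sorbed_per_kg = (c_initial - c_equilibrium) * volume_l / mass_kg
    return sorbed_per_kg / c_equilibrium

# Hypothetical batch test: 0.1 L of solution over 0.01 kg of mineral,
# with the solute concentration dropping from 1.0 to 0.2 units/L.
print(round(kd(1.0, 0.2, 0.1, 0.01), 1))  # 40.0
```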

  8. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...
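    The kind of small, semantics-preserving schema change the book describes can be sketched with SQLite's copy-and-migrate pattern. This is a generic illustration of a "rename column" refactoring, not an example taken from the book:

```python
# A minimal "rename column" database refactoring in the copy-and-migrate
# style: a small structural change that preserves all data and semantics.
# Uses an in-memory SQLite database purely for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, fname TEXT)")
con.execute("INSERT INTO customer VALUES (1, 'Ada')")

# Refactor: fname -> first_name, migrating existing rows.
con.executescript("""
    CREATE TABLE customer_new (id INTEGER PRIMARY KEY, first_name TEXT);
    INSERT INTO customer_new (id, first_name) SELECT id, fname FROM customer;
    DROP TABLE customer;
    ALTER TABLE customer_new RENAME TO customer;
""")

print(con.execute("SELECT first_name FROM customer").fetchone()[0])  # Ada
```

In a production refactoring the book's approach would also add a transition period (e.g. synchronization triggers) so old and new schemas coexist while dependent applications migrate.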

  9. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  10. The Role of Subscription-Based Patrol and Restitution in the Future of Liberty

    Directory of Open Access Journals (Sweden)

    Gil Guillory

    2009-02-01

    Full Text Available Market anarchists are often keen to know how we might rid ourselves of the twin evils institutionalized in the state: taxation and monopoly. A possible future history for North America is suggested, focusing upon the implications of the establishment of a subscription-based patrol and restitution business sector. We favor Rothbard over Higgs regarding crises and liberty. We favor Barnett over Rothbard regarding vertical integration of security. We examine derived demand for adjudication, mediation and related goods; and we advance the thesis that private adjudication will tend to libertarianly just decisions. We show how firms will actively build civil society, strengthening and coordinating Nisbettian intermediating institutions.

  11. Prediction of VO[subscript 2]max in Children and Adolescents Using Exercise Testing and Physical Activity Questionnaire Data

    Science.gov (United States)

    Black, Nate E.; Vehrs, Pat R.; Fellingham, Gilbert W.; George, James D.; Hager, Ron

    2016-01-01

    Purpose: The purpose of this study was to evaluate the use of a treadmill walk-jog-run exercise test previously validated in adults and physical activity questionnaire data to estimate maximum oxygen consumption (VO[subscript 2]max) in boys (n = 62) and girls (n = 66) aged 12 to 17 years old. Methods: Data were collected from Physical Activity…

  12. Promoting public transport as a subscription service: Effects of a free month travel card

    DEFF Research Database (Denmark)

    Thøgersen, John

    2009-01-01

    Newspapers, book clubs, telephone services and many other subscription services are often marketed to new customers by means of a free or substantially discounted trial period. This article evaluates this method as a means to promote commuting by public transport in a field experiment and based...... that had an effect was the free month travel card, which led to a significant increase in commuting by public transport.As expected, the effect was mediated through a change in behavioural intentions rather than a change in perceived constraints. As expected, the effect became weaker when the promotion...

  13. Using the "K[subscript 5]Connected Cognition Diagram" to Analyze Teachers' Communication and Understanding of Regions in Three-Dimensional Space

    Science.gov (United States)

    Moore-Russo, Deborah; Viglietti, Janine M.

    2012-01-01

    This paper reports on a study that introduces and applies the "K[subscript 5]Connected Cognition Diagram" as a lens to explore video data showing teachers' interactions related to the partitioning of regions by axes in a three-dimensional geometric space. The study considers "semiotic bundles" (Arzarello, 2006), introduces "semiotic connections,"…

  14. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    Science.gov (United States)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of these massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Based on a key-value database, the ND technique makes full use of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive records that were absent, its overall performance, including querying and deriving the data, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required by next-generation telescopes.
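    The core ND idea, storing the complement of the data over a known key space and deriving presence from absence, can be sketched in a few lines. This is a conceptual toy, not MUSER's implementation:

```python
# Toy illustration of the negative-database (ND) idea: instead of storing
# which records exist, store only the missing keys over a known key space
# and derive presence by absence. When almost all keys exist, the
# complement is far smaller than the data itself.

KEY_SPACE = set(range(10))           # all possible record keys (tiny example)
observed = {0, 1, 2, 3, 5, 6, 8, 9}  # records actually produced

negative_db = KEY_SPACE - observed   # store only the absent keys

def exists(key):
    """A key is present iff it lies in the key space but not in the ND."""
    return key in KEY_SPACE and key not in negative_db

print(sorted(negative_db))   # [4, 7]
print(exists(5), exists(4))  # True False
```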

  15. Providing Availability, Performance, and Scalability By Using Cloud Database

    OpenAIRE

    Prof. Dr. Alaa Hussein Al-Hamami; RafalAdeeb Al-Khashab

    2014-01-01

    With the development of the internet, new techniques and concepts have drawn the attention of all internet users, especially in the development of information technology; one such concept is the cloud. Cloud computing includes different components, of which the cloud database has become an important one. A cloud database is a distributed database that delivers computing as a service, or in the form of a virtual machine image, via the internet instead of as a product; its advantage is that the database can...

  16. Effect of Eight Weekly Aerobic Training Program on Auditory Reaction Time and MaxVO[subscript 2] in Visual Impairments

    Science.gov (United States)

    Taskin, Cengiz

    2016-01-01

    The aim of the study was to examine the effect of eight weeks of aerobic exercise on auditory reaction time and MaxVO[subscript 2] in visually impaired children. Forty visually impaired children from Turkey with a blind-3 classification participated; experimental group: (age = 15.60 ± 1.10 years; height = 164.15 ± 4.88 cm; weight = 66.60 ± 4.77 kg) for twenty…

  17. Declining subscriptions to the Maliando Mutual Health Organisation in Guinea-Conakry (West Africa): what is going wrong?

    Science.gov (United States)

    Criel, Bart; Waelkens, Maria Pia

    2003-10-01

    Mutual Health Organisations (MHOs) are a type of community health insurance scheme that are being developed and promoted in sub-Saharan Africa. In 1998, an MHO was organised in a rural district of Guinea to improve access to quality health care. Households paid an annual insurance fee of about US$2 per individual. Contributions were voluntary. The benefit package included free access to all first line health care services (except for a small co-payment), free paediatric care, free emergency surgical care and free obstetric care at the district hospital. Also included was part of the cost of emergency transport to the hospital. In 1998, the MHO covered 8% of the target population, but, by 1999, the subscription rate had dropped to about 6%. In March 2000, focus groups were held with members and non-members of the scheme to find out why subscription rates were so low. The research indicated that a failure to understand the scheme does not explain these low rates. On the contrary, the great majority of research subjects, members and non-members alike, acquired a very accurate understanding of the concepts and principles underlying health insurance. They value the system's re-distributive effects, which go beyond the household, next of kin or village. The participants accurately point out the sharp differences that exist between traditional financial mechanisms and the principle of health insurance, as well as the advantages and disadvantages of both. The ease with which risk-pooling is accepted as a financial mechanism which addresses specific needs demonstrates that it is not, per se, necessary to build health insurance schemes on existing or traditional systems of mutual aid. The majority of the participants consider the individual premium of US$2 to be fair. There is, however, a problem of affordability for many poor and/or large families who cannot raise enough money to pay the subscription for all household members in one go. However, the main reason for

  18. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  19. Experimental Determination of pK[subscript a] Values and Metal Binding for Biomolecular Compounds Using [superscript 31]P NMR Spectroscopy

    Science.gov (United States)

    Swartz, Mason A.; Tubergen, Philip J.; Tatko, Chad D.; Baker, Rachael A.

    2018-01-01

    This lab experiment uses [superscript 31]P NMR spectroscopy of biomolecules to determine pK[subscript a] values and the binding energies of metal/biomolecule complexes. Solutions of adenosine nucleotides are prepared, and a series of [superscript 31]P NMR spectra are collected as a function of pH and in the absence and presence of magnesium or…

  20. Determination of the Antibiotic Oxytetracycline in Commercial Milk by Solid-Phase Extraction: A High-Performance Liquid Chromatography (HPLC) Experiment for Quantitative Instrumental Analysis

    Science.gov (United States)

    Mei-Ratliff, Yuan

    2012-01-01

    Trace levels of oxytetracycline spiked into commercial milk samples are extracted, cleaned up, and preconcentrated using a C[subscript 18] solid-phase extraction column. The extract is then analyzed by a high-performance liquid chromatography (HPLC) instrument equipped with a UV detector and a C[subscript 18] column (150 mm x 4.6 mm x 3.5 [mu]m).…

  1. The Impact of Data-Based Science Instruction on Standardized Test Performance

    Science.gov (United States)

    Herrington, Tia W.

    Increased teacher accountability efforts have resulted in the use of data to improve student achievement. This study addressed teachers' inconsistent use of data-driven instruction in middle school science. Evidence of the impact of data-based instruction on student achievement and school and district practices has been well documented by researchers. In science, less information has been available on teachers' use of data for classroom instruction. Drawing on data-driven decision making theory, the purpose of this study was to examine whether data-based instruction impacted performance on the science Criterion Referenced Competency Test (CRCT) and to explore the factors that impeded its use by a purposeful sample of 12 science teachers at a data-driven school. The research questions addressed in this study included understanding: (a) the association between student performance on the science portion of the CRCT and data-driven instruction professional development, (b) middle school science teachers' perception of the usefulness of data, and (c) the factors that hindered the use of data for science instruction. This study employed a mixed methods sequential explanatory design. Data collected included 8th grade CRCT data, survey responses, and individual teacher interviews. A chi-square test revealed no improvement in the CRCT scores following the implementation of professional development on data-driven instruction (chi[superscript 2](1) = .183, p = .67). Results from surveys and interviews revealed that teachers used data to inform their instruction, indicating time as the major hindrance to their use. Implications for social change include the development of lesson plans that will empower science teachers to deliver data-based instruction and students to achieve identified academic goals.
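    For context, a statistic of the form reported here comes from a standard 2x2 contingency-table chi-square test. A stdlib-only sketch of that computation, using hypothetical pass/fail counts rather than the study's actual data (for one degree of freedom, the p-value is erfc(sqrt(x/2))):

```python
# 2x2 chi-square test of independence, stdlib only. The counts are
# hypothetical; only the formulas are standard.
import math

def chi_square_2x2(table):
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected counts under independence: (row total * column total) / n
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    stat = sum((obs - exp) ** 2 / exp
               for row, erow in zip(table, expected)
               for obs, exp in zip(row, erow))
    p = math.erfc(math.sqrt(stat / 2))  # chi-square survival, df = 1
    return stat, p

# Hypothetical pass/fail counts before and after professional development.
stat, p = chi_square_2x2([[40, 60], [44, 56]])
print(round(stat, 3), round(p, 2))
```

A p-value this far above .05 would, as in the study, indicate no detectable improvement.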

  2. Alterations in CNS Activity Induced by Botulinum Toxin Treatment in Spasmodic Dysphonia: An H[subscript 2][superscript 15]O PET Study

    Science.gov (United States)

    Ali, S. Omar; Thomassen, Michael; Schulz, Geralyn M.; Hosey, Lara A.; Varga, Mary; Ludlow, Christy L.; Braun, Allen R.

    2006-01-01

    Speech-related changes in regional cerebral blood flow (rCBF) were measured using H[subscript 2][superscript 15]O positron-emission tomography in 9 adults with adductor spasmodic dysphonia (ADSD) before and after botulinum toxin (BTX) injection and 10 age- and gender-matched volunteers without neurological disorders. Scans were acquired at rest…

  3. Comparison of Cloud vs. Tape Backup Performance and Costs with Oracle Database

    OpenAIRE

    Zrnec, Aljaž; Lavbič, Dejan

    2011-01-01

    Current practice of backing up data is based on using backup tapes and remote locations for storing data. Nowadays, with the advent of cloud computing a new concept of database backup emerges. The paper presents the possibility of making backup copies of data in the cloud. We are mainly focused on performance and economic issues of making backups in the cloud in comparison to traditional backups. We tested the performance and overall costs of making backup copies of data in Ora...

  4. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  5. NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project

    Science.gov (United States)

    About the LCI Database Project The U.S. Life Cycle Inventory (LCI) Database is a publicly available database that allows users to objectively review and compare analysis results that are based on a similar source of critically reviewed LCI data through its LCI Database Project. NREL's High-Performance

  6. Effect of Uncertainties in CO2 Property Databases on the S-CO2 Compressor Performance

    International Nuclear Information System (INIS)

    Lee, Je Kyoung; Lee, Jeong Ik; Ahn, Yoonhan; Kim, Seong Gu; Cha, Je Eun

    2013-01-01

    Various S-CO[subscript 2] Brayton cycle experiment facilities are under construction or in operation for demonstration of the technology. However, during data analysis, S-CO[subscript 2] property databases are widely used to predict the performance and characteristics of the S-CO[subscript 2] Brayton cycle. Thus, a reliable property database is very important before any experiment data analysis or calculation. In this paper, the deviation between two widely used property databases is identified using three selected properties for comparison: C[subscript p], density and enthalpy. Furthermore, the effect of this deviation on the analysis of test data is briefly discussed; because of it, results of a test data analysis can contain critical errors. As S-CO[subscript 2] Brayton cycle researchers know, CO[subscript 2] near the critical point undergoes dramatic changes in thermodynamic properties. Thus, a potential source of error in property prediction exists in CO[subscript 2] properties near the critical point. During experiment data analysis with an S-CO[subscript 2] Brayton cycle experiment facility, thermodynamic properties are always involved in predicting component performance and characteristics. Thus, a precise CO[subscript 2] property database should be constructed or defined to develop Korean S-CO[subscript 2] Brayton cycle technology
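    The database-to-database comparison the abstract describes amounts to evaluating the same property at the same state points in each source and reporting the relative deviation. A sketch with made-up numbers (real work would query property databases such as NIST REFPROP rather than hard-coded tables):

```python
# Illustrative database-to-database deviation check for one property
# (density) at shared state points. All values are invented for the
# sketch; they are not from any real CO2 property database.

# (temperature [K], pressure [MPa]) -> density [kg/m^3]
db_a = {(305.0, 7.5): 420.0, (310.0, 8.0): 280.0}
db_b = {(305.0, 7.5): 432.0, (310.0, 8.0): 277.0}

# Relative deviation of database B from database A at each state point.
deviation = {state: abs(db_a[state] - db_b[state]) / db_a[state]
             for state in db_a}

for state, rel in deviation.items():
    print(state, f"{rel:.1%}")
```

Near the critical point, where properties change steeply, even small database disagreements of this kind can translate into large errors in computed compressor performance.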

  7. Reliability database development and plant performance improvement effort at Korea Hydro and Nuclear Power Co

    International Nuclear Information System (INIS)

    Oh, S. J.; Hwang, S. W.; Na, J. H.; Lim, H. S.

    2008-01-01

    Nuclear utilities in recent years have focused on improved plant performance and equipment reliability. In the U.S., there is a movement toward process integration. Examples are the INPO AP-913 equipment reliability program and the standard nuclear performance model developed by NEI. The synergistic effect of an integrated approach can be far greater than the individual effects of each program. In Korea, PSA for all Korean NPPs (Nuclear Power Plants) has been completed. Plant performance monitoring and improvement is an important goal for KHNP (Korea Hydro and Nuclear Power Company), and a risk monitoring system called RIMS has been developed for all nuclear plants. KHNP is in the process of voluntarily implementing a maintenance rule program similar to that in the U.S. In the future, KHNP would like to expand this effort into an equipment reliability program to achieve the highest equipment reliability and improved plant performance. For improving equipment reliability, the current trend is moving from corrective maintenance toward preventive/predictive maintenance. With the emphasis on preventive maintenance, the failure cause and the operation history and environment are important. Hence, the development of an accurate reliability database is necessary. Furthermore, the database should be updated regularly and maintained as a living program to reflect the current status of equipment reliability. This paper examines the development of a reliability database system and its application to maintenance optimization and Risk Informed Applications (RIA). (authors)

  8. The GABA[subscript A] Receptor Agonist Muscimol Induces an Age- and Region-Dependent Form of Long-Term Depression in the Mouse Striatum

    Science.gov (United States)

    Zhang, Xiaoqun; Yao, Ning; Chergui, Karima

    2016-01-01

    Several forms of long-term depression (LTD) of glutamatergic synaptic transmission have been identified in the dorsal striatum and in the nucleus accumbens (NAc). Such experience-dependent synaptic plasticity might play important roles in reward-related learning. The GABA[subscript A] receptor agonist muscimol was recently found to trigger a…

  9. The SACADA database for human reliability and human performance

    International Nuclear Information System (INIS)

    James Chang, Y.; Bley, Dennis; Criscione, Lawrence; Kirwan, Barry; Mosleh, Ali; Madary, Todd; Nowell, Rodney; Richards, Robert; Roth, Emilie M.; Sieben, Scott; Zoulis, Antonios

    2014-01-01

    Lack of appropriate and sufficient human performance data has been identified as a key factor affecting human reliability analysis (HRA) quality, especially in the estimation of human error probability (HEP). The Scenario Authoring, Characterization, and Debriefing Application (SACADA) database was developed by the U.S. Nuclear Regulatory Commission (NRC) to address this data need. An agreement between the NRC and the South Texas Project Nuclear Operating Company (STPNOC) was established to support the SACADA development, with the aim of making the SACADA tool suitable for implementation in nuclear power plants' operator training programs to collect operator performance information. The collected data would support the STPNOC's operator training program and be shared with the NRC to improve HRA quality. This paper discusses the SACADA data taxonomy, its theoretical foundation, the prospective data to be generated from the SACADA raw data to inform human reliability and human performance, and considerations on the use of simulator data for HRA. Each SACADA data point consists of two information segments: context and performance results. Context is a characterization of the performance challenges to task success. The performance results are the results of performing the task. The data taxonomy uses a macrocognitive functions model for its framework. At a high level, information is classified according to the macrocognitive functions of detecting the plant abnormality, understanding the abnormality, deciding the response plan, executing the response plan, and team-related aspects (i.e., communication, teamwork, and supervision). The data are expected to be useful for analyzing the relations between context, error modes and error causes in human performance.
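    The two-segment record structure described (a context characterization plus a performance result, classified by macrocognitive function) could be modeled minimally as follows. The field names here are assumptions for illustration, not the NRC's actual SACADA schema:

```python
# Minimal sketch of a two-segment SACADA-style record: context plus
# performance result, tagged with a macrocognitive function. Field names
# are hypothetical, chosen only to mirror the structure in the abstract.
from dataclasses import dataclass

MACROCOGNITIVE_FUNCTIONS = {
    "detecting", "understanding", "deciding", "executing", "team",
}

@dataclass
class SacadaRecord:
    context: dict    # characterization of the performance challenges
    function: str    # macrocognitive function exercised by the task
    success: bool    # performance result of the task

    def __post_init__(self):
        if self.function not in MACROCOGNITIVE_FUNCTIONS:
            raise ValueError(f"unknown macrocognitive function: {self.function}")

rec = SacadaRecord(context={"alarm_load": "high"},
                   function="detecting", success=True)
print(rec.function, rec.success)  # detecting True
```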

  10. The high-performance database archiver for the LHC experiments

    CERN Document Server

    González-Berges, M

    2007-01-01

    Each of the Large Hadron Collider (LHC) experiments will be controlled by a large distributed system built with the Supervisory Control and Data Acquisition (SCADA) tool Prozessvisualisierungs- und Steuerungssystem (PVSS). There will be on the order of 150 computers and one million input/output parameters per experiment. The values read from the hardware, the alarms generated and the user actions will be archived for the later physics analysis, the operation and the debugging of the control system itself. Although the original PVSS implementation of a database archiver was appropriate for standard industrial use, the performance was not sufficient for the experiments. A collaboration was setup between CERN and ETM, the company that develops PVSS. Changes in the architecture and several optimizations were made and tested in a system of a comparable size to the final ones. As a result, we have been able to improve the performance by more than one order of magnitude, and what is more important, we now have a scal...

  11. The Human Gene Mutation Database: building a comprehensive mutation repository for clinical and molecular genetics, diagnostic testing and personalized genomic medicine.

    Science.gov (United States)

    Stenson, Peter D; Mort, Matthew; Ball, Edward V; Shaw, Katy; Phillips, Andrew; Cooper, David N

    2014-01-01

    The Human Gene Mutation Database (HGMD®) is a comprehensive collection of germline mutations in nuclear genes that underlie, or are associated with, human inherited disease. By June 2013, the database contained over 141,000 different lesions detected in over 5,700 different genes, with new mutation entries currently accumulating at a rate exceeding 10,000 per annum. HGMD was originally established in 1996 for the scientific study of mutational mechanisms in human genes. However, it has since acquired a much broader utility as a central unified disease-oriented mutation repository utilized by human molecular geneticists, genome scientists, molecular biologists, clinicians and genetic counsellors as well as by those specializing in biopharmaceuticals, bioinformatics and personalized genomics. The public version of HGMD (http://www.hgmd.org) is freely available to registered users from academic institutions/non-profit organizations whilst the subscription version (HGMD Professional) is available to academic, clinical and commercial users under license via BIOBASE GmbH.

  12. Database usage and performance for the Fermilab Run II experiments

    International Nuclear Information System (INIS)

    Bonham, D.; Box, D.; Gallas, E.; Guo, Y.; Jetton, R.; Kovich, S.; Kowalkowski, J.; Kumar, A.; Litvintsev, D.; Lueking, L.; Stanfield, N.; Trumbo, J.; Vittone-Wiersma, M.; White, S.P.; Wicklund, E.; Yasuda, T.; Maksimovic, P.

    2004-01-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has presented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  13. YUCSA: A CLIPS expert database system to monitor academic performance

    Science.gov (United States)

    Toptsis, Anestis A.; Ho, Frankie; Leindekar, Milton; Foon, Debra Low; Carbonaro, Mike

    1991-01-01

    The York University CLIPS Student Administrator (YUCSA), an expert database system implemented in C Language Integrated Processing System (CLIPS), for monitoring the academic performance of undergraduate students at York University, is discussed. The expert system component in the system has already been implemented for two major departments, and it is under testing and enhancement for more departments. Also, more elaborate user interfaces are under development. We describe the design and implementation of the system, problems encountered, and immediate future plans. The system has excellent maintainability and it is very efficient, taking less than one minute to complete an assessment of one student.

  14. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.

  15. A Pictorial Visualization of Normal Mode Vibrations of the Fullerene (C[subscript 60]) Molecule in Terms of Vibrations of a Hollow Sphere

    Science.gov (United States)

    Dunn, Janette L.

    2010-01-01

    Understanding the normal mode vibrations of a molecule is important in the analysis of vibrational spectra. However, the complicated 3D motion of large molecules can be difficult to interpret. We show how images of normal modes of the fullerene molecule C[subscript 60] can be made easier to understand by superimposing them on images of the normal…

  16. PI[subscript 3]-Kinase Cascade Has a Differential Role in Acquisition and Extinction of Conditioned Fear Memory in Juvenile and Adult Rats

    Science.gov (United States)

    Slouzkey, Ilana; Maroun, Mouna

    2016-01-01

    The basolateral amygdala (BLA) to medial prefrontal cortex (mPFC) circuit plays a crucial role in the acquisition and extinction of fear memory. Extinction of aversive memories is mediated, at least in part, by the phosphoinositide-3 kinase (PI[subscript 3]K)/Akt pathway in adult rats. There is recent interest in the neural mechanisms that mediate fear…

  17. Differentiation of several interstitial lung disease patterns in HRCT images using support vector machine: role of databases on performance

    Science.gov (United States)

    Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan

    2016-03-01

    Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High Resolution Computed Tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorised into several patterns, viz. consolidation, emphysema, ground glass opacity, nodular, normal, etc., based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such a scenario, a computer-aided diagnosis system can help the clinician identify the patterns. Several approaches have been proposed for the classification of ILD patterns, including the computation of textural features and the training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by a performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of SVM is greater than that of ANN when the classifier is trained and tested on the same database. The investigation was continued further to test the variation in classifier accuracy when training and testing are performed with alternate databases, and when the classifier is trained and tested on a database formed by merging samples of the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively. There is a significant improvement in average accuracy when the classifiers are trained and tested on the merged database. This indicates the dependency of classification accuracy on the training data. It is observed that SVM outperforms ANN when the same database is used for training and testing.

  18. Performance of Point and Range Queries for In-memory Databases using Radix Trees on GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Maksudul [ORNL; Yoginath, Srikanth B [ORNL; Perumalla, Kalyan S [ORNL

    2016-01-01

    In in-memory database systems augmented by hardware accelerators, accelerating the index searching operations can greatly increase the runtime performance of database queries. Recently, adaptive radix trees (ART) have been shown to provide very fast index search implementation on the CPU. Here, we focus on an accelerator-based implementation of ART. We present a detailed performance study of our GPU-based adaptive radix tree (GRT) implementation over a variety of key distributions, synthetic benchmarks, and actual keys from music and book data sets. The performance is also compared with other index-searching schemes on the GPU. GRT on modern GPUs achieves some of the highest rates of index searches reported in the literature. For point queries, a throughput of up to 106 million and 130 million lookups per second is achieved for sparse and dense keys, respectively. For range queries, GRT yields 600 million and 1000 million lookups per second for sparse and dense keys, respectively, on a large dataset of 64 million 32-bit keys.
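
    The point-query mechanics behind a radix-tree index can be sketched on the CPU with a byte-wise trie. This toy code is not the paper's GRT (it ignores the GPU mapping and ART's adaptive node sizes entirely); it only illustrates how a fixed-width key is resolved by walking one partial key per level:

```python
# Minimal CPU sketch of point queries over a radix (prefix) tree for
# 32-bit integer keys. Hypothetical illustration, not the paper's GRT.

class RadixNode:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = {}  # maps one key byte -> child node
        self.value = None   # payload stored at the end of a full 4-byte path

class RadixTree:
    def __init__(self):
        self.root = RadixNode()

    def insert(self, key: int, value) -> None:
        node = self.root
        for shift in (24, 16, 8, 0):        # walk byte by byte, MSB first
            byte = (key >> shift) & 0xFF
            node = node.children.setdefault(byte, RadixNode())
        node.value = value

    def lookup(self, key: int):
        node = self.root
        for shift in (24, 16, 8, 0):
            node = node.children.get((key >> shift) & 0xFF)
            if node is None:
                return None                  # point-query miss
        return node.value

tree = RadixTree()
for k in (42, 1_000_000, 0xFFFFFFFF):
    tree.insert(k, f"row-{k}")
print(tree.lookup(1_000_000))  # -> row-1000000
print(tree.lookup(7))          # -> None
```

    A lookup touches at most four nodes regardless of tree size, which is the depth bound that makes radix trees attractive for massively parallel batched searches.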

  19. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  20. A database for CO2 Separation Performances of MOFs based on Computational Materials Screening.

    Science.gov (United States)

    Altintas, Cigdem; Avci, Gokay; Daglar, Hilal; Nemati Vesali Azar, Ayda; Velioglu, Sadiye; Erucar, Ilknur; Keskin, Seda

    2018-05-03

    Metal organic frameworks (MOFs) have been considered as great candidates for CO2 capture. Considering the very large number of available MOFs, high-throughput computational screening plays a critical role in identifying the top performing materials for target applications in a time-effective manner. In this work, we used molecular simulations to screen the most recent and complete MOF database to identify the most promising materials for CO2 separation from flue gas (CO2/N2) and landfill gas (CO2/CH4) under realistic operating conditions. We first validated our approach by comparing the results of our molecular simulations for the CO2 uptakes and the CO2/N2 and CO2/CH4 selectivities of various types of MOFs with the available experimental data. We then computed binary CO2/N2 and CO2/CH4 mixture adsorption data for the entire MOF database and used these results to calculate several adsorbent selection metrics such as selectivity, working capacity, adsorbent performance score, regenerability, and separation potential. MOFs were ranked based on the combination of these metrics, and the top performing MOF adsorbents that can achieve CO2/N2 and CO2/CH4 separations with high performance were identified. Molecular simulations of the adsorption of a ternary CO2/N2/CH4 mixture were performed for these top materials in order to provide a more realistic performance assessment of MOF adsorbents. Structure-performance analysis showed that MOFs with ΔQ > 30 kJ/mol, 3.8 Å ≤ PLD ≤ 5 Å, 5 Å ≤ LCD ≤ 7.5 Å, 0.5 ≤ ϕ ≤ 0.75, SA ≤ 1,000 m²/g, and ρ > 1 g/cm³ are the best candidates for the selective separation of CO2 from flue gas and landfill gas. This information will be very useful to design novel MOFs with the desired structural features that can lead to high CO2 separation potentials. Finally, an online, freely accessible database, https://cosmoserc.ku.edu.tr, was established, for the first time in the literature, which reports all computed adsorbent metrics of 3,816 MOFs for CO2/N2, CO2/CH4
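
    The adsorbent selection metrics named in the abstract can be illustrated with their common textbook definitions; all uptake numbers below are invented for the example and do not come from the study:

```python
# Hypothetical illustration of adsorbent selection metrics (selectivity,
# working capacity, adsorbent performance score, regenerability) using
# standard definitions and made-up uptakes. Real values come from the
# molecular simulations, not from this sketch.

def selectivity(x_co2, x_other, y_co2, y_other):
    """Adsorption selectivity: adsorbed-phase ratio over gas-phase ratio."""
    return (x_co2 / x_other) / (y_co2 / y_other)

def working_capacity(n_ads, n_des):
    """CO2 uptake at adsorption minus uptake at desorption (mol/kg)."""
    return n_ads - n_des

# Made-up uptakes for one MOF under flue-gas-like conditions (CO2:N2 = 15:85)
n_co2_ads, n_n2_ads = 3.0, 0.4   # mol/kg at adsorption pressure
n_co2_des = 0.8                  # mol/kg at desorption pressure

s = selectivity(n_co2_ads, n_n2_ads, 0.15, 0.85)
dn = working_capacity(n_co2_ads, n_co2_des)
aps = s * dn                     # adsorbent performance score (one common form)
r = 100.0 * dn / n_co2_ads       # regenerability, %
print(round(s, 1), round(dn, 2), round(aps, 1), round(r, 1))
```

    Ranking candidates then reduces to sorting on a combination of these scalars, which is what makes screening thousands of MOFs tractable.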

  1. The Effects of the Rope Jump Training Program in Physical Education Lessons on Strength, Speed and VO[subscript 2] Max in Children

    Science.gov (United States)

    Eler, Nebahat; Acar, Hakan

    2018-01-01

    The aim of this study is to examine the effects of rope-jump training program in physical education lessons on strength, speed and VO[subscript 2] max in 10-12 year old boys. 240 male students; rope-jump group (n = 120) and control group (n = 120) participated in the study. Rope-Jump group continued 10 weeks of regular physical education and sport…

  2. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and by adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, if data copies are located close to clients. Despite its advantages, replication is not a straightforward technique to apply, and

  3. The influence of the negative-positive ratio and screening database size on the performance of machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Bojarski, Andrzej J

    2017-01-01

    The machine learning-based virtual screening of molecular databases is a commonly used approach to identify hits. However, many aspects associated with training predictive models can influence the final performance and, consequently, the number of hits found. Thus, we performed a systematic study of the simultaneous influence of the proportion of negatives to positives in the testing set, the size of screening databases and the type of molecular representations on the effectiveness of classification. The results obtained for eight protein targets, five machine learning algorithms (SMO, Naïve Bayes, IBk, J48 and Random Forest), two types of molecular fingerprints (MACCS and CDK FP) and eight screening databases with different numbers of molecules confirmed our previous findings that increases in the ratio of negative to positive training instances greatly influenced most of the investigated parameters of the ML methods in simulated virtual screening experiments. However, the performance of screening was shown to also be highly dependent on the molecular library dimension. Generally, with the increasing size of the screened database, the optimal training ratio also increased, and this ratio can be rationalized using the proposed cost-effectiveness threshold approach. To increase the performance of machine learning-based virtual screening, the training set should be constructed in a way that considers the size of the screening database.
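
    Why the negative-to-positive ratio matters can be seen with simple arithmetic: at a fixed true-positive and false-positive rate, precision collapses as the library grows more negative-heavy. The operating point below is assumed for illustration, not taken from the paper:

```python
# Numeric illustration (not from the paper) of class-ratio effects in
# virtual screening: the same classifier operating point yields very
# different precision as negatives per positive increase.

def precision(n_pos, n_neg, tpr, fpr):
    tp = tpr * n_pos   # true positives retrieved
    fp = fpr * n_neg   # false positives retrieved
    return tp / (tp + fp)

tpr, fpr = 0.80, 0.01  # assumed classifier operating point
for ratio in (1, 10, 100, 1000):           # negatives per positive
    p = precision(100, 100 * ratio, tpr, fpr)
    print(f"1:{ratio} -> precision {p:.3f}")
```

    This is one intuition for why the optimal training ratio shifts with screening-library size: the training distribution has to anticipate the flood of negatives the deployed model will face.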

  4. ISSUES IN MOBILE DISTRIBUTED REAL TIME DATABASES: PERFORMANCE AND REVIEW

    OpenAIRE

    VISHNU SWAROOP,; Gyanendra Kumar Gupta,; UDAI SHANKER

    2011-01-01

    The increase in handy, small electronic devices in computing fields has made computing more popular and useful in business. Tremendous advances in wireless networks and portable computing devices have led to the development of mobile computing. Support for real-time database systems depends upon timing constraints; the availability of data in distributed databases, together with ubiquitous computing, pulls the mobile database concept into a new form of technology, mobile distributed ...

  5. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions that have traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, at present the open community version of MySQL and single-instance Oracle database servers. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  6. Accelerating the energy retrofit of commercial buildings using a database of energy efficiency performance

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Hong, Tianzhen; Piette, Mary Ann; Sawaya, Geof; Chen, Yixing; Taylor-Lange, Sarah C.

    2015-01-01

    Small and medium-sized commercial buildings can be retrofitted to significantly reduce their energy use; however, this is a huge challenge, as owners usually lack the expertise and resources to conduct the detailed on-site energy audits needed to identify and evaluate cost-effective energy technologies. This study presents DEEP (database of energy efficiency performance), which provides a direct resource for quick retrofit analysis of commercial buildings. DEEP, compiled from the results of about ten million EnergyPlus simulations, enables easy screening of ECMs (energy conservation measures) and retrofit analysis. The simulations utilize prototype models representative of small and mid-size offices and retail buildings in California climates. In the formulation of DEEP, large-scale EnergyPlus simulations were conducted on high-performance computing clusters to evaluate hundreds of individual and packaged ECMs covering envelope, lighting, heating, ventilation, air-conditioning, plug loads, and service hot water. The architecture and simulation environment used to create DEEP are flexible and can expand to cover additional building types, additional climates, and new ECMs. In this study DEEP is integrated into a web-based retrofit toolkit, the Commercial Building Energy Saver, which provides a platform for energy retrofit decision making by querying DEEP and unearthing recommended ECMs, their estimated energy savings and financial payback. - Highlights: • DEEP (database of energy efficiency performance) supports building retrofits. • DEEP is an SQL database with pre-simulated results from 10 million EnergyPlus runs. • DEEP covers 7 building types, 6 vintages, 16 climates, and 100 energy measures. • DEEP accelerates the retrofit of small commercial buildings to save energy use and cost. • DEEP can be expanded and integrated with third-party energy software tools.
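
    Since DEEP is described as an SQL database of pre-simulated results, a retrofit-toolkit query against it might look like the following sketch. The table schema, column names and values here are invented for illustration; only the query pattern (filter by building type and climate, rank ECMs by savings) reflects the described usage:

```python
# Hypothetical sketch of querying a DEEP-like SQL database of
# pre-simulated ECM results. Schema and numbers are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ecm_results (
    building_type TEXT, climate TEXT, ecm TEXT,
    energy_savings_pct REAL, payback_years REAL)""")
conn.executemany(
    "INSERT INTO ecm_results VALUES (?, ?, ?, ?, ?)",
    [("small_office", "CZ3", "LED lighting", 9.5, 2.1),
     ("small_office", "CZ3", "Roof insulation", 4.2, 8.7),
     ("small_office", "CZ3", "Economizer", 6.8, 3.4)])

# Recommend ECMs for one building/climate, best savings first,
# keeping only those with an acceptable payback.
rows = conn.execute(
    """SELECT ecm, energy_savings_pct, payback_years
       FROM ecm_results
       WHERE building_type = ? AND climate = ? AND payback_years < 5
       ORDER BY energy_savings_pct DESC""",
    ("small_office", "CZ3")).fetchall()
for ecm, sav, pay in rows:
    print(f"{ecm}: {sav}% savings, {pay} yr payback")
```

    Because the simulations are precomputed, such a lookup replaces an on-site audit plus a fresh EnergyPlus run with a single indexed query.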

  7. Nuclear materials thermo-physical property database and property analysis using the database

    International Nuclear Information System (INIS)

    Jeong, Yeong Seok

    2002-02-01

    The thermo-physical properties of nuclear materials must be known and understood for the evaluation and analysis of steady-state and accident conditions in commercial and research reactors. In this study, a thermo-physical property database and web site for nuclear materials were developed. Using this database, the thermal conductivity, heat capacity, enthalpy, and linear thermal expansion of fuel and cladding materials were analysed, and the thermo-physical property models used in nuclear fuel performance evaluation codes were compared with the experimental data in the database. The comparison of the thermo-physical property models for UO2 fuel and cladding in the major performance evaluation codes shows that they are similar.

  8. Peer Review Quality and Transparency of the Peer-Review Process in Open Access and Subscription Journals.

    Science.gov (United States)

    Wicherts, Jelte M

    2016-01-01

    Recent controversies highlighting substandard peer review in Open Access (OA) and traditional (subscription) journals have increased the need for authors, funders, publishers, and institutions to assure quality of peer-review in academic journals. I propose that transparency of the peer-review process may be seen as an indicator of the quality of peer-review, and develop and validate a tool enabling different stakeholders to assess transparency of the peer-review process. Based on editorial guidelines and best practices, I developed a 14-item tool to rate transparency of the peer-review process on the basis of journals' websites. In Study 1, a random sample of 231 authors of papers in 92 subscription journals in different fields rated transparency of the journals that published their work. Authors' ratings of the transparency were positively associated with quality of the peer-review process but unrelated to journal's impact factors. In Study 2, 20 experts on OA publishing assessed the transparency of established (non-OA) journals, OA journals categorized as being published by potential predatory publishers, and journals from the Directory of Open Access Journals (DOAJ). Results show high reliability across items (α = .91) and sufficient reliability across raters. Ratings differentiated the three types of journals well. In Study 3, academic librarians rated a random sample of 140 DOAJ journals and another 54 journals that had received a hoax paper written by Bohannon to test peer-review quality. Journals with higher transparency ratings were less likely to accept the flawed paper and showed higher impact as measured by the h5 index from Google Scholar. The tool to assess transparency of the peer-review process at academic journals shows promising reliability and validity. The transparency of the peer-review process can be seen as an indicator of peer-review quality allowing the tool to be used to predict academic quality in new journals.
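
    The reported inter-item reliability (α = .91) is Cronbach's alpha. As a sketch, it can be computed from a ratings matrix like this; the ratings below are invented toy data, and only the formula mirrors the kind of analysis reported:

```python
# Sketch of Cronbach's alpha, the internal-consistency statistic the
# study reports for its 14-item transparency tool. Toy data, 3 items.

def cronbach_alpha(items):
    """items: one inner list of scores per item, same raters in each."""
    k = len(items)                     # number of items
    def var(xs):                       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_var_sum = sum(var(it) for it in items)
    n = len(items[0])
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

ratings = [            # 3 items x 4 raters (invented)
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))
```

    Alpha approaches 1 when the items rise and fall together across raters, which is why it serves as evidence that the 14 transparency items measure one underlying construct.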

  9. The shortest path algorithm performance comparison in graph and relational database on a transportation network

    Directory of Open Access Journals (Sweden)

    Mario Miler

    2014-02-01

    In the field of geoinformation and transportation science, the shortest path is calculated on graph data mostly found in road and transportation networks. This data is often stored in various database systems. Many applications dealing with transportation networks require calculation of the shortest path. The objective of this research is to compare the performance of Dijkstra shortest path calculation in PostgreSQL (with pgRouting) and the Neo4j graph database for the purpose of determining if there is any difference regarding the speed of the calculation. Benchmarking was done on commodity hardware using an OpenStreetMap road network. The first assumption is that the Neo4j graph database would be well suited for shortest path calculation on transportation networks, but this does not come without some cost. Memory proved to be an issue in the Neo4j setup when dealing with larger transportation networks.
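
    The algorithm being benchmarked is Dijkstra's. A minimal in-memory version over an adjacency list looks like the following; the graph and weights are invented here, whereas the study ran the same algorithm inside pgRouting and Neo4j on OpenStreetMap data:

```python
# Minimal Dijkstra sketch over an adjacency-list graph (toy weights,
# not OpenStreetMap data). Illustrates the algorithm the paper times
# inside PostgreSQL/pgRouting and Neo4j.
import heapq

def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {                                  # hypothetical weighted road graph
    "A": [("B", 4.0), ("C", 2.0)],
    "C": [("B", 1.0), ("D", 7.0)],
    "B": [("D", 1.0)],
}
print(dijkstra(roads, "A"))  # shortest A->D cost is 4.0 (via C and B)
```

    In-database implementations run this same relaxation loop, but storage layout (relational tables versus native graph adjacency) dominates the observed speed differences.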

  10. Informing Evidence Based Decisions: Usage Statistics for Online Journal Databases

    Directory of Open Access Journals (Sweden)

    Alexei Botchkarev

    2017-06-01

    Objective – The primary objective was to examine online journal database usage statistics for a provincial ministry of health in the context of evidence based decision-making. In addition, the study highlights implementation of the Journal Access Centre (JAC) that is housed and powered by the Ontario Ministry of Health and Long-Term Care (MOHLTC) to inform health systems policy-making. Methods – This was a prospective case study using descriptive analysis of the JAC usage statistics of journal articles from January 2009 to September 2013. Results – JAC enables ministry employees to access approximately 12,000 journals with full-text articles. JAC usage statistics for the 2011-2012 calendar years demonstrate a steady level of activity in terms of searches, with monthly averages of 5,129. In 2009-2013, a total of 4,759 journal titles were accessed including 1,675 journals with full-text. Usage statistics demonstrate that the actual consumption was over 12,790 full-text downloaded articles or approximately 2,700 articles annually. Conclusion – JAC’s steady level of activities, revealed by the study, reflects continuous demand for JAC services and products. It testifies that access to online journal databases has become part of routine government knowledge management processes. MOHLTC’s broad area of responsibilities with dynamically changing priorities translates into the diverse information needs of its employees and a large set of required journals. Usage statistics indicate that MOHLTC information needs cannot be mapped to a reasonably compact set of “core” journals with a subsequent subscription to those.

  11. News from the Library: Scientific Information Service - service interruption

    CERN Multimedia

    2014-01-01

    Techniques de l'Ingénieur has been part of the selection of databases offered by the Scientific Information Service for the last five years.   Unfortunately, as a consequence of budget reductions, and after careful consideration of all available options, we have to end this subscription. It will be still possible to purchase access to individual chapters via the Library services.  Furthermore, we are considering ending our subscriptions to Web of Science and Springer Materials (the Landolt-Börnstein database) during the course of 2015. We thank you for your understanding and welcome your feedback to library.desk@cern.ch

  12. The Danish fetal medicine database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte Kvist; Kopp, Tine Iskov; Tabor, Ann

    2016-01-01

    trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units’Astraia databases to the central database via...... analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database...

  13. Database characterisation of HEP applications

    International Nuclear Information System (INIS)

    Piorkowski, Mariusz; Grancher, Eric; Topurov, Anton

    2012-01-01

    Oracle-based database applications underpin many key aspects of operations for both the LHC accelerator and the LHC experiments. In addition to the overall performance, the predictability of the response is a key requirement to ensure smooth operations, and delivering predictability requires understanding the applications from the ground up. Fortunately, database management systems provide several tools to check, measure, analyse and gather useful information. We present our experiences characterising the performance of several typical HEP database applications, characterisations that were used to deliver improved predictability and scalability as well as for optimising the hardware platform choice as we migrated to new hardware and Oracle 11g.

  14. Applicability of thermodynamic database of radioactive elements developed for the Japanese performance assessment of HLW repository

    International Nuclear Information System (INIS)

    Yui, Mikazu; Shibata, Masahiro; Rai, Dhanpat; Ochs, Michael

    2003-01-01

    In 1999 the Japan Nuclear Cycle Development Institute (JNC) published a second progress report (also known as the H12 report) on high-level radioactive waste (HLW) disposal in Japan (JNC 1999). This report helped to develop confidence in the selected HLW disposal system and to establish, in 2000, the implementation body for the disposal of HLW. JNC developed an in-house thermodynamic database of radioactive elements for performance analysis of the engineered barrier system (EBS) and the geosphere for the H12 report. This paper briefly presents the status of JNC's thermodynamic database and its applicability to performing realistic analyses of the solubilities of radioactive elements, the evolution of solubility-limiting solid phases, predictions of the redox state of Pu in the neutral pH range under reducing conditions, and estimates of the solubilities of radioactive elements under cementitious conditions. (author)

  15. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure, such as OS parameters and packages, kernel versions, database parameters and patches, or even schema changes, can potentially harm production services. This presentation shows how automatic and regular testing of Oracle databases can be achieved in such an agile environment.

  16. Operation and Performance of the ATLAS Muon Spectrometer Databases during 2011-12 Data Taking

    CERN Document Server

    Verducci, Monica

    2014-01-01

    The size and complexity of the ATLAS experiment at the Large Hadron Collider, including its Muon Spectrometer, raise unprecedented challenges in terms of operation, software model and data management. One of the challenging tasks is the storage of non-event data produced by the calibration and alignment stream processes and by online and offline monitoring frameworks, which can unveil problems in the detector hardware and in the data processing chain. During 2011 and 2012 data taking, the software model and data processing enabled high quality track resolution as a better understanding of the detector performance was developed using the most reliable detector simulation and reconstruction. This work summarises the various aspects of the Muon Spectrometer Databases, with particular emphasis given to the Conditions Databases and their usage in the data analysis.

  17. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized, with one version for IBM PCs and one for the Mac, each accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files, one for each of the five types of tested configurations. The contents of each provide significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  18. Mental Toughness Moderates Social Loafing in Cycle Time-Trial Performance

    Science.gov (United States)

    Haugen, Tommy; Reinboth, Michael; Hetlelid, Ken J.; Peters, Derek M.; Høigaard, Rune

    2016-01-01

    Purpose: The purpose of this study was to determine if mental toughness moderated the occurrence of social loafing in cycle time-trial performance. Method: Twenty-seven men (M[subscript age] = 17.7 years, SD = 0.6) completed the Sport Mental Toughness Questionnaire prior to completing a 1-min cycling trial under 2 conditions: once with individual…

  19. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO. It was created and implemented to support the storage of the monitored variables of the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The logical data model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented, as are the effects of the model rules on the acquisition, loading and availability of the final information, from a performance standpoint, since the acquisition process loads and provides large amounts of information in small intervals of time. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful, and its existence leads to a considerably favorable situation. It is now essential to the routine of the researchers involved, not only due to the substantial improvement of the process but also due to the reliability associated with it. (author)

  20. Economics of access versus ownership: the costs and benefits of access to scholarly articles via interlibrary loan and journal subscriptions

    CERN Document Server

    Kingma, Bruce

    2013-01-01

    The Economics of Access Versus Ownership offers library professionals a model economic analysis of providing access to journal articles through interlibrary loan as compared to library subscriptions to the journals. This model enables library directors to do an economic analysis of interlibrary loan and collection development in their own libraries and then make cost-efficient decisions about the use of these services. This practical book's analysis and conclusions are based on 1994/95 academic year research conducted by the State University of New York libraries at Albany, Binghamton, Buffa

  1. Supply Chain Initiatives Database

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-01

    The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.

  2. MTF Database: A Repository of Students' Academic Performance Measurements for the Development of Techniques for Evaluating Team Functioning

    Science.gov (United States)

    Hsiung, Chin-Min; Zheng, Xiang-Xiang

    2015-01-01

    The Measurements for Team Functioning (MTF) database contains a series of student academic performance measurements obtained at a national university in Taiwan. The measurements are acquired from unit tests and homework tests performed during a core mechanical engineering course, and provide an objective means of assessing the functioning of…

  3. Database Replication Prototype

    OpenAIRE

    Vandewall, R.

    2000-01-01

    This report describes the design of a Replication Framework that facilitates the implementation and comparison of database replication techniques. Furthermore, it discusses the implementation of a Database Replication Prototype and compares the performance measurements of two replication techniques based on the Atomic Broadcast communication primitive: pessimistic active replication and optimistic active replication. The main contributions of this report can be split into four parts....
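
    Both techniques compared in the report rest on the same premise: if an atomic broadcast delivers the same operations in the same total order to every replica, deterministic application keeps the replicas consistent. A minimal illustrative sketch of that premise (not the prototype's actual framework):

```python
# Active replication sketch: the atomic broadcast is modeled as a single
# shared, totally ordered list of operations. Each replica applies that
# sequence deterministically, so all replicas converge to the same state.
class Replica:
    def __init__(self):
        self.data = {}
        self.applied = 0

    def deliver(self, log):
        # Apply any operations not yet seen, in broadcast order.
        for op, key, value in log[self.applied:]:
            if op == "put":
                self.data[key] = value
            elif op == "del":
                self.data.pop(key, None)
        self.applied = len(log)

log = []                      # the totally ordered broadcast log
r1, r2 = Replica(), Replica()

log.append(("put", "x", 1))
r1.deliver(log)               # replicas may lag behind one another...
log.append(("put", "y", 2))
log.append(("del", "x", None))
r1.deliver(log)               # ...but applying the same order converges
r2.deliver(log)
```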

  4. Aerobic Fitness and Cognitive Development: Event-Related Brain Potential and Task Performance Indices of Executive Control in Preadolescent Children

    Science.gov (United States)

    Hillman, Charles H.; Buck, Sarah M.; Themanson, Jason R.; Pontifex, Matthew B.; Castelli, Darla M.

    2009-01-01

    The relationship between aerobic fitness and executive control was assessed in 38 higher- and lower-fit children (M[subscript age] = 9.4 years), grouped according to their performance on a field test of aerobic capacity. Participants performed a flanker task requiring variable amounts of executive control while event-related brain potential…

  5. Treatment performances of French constructed wetlands: results from a database collected over the last 30 years.

    Science.gov (United States)

    Morvannou, A; Forquet, N; Michel, S; Troesch, S; Molle, P

    2015-01-01

    Approximately 3,500 constructed wetlands (CWs) provide raw wastewater treatment in France for small communities. Built during the past 30 years, most consist of two vertical flow constructed wetlands (VFCWs) in series (stages). Many configurations exist, with systems associated with horizontal flow filters or waste stabilization ponds, vertical flow with recirculation, partially saturated systems, etc. A database analyzed 10 years earlier summarized the global performance data for the classical French system. This paper provides a similar analysis of performance data from 415 full-scale two-stage VFCWs, drawn from an improved database expanded with monitoring data available from Irstea and the French technical department. Trends presented in the first study are confirmed, exhibiting high chemical oxygen demand (COD), total suspended solids (TSS) and total Kjeldahl nitrogen (TKN) removal rates (87%, 93% and 84%, respectively). Typical concentrations at the second-stage outlet are 74 mgCOD L(-1), 17 mgTSS L(-1) and 11 mgTKN L(-1). Pollutant removal performances are summarized in relation to the loads applied at the first treatment stage. While COD and TSS removal rates remain stable over the range of applied loads, the spread of TKN removal rates increases as applied loads increase.

  6. OCA Oracle Database 11g database administration I : a real-world certification guide

    CERN Document Server

    Ries, Steve

    2013-01-01

    Developed as a practical book, "Oracle Database 11g Administration I Certification Guide" will show you all you need to know to effectively excel at being an Oracle DBA, for both examinations and the real world. This book is for anyone who needs the essential skills to become an Oracle DBA, pass the Oracle Database Administration I exam, and use those skills in the real world to manage secure, high-performance, and highly available Oracle databases.

  7. On the use of databases about research performance

    NARCIS (Netherlands)

    Rodela, Romina

    2016-01-01

    The accuracy of interdisciplinarity measurements depends on how well the data is used for this purpose and whether it can meaningfully inform about work that crosses disciplinary domains. At present, there are no ad hoc databases compiling information only and exclusively about interdisciplinary

  8. Conceptual considerations for CBM databases

    Energy Technology Data Exchange (ETDEWEB)

    Akishina, E. P.; Aleksandrov, E. I.; Aleksandrov, I. N.; Filozova, I. A.; Ivanov, V. V.; Zrelov, P. V. [Lab. of Information Technologies, JINR, Dubna (Russian Federation); Friese, V.; Mueller, W. [GSI, Darmstadt (Germany)

    2014-07-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMSs utilized in physics experiments, including relational and object-oriented DBMSs as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMSs for their development, as well as use cases for the considered databases are suggested.

  9. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    Akishina, E.P.; Aleksandrov, E.I.; Aleksandrov, I.N.; Filozova, I.A.; Ivanov, V.V.; Zrelov, P.V.; Friese, V.; Mueller, W.

    2014-01-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMSs utilized in physics experiments, including relational and object-oriented DBMSs as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMSs for their development, as well as use cases for the considered databases are suggested.

  10. Performance analysis of a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with Optimistic Concurrency Control (OCC), an approximation for the transaction response-time distribution, and thus for the deadline miss probability, is obtained. Transactions arrive at the database according to a Poisson process. There is a limited number of
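
    The optimistic scheme analyzed above defers conflict detection to commit time: transactions read and write freely, then validate that nothing they read has changed. A minimal sketch of this backward-validation style of OCC (illustrative only, not the queueing model from the paper):

```python
# Optimistic concurrency control sketch: reads record the version they
# saw; commit succeeds only if all read versions are still current.
class Store:
    def __init__(self):
        self.values = {}
        self.versions = {}

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, txn, key):
        txn["reads"][key] = self.versions.get(key, 0)
        return self.values.get(key)

    def write(self, txn, key, value):
        txn["writes"][key] = value      # buffered until commit

    def commit(self, txn):
        # Validation: abort if any read version changed since it was read.
        for key, seen in txn["reads"].items():
            if self.versions.get(key, 0) != seen:
                return False            # abort; caller would restart
        for key, value in txn["writes"].items():
            self.values[key] = value
            self.versions[key] = self.versions.get(key, 0) + 1
        return True

db = Store()
t1, t2 = db.begin(), db.begin()
db.read(t1, "x"); db.write(t1, "x", 1)
db.read(t2, "x"); db.write(t2, "x", 2)
ok1 = db.commit(t1)   # first committer wins
ok2 = db.commit(t2)   # fails validation: "x" changed after t2 read it
```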

  11. Responsibility Towards The Customers Of Subscription-Based Software Solutions In The Context Of Using The Cloud Computing Technology

    Directory of Open Access Journals (Sweden)

    Bogdan Ștefan Ionescu

    2003-12-01

    Full Text Available The continuous transformation of contemporary society and of its IT environment has led to the emergence of cloud computing technology, which provides access to infrastructure and subscription-based software services. In the context of a growing number of providers of cloud software services, the paper aims to identify the perception of some current or potential users of cloud solutions, selected from among students enrolled in accounting master programs (professional or research) organized by the Bucharest University of Economic Studies, in terms of their expectations for cloud services, as well as the extent to which SaaS providers are responsible for the services they provide.

  12. Capacity subscription and its market design impact

    International Nuclear Information System (INIS)

    Doorman, Gerard; Solem, Gerd

    2005-04-01

    Capacity Subscription (CS) implies that consumers buy (subscribe to) a certain amount of capacity. Their demand is limited to this capacity when the total power system is short of capacity and the System Operator activates controllable Load Limiting Devices (LLDs). The objective is to maintain system security by avoiding involuntary load shedding. The report describes a market design with CS. As a case study, an analysis is made of the changes in the market design of the Nordic system that would be necessary to implement CS. First the present Nordic market design is described, with focus on the various market participants, their roles within various time horizons, and their interactions. It is then described how CS works, why it works and what is necessary to make it work. Subsequently the necessary changes in the Nordic market structure are described. The major changes are the installation of the LLDs, the establishment of the necessary infrastructure and rules to control the LLDs, and the establishment of a capacity market. The major rule is that the System Operator announces LLD activation when a shortage situation is expected. In the capacity market generators offer available capacity during system peak conditions, while consumers bid their need for capacity. Market participants are the same as on the spot market, while small consumers buy through retailers. Generators are obliged to offer the capacity sold on the capacity market on the spot market during LLD activation; failure to do so results in a penalty payment. The report further discusses issues like the need for verification procedures, import and export, generation pooling, the handling of small consumers, reserves, and a possible implementation path for CS. With respect to transmission constraints it is argued that market splitting can be a viable option. It is concluded that CS can be a possible solution to maintain generation adequacy, but there are some serious challenges. The

  13. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures, with data on complications if relevant, implants used if relevant, and a 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as gold standard. The positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number...

  14. Psychophysical studies of the performance of an image database retrieval system

    Science.gov (United States)

    Papathomas, Thomas V.; Conway, Tiffany E.; Cox, Ingemar J.; Ghosn, Joumana; Miller, Matt L.; Minka, Thomas P.; Yianilos, Peter N.

    1998-07-01

    We describe psychophysical experiments conducted to study PicHunter, a content-based image retrieval (CBIR) system. Experiment 1 studies the importance of using (1) semantic information, (2) memory of earlier input, and (3) relative, rather than absolute, judgements of image similarity. The target testing paradigm is used, in which a user must search for an image identical to a target. We find that the best performance comes from a version of PicHunter that uses only semantic cues, with memory and relative similarity judgements. Second best is the use of both pictorial and semantic cues, with memory and relative similarity judgements. Most reports of CBIR systems provide only qualitative measures of performance based on how similar retrieved images are to a target. Experiment 2 puts PicHunter into this context with a more rigorous test. We first establish a baseline for our database by measuring the time required to find an image that is similar to a target when the images are presented in random order. Although PicHunter's performance is measurably better than this, the test is weak because even random presentation of images yields reasonably short search times. This casts doubt on the strength of results given in other reports where no baseline is established.
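
    The relative-similarity idea probed in Experiment 1 can be sketched as a toy Bayesian-style target search: the system keeps a belief over which database image is the target and boosts candidates that lie closer to the image the user picks than to the rejected alternatives. The feature values and the weighting rule below are invented for illustration; PicHunter's actual update model is more elaborate.

```python
# Toy relative-similarity feedback: 1-D feature per image (hypothetical).
images = {"a": 0.1, "b": 0.5, "c": 0.9}
belief = {name: 1 / len(images) for name in images}   # uniform prior

def update(picked, shown):
    # The user judged `picked` more target-like than the rest of `shown`:
    # favor candidates nearer to `picked` than to each rejected image.
    for cand in belief:
        for other in shown:
            if other == picked:
                continue
            if abs(images[cand] - images[picked]) < abs(images[cand] - images[other]):
                belief[cand] *= 2.0       # crude likelihood weighting
    total = sum(belief.values())
    for cand in belief:                   # renormalize to a distribution
        belief[cand] /= total

update(picked="c", shown=["a", "c"])      # user prefers "c" over "a"
best = max(belief, key=belief.get)
```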

  15. The Danish Fetal Medicine Database

    Directory of Open Access Journals (Sweden)

    Ekelund CK

    2016-10-01

    Full Text Available Charlotte Kvist Ekelund,1 Tine Iskov Kopp,2 Ann Tabor,1 Olav Bjørn Petersen3 1Department of Obstetrics, Center of Fetal Medicine, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; 2Registry Support Centre (East) – Epidemiology and Biostatistics, Research Centre for Prevention and Health, Glostrup, Denmark; 3Fetal Medicine Unit, Aarhus University Hospital, Aarhus Nord, Denmark Aim: The aim of this study is to set up a database in order to monitor the detection rates and false-positive rates of first-trimester screening for chromosomal abnormalities and prenatal detection rates of fetal malformations in Denmark. Study population: Pregnant women with a first- or second-trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics and ultrasonic and biochemical variables are continuously sent from the fetal medicine units' Astraia databases to the central database via a web service. Information about the outcome of pregnancy (miscarriage, termination, live birth, or stillbirth) is received from the National Patient Register and National Birth Register and linked via the Danish unique personal registration number. Furthermore, results of all pre- and postnatal chromosome analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with input from already collected data. The database is valuable for assessing performance at a regional level and for comparing Danish performance with international results at a national level. Keywords: prenatal screening, nuchal translucency, fetal malformations, chromosomal abnormalities

  16. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which run on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on the LFC (LCG File Catalog) and managed with the interface provided by the LCG-developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions-handling functionality) and the distribution framework itself. Stress tests on the CNAF-hosted replica of the Conditions Database have been performed and the results will be summarized here

  17. Preparation and Characterization of a Polymeric Monolithic Column for Use in High-Performance Liquid Chromatography (HPLC)

    Science.gov (United States)

    Bindis, Michael P.; Bretz, Stacey Lowery; Danielson, Neil D.

    2011-01-01

    The high-performance liquid chromatography (HPLC) experiment, most often done in the undergraduate analytical instrumentation laboratory course, generally illustrates reversed-phase chromatography using a commercial C[subscript]18 silica column. To avoid the expense of periodic column replacement and introduce a choice of columns with different…

  18. Development of a personalized training system using the Lung Image Database Consortium and Image Database Resource Initiative Database.

    Science.gov (United States)

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. It provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool to enable trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
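
    The collaborative-filtering half of the CBCF idea can be illustrated with a toy predictor: a new trainee's difficulty rating for an unseen case is estimated as a similarity-weighted average of ratings by other trainees. The numbers and the similarity measure below are invented; the published algorithm additionally blends content features of the cases.

```python
# Trainee -> {case: difficulty rating on a 1..5 scale} (illustrative data).
ratings = {
    "t1": {"c1": 2, "c2": 4, "c3": 5},
    "t2": {"c1": 2, "c2": 4, "c3": 4},
    "t3": {"c1": 5, "c2": 1, "c3": 2},
}

def similarity(a, b):
    # Inverse mean absolute difference on co-rated cases (hypothetical metric).
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    diff = sum(abs(ratings[a][c] - ratings[b][c]) for c in shared) / len(shared)
    return 1.0 / (1.0 + diff)

def predict(trainee, case):
    # Similarity-weighted average of other trainees' ratings for the case.
    num = den = 0.0
    for other in ratings:
        if other != trainee and case in ratings[other]:
            w = similarity(trainee, other)
            num += w * ratings[other][case]
            den += w
    return num / den if den else None

ratings["new"] = {"c1": 2, "c2": 4}   # new trainee, two cases rated so far
pred = predict("new", "c3")           # high, like the similar trainees t1/t2
```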

  19. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: Trypanosomes Database, maintained by the National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Taxonomy: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list and query search are available.

  20. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    Directory of Open Access Journals (Sweden)

    Jordi Vilaplana

    2014-01-01

    Full Text Available Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  1. Database constraints applied to metabolic pathway reconstruction tools.

    Science.gov (United States)

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
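
    The kind of measurement underlying such conclusions, timing the same query workload under different server configurations, can be imitated on a small scale. The sketch below uses sqlite3 as a stand-in (not MySQL or HBase) and treats the presence of an index as the "tuned parameter"; the table name and sizes are invented.

```python
import sqlite3
import time

# Time a fixed lookup workload with and without an index on the queried
# column, as a miniature of comparing tuned vs untuned configurations.
def run_workload(indexed):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE gene (id INTEGER, name TEXT)")
    con.executemany("INSERT INTO gene VALUES (?, ?)",
                    ((i, f"g{i}") for i in range(20000)))
    if indexed:
        con.execute("CREATE INDEX idx_name ON gene(name)")
    start = time.perf_counter()
    for i in range(0, 20000, 40):         # 500 point lookups
        con.execute("SELECT id FROM gene WHERE name = ?",
                    (f"g{i}",)).fetchone()
    return time.perf_counter() - start

slow = run_workload(indexed=False)        # full table scan per lookup
fast = run_workload(indexed=True)         # B-tree lookup per query
```

    The same harness shape applies to real tuning experiments: hold the workload fixed, vary one configuration parameter, and compare wall-clock runtimes.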

  2. Comparison of the Frontier Distributed Database Caching System with NoSQL Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Non-relational "NoSQL" databases such as Cassandra and CouchDB are best known for their ability to scale to large numbers of clients spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects, is based on traditional SQL databases but also has the same high scalability and wide-area distributability for an important subset of applications. This paper compares the architectures, behavior, performance, and maintainability of the two different approaches and identifies the criteria for choosing which approach to prefer over the other.
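
    Frontier's central idea, placing a caching layer between many read-only clients and a traditional SQL database, can be sketched as a toy read-through cache keyed on the query text. The backend function below is a stand-in for a database round trip, not Frontier's actual HTTP-proxy implementation.

```python
# Read-through cache: repeated identical queries never reach the backend.
class CachingProxy:
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def query(self, sql):
        # The exact query text is the cache key, as in HTTP-level caching
        # of read-only requests.
        if sql in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[sql] = self.backend(sql)
        return self.cache[sql]

calls = []
def backend(sql):
    calls.append(sql)          # stands in for a round trip to the DB
    return f"result of {sql}"

proxy = CachingProxy(backend)
proxy.query("SELECT * FROM conditions WHERE run = 1")
proxy.query("SELECT * FROM conditions WHERE run = 1")  # served from cache
```

    Scalability follows from the hit rate: with many clients issuing the same read-only queries, only the misses load the central database.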

  3. The MAR databases: development and implementation of databases specific for marine metagenomics.

    Science.gov (United States)

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database for completely sequenced marine prokaryotic genomes, which represents a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of the level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. The Danish Testicular Cancer database

    DEFF Research Database (Denmark)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel

    2016-01-01

    AIM: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. STUDY POPULATION: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. MAIN VARIABLES AND DESCRIPTIVE DATA: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function...

  5. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
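
    The tolerance-window evaluation described above can be sketched as follows: a detected boundary counts as a true positive if it falls within ±tol of an as-yet-unmatched reference boundary, and the F1 score is computed from the resulting counts. The times and tolerances below are invented for illustration; they are not from the study.

```python
# Boundary scoring with a tolerance window (times in seconds).
def f1_with_tolerance(reference, detected, tol):
    matched = set()
    tp = 0
    for d in detected:
        for i, r in enumerate(reference):
            if i not in matched and abs(d - r) <= tol:
                matched.add(i)          # each reference matches at most once
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(reference) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

ref = [0.10, 0.42, 0.80]               # reference boundaries
det = [0.12, 0.55, 0.81]               # detected boundaries
loose = f1_with_tolerance(ref, det, tol=0.05)
strict = f1_with_tolerance(ref, det, tol=0.015)
```

    Widening the tolerance turns near misses into true positives, which is why the reported F1 score grows with the window size.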

  6. Motion database of disguised and non-disguised team handball penalty throws by novice and expert performers

    Directory of Open Access Journals (Sweden)

    Fabian Helm

    2017-12-01

    Full Text Available This article describes the motion database for a large sample (n = 2400) of 7-m penalty throws in team handball, including 1600 disguised throws. Throws were performed by both novice (n = 5) and expert (n = 5) penalty takers. The article reports the methods and materials used to capture the motion data. The database itself is accessible for download via the JLU Web Server and provides all raw files in a three-dimensional motion data format (.c3d). Additional information is given on the marker placement of the penalty taker, goalkeeper, and ball, together with details on the skill level and/or playing history of the expert group. The database was first used by Helm et al. (2017) [1] to investigate the kinematic patterns of disguised movements. Results of this analysis are reported and discussed in their article "Kinematic patterns underlying disguised movements: Spatial and temporal dissimilarity compared to genuine movement patterns" (doi:10.1016/j.humov.2017.05.010) [1]. Keywords: Motion capture data, Disguise, Expertise

  7. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    Science.gov (United States)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

    Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications that handle a single type of content, such as text, voice or images; bimodal databases, in contrast, allow two different types of content, such as audio-video or image-text, to be associated semantically. The generation of a bimodal audio-video database implies creating a connection between the multimedia content through the semantic relation that associates the actions of both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing increases semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool for the generation of applications for the semantic web.

  8. International Nuclear Safety Center (INSC) database

    International Nuclear Information System (INIS)

    Sofu, T.; Ley, H.; Turski, R.B.

    1997-01-01

    As an integral part of DOE's International Nuclear Safety Center (INSC) at Argonne National Laboratory, the INSC Database has been established to provide an interactively accessible information resource for the world's nuclear facilities and to promote free and open exchange of nuclear safety information among nations. The INSC Database is a comprehensive resource database aimed at a scope and level of detail suitable for safety analysis and risk evaluation for the world's nuclear power plants and facilities. It also provides an electronic forum for international collaborative safety research for the Department of Energy and its international partners. The database is intended to provide plant design information, material properties, computational tools, and results of safety analysis. Initial emphasis in data gathering is given to Soviet-designed reactors in Russia, the former Soviet Union, and Eastern Europe. The implementation is performed under the Oracle database management system, and the World Wide Web is used to serve as the access path for remote users. An interface between the Oracle database and the Web server is established through a custom designed Web-Oracle gateway which is used mainly to perform queries on the stored data in the database tables

  9. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Database Description General information of database Database name SKIP Stemcell Database...rsity Journal Search: Contact address http://www.skip.med.keio.ac.jp/en/contact/ Database classification Human Genes and Diseases Dat...abase classification Stemcell Article Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database...ks: Original website information Database maintenance site Center for Medical Genetics, School of medicine, ...lable Web services Not available URL of Web services - Need for user registration Not available About This Database Database

  10. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database...seful materials for their experimental research. The other, the “Database of Curated Plant Phenome” focusing

  11. Database on wind characteristics - Contents of database bank

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls in two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving...

  12. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. The data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 9 figs

  14. Mapping mHealth (mobile health) and mobile penetrations in sub-Saharan Africa for strategic regional collaboration in mHealth scale-up: an application of exploratory spatial data analysis.

    Science.gov (United States)

    Lee, Seohyun; Cho, Yoon-Min; Kim, Sun-Young

    2017-08-22

    Mobile health (mHealth), a term used for healthcare delivery via mobile devices, has gained attention as an innovative technology for better access to healthcare and support for the performance of health workers in the global health context. Despite a large expansion of mHealth across sub-Saharan Africa, regional collaboration for scale-up has made little progress over the last decade. As groundwork for strategic planning for regional collaboration, the study attempted to identify spatial patterns of mHealth implementation in sub-Saharan Africa using an exploratory spatial data analysis. In order to obtain comprehensive data on the total number of mHealth programs implemented between 2006 and 2016 in each of the 48 sub-Saharan African countries, we performed a systematic data collection from various sources, including the WHO eHealth Database, the World Bank Projects & Operations Database, and the USAID mHealth Database. An additional spatial analysis was performed for mobile cellular subscriptions per 100 people to suggest strategic regional collaboration for improving mobile penetration rates along with the mHealth initiative. Global Moran's I and the Local Indicator of Spatial Association (LISA) were calculated for mHealth programs and mobile subscriptions per 100 population to investigate spatial autocorrelation, which indicates the presence of local clustering and spatial disparities. From our systematic data collection, the total number of mHealth programs implemented in sub-Saharan Africa between 2006 and 2016 was 487 (the same program implemented in multiple countries was counted separately). Of these, the eastern region with 17 countries and the western region with 16 countries had 287 and 145 mHealth programs, respectively. Despite low levels of global autocorrelation, LISA enabled us to detect meaningful local clusters. Overall, the eastern part of sub-Saharan Africa shows high-high association for mHealth programs. 
As for mobile subscription rates per 100 population, the
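The global Moran's I statistic named in this record can be sketched in a few lines of pure Python; the chain adjacency matrix and the values below are illustrative toy data, not the study's 48-country dataset.

```python
def morans_i(values, weights):
    """Global Moran's I: I = (n / W) * sum_ij w_ij (x_i - xbar)(x_j - xbar)
    / sum_i (x_i - xbar)^2, where W is the sum of all weights."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_total = sum(sum(row) for row in weights)
    return (n / w_total) * (num / den)

# Four areas in a chain (rook adjacency); the first two have high values,
# so similar values sit next to each other and Moran's I comes out positive.
chain = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0]]
print(morans_i([1, 1, 0, 0], chain))  # 0.333..., i.e. positive clustering
```

A LISA decomposition assigns each area its own contribution to this sum, which is how local high-high clusters such as the eastern region mentioned above are flagged.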

  15. Influences of Developmental Contexts and Gender Differences on School Performance of Children and Adolescents

    Science.gov (United States)

    Diniz, Eva; da Rosa Piccolo, Luciane; de Paula Couto, Maria Clara Pinheiro; Salles, Jerusa Fumagalli; Helena Koller, Silvia

    2014-01-01

    This study investigated children and adolescents' school performance over time focusing on two variables that may influence it: developmental context and gender. The sample comprised 627 participants (M[subscript age] = 11.13, SD = 1.8), 51% of them female, from grade one to eight, living either with family (n = 474) or in care institutions…

  16. Suscripción de revistas en línea en la Argentina, 2007 On line journals subscription in Argentina, 2007

    Directory of Open Access Journals (Sweden)

    Susana Romanos de Tiratel

    2011-12-01

    Full Text Available This paper presents the findings of an investigation aimed at describing aspects of the process of online journal subscription in Argentina. From April to September 2007 a survey was conducted via electronic mail among Argentine special and academic libraries to inquire about the actors involved in the identification, evaluation, and selection of titles, in the negotiation and signing of the licences, the subscription methods, and the types of resources subscribed to by those information units. Librarians participated heavily throughout the process, but their involvement decreased in the stage of negotiating and signing the licencing agreements, where institutional directors and administrators had a more prominent role.

  17. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need to develop a PSA information database for performing a PSA has been growing rapidly: performing a PSA requires a large amount of data to analyze, to evaluate the risk, to trace the process of results, and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility of PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application in two areas: database design and data (document) services.
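The "cross link" idea described above, in which a database record points at the physical document, can be illustrated with a small SQLite sketch; the table, topic, and file names here are invented for the example and are not AIMS code.

```python
import pathlib
import sqlite3
import tempfile

# A stub file standing in for a physical PSA report on disk.
root = pathlib.Path(tempfile.mkdtemp())
(root / "ET-027.pdf").write_bytes(b"%PDF-1.4 stub")

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE psa_item (
                 id INTEGER PRIMARY KEY,
                 topic TEXT NOT NULL,
                 doc_path TEXT NOT NULL  -- cross link into the file system
               )""")
con.execute("INSERT INTO psa_item(topic, doc_path) VALUES (?, ?)",
            ("loss of offsite power", str(root / "ET-027.pdf")))

# Any query result can be followed straight to its source document.
path, = con.execute(
    "SELECT doc_path FROM psa_item WHERE topic LIKE 'loss%'").fetchone()
print(pathlib.Path(path).exists())  # True: the record resolves to its file
```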

  18. Dansk Hjerteregister--en klinisk database

    DEFF Research Database (Denmark)

    Abildstrøm, Steen Zabell; Kruse, Marie; Rasmussen, Søren

    2008-01-01

    INTRODUCTION: The Danish Heart Registry (DHR) keeps track of all coronary angiographies (CATH), percutaneous coronary interventions (PCI), coronary artery bypass grafting (CABG), and adult heart valve surgery performed in Denmark. DHR is a clinical database established in order to follow the activity...

  19. JAEA thermodynamic database for performance assessment of geological disposal of high-level and TRU wastes. Selection of thermodynamic data of selenium

    International Nuclear Information System (INIS)

    Doi, Reisuke; Kitamura, Akira; Yui, Mikazu

    2010-02-01

    Within the scope of the JAEA thermodynamic database project for performance assessment of geological disposal of high-level and TRU radioactive wastes, thermodynamic data on the inorganic compounds and complexes of selenium were selected. The selection was based on the thermodynamic database of selenium published by the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). Shortcomings of the OECD/NEA database noted by the authors are recorded in this report, and the thermodynamic data were then reviewed against the latest literature. Some thermodynamic values for iron selenides had not been selected by the OECD/NEA owing to low reliability, but because they are important for the performance assessment of geological disposal of radioactive wastes, we selected them as tentative values, specifying their reliability and the need for the values to be determined. (author)

  20. Summary of earthquake experience database

    International Nuclear Information System (INIS)

    1999-01-01

    Strong-motion earthquakes frequently occur throughout the Pacific Basin, where power plants or industrial facilities are included in the affected areas. By studying the performance of these earthquake-affected (or database) facilities, a large inventory of various types of equipment installations can be compiled that have experienced substantial seismic motion. The primary purposes of the seismic experience database are summarized as follows: to determine the most common sources of seismic damage, or adverse effects, on equipment installations typical of industrial facilities; to determine the thresholds of seismic motion corresponding to various types of seismic damage; to determine the general performance of equipment during earthquakes, regardless of the levels of seismic motion; to determine minimum standards in equipment construction and installation, based on past experience, to assure the ability to withstand anticipated seismic loads. To summarize, the primary assumption in compiling an experience database is that the actual seismic hazard to industrial installations is best demonstrated by the performance of similar installations in past earthquakes

  1. Assessing U.S. ESCO industry performance and market trends: Results from the NAESCO database project

    International Nuclear Information System (INIS)

    Osborn, Julie; Goldman, Chuck; Hopper, Nicole; Singer, Terry

    2002-01-01

    The U.S. Energy Services Company (ESCO) industry is often cited as the most successful model for the private sector delivery of energy-efficiency services. This study documents actual performance of the ESCO industry in order to provide policymakers and investors with objective information and customers with a resource for benchmarking proposed projects relative to industry performance. We have assembled a database of nearly 1500 case studies of energy-efficiency projects - the most comprehensive data set of the U.S. ESCO industry available. These projects include $2.55B of work completed by 51 ESCOs and span much of the history of this industry

  2. Diet History Questionnaire: Database Revision History

    Science.gov (United States)

    The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.

  3. Liver biopsy performance and histological findings among patients with chronic viral hepatitis: a Danish database study

    DEFF Research Database (Denmark)

    Christensen, Peer Brehm; Krarup, Henrik Bygum; Møller, Axel

    2007-01-01

    We investigated the variance of liver biopsy frequency and histological findings among patients with chronic viral hepatitis attending 10 medical centres in Denmark. Patients who tested positive for HBsAg or HCV-RNA were retrieved from a national clinical database (DANHEP), and demographic data, laboratory analyses, and liver biopsy results were collected. A total of 1586 patients were identified, of whom 69.7% had hepatitis C, 28.9% hepatitis B, and 1.5% were coinfected. In total, 771 (48.6%) had a biopsy performed (range 33.3-78.7%). According to the Metavir classification, 29.3% had septal fibrosis, and cirrhosis had developed in 23% after 20 y of infection. Age above 40 y was a better predictor of cirrhosis than elevated ALT. National database comparison may identify factors of importance for improved management of patients with chronic viral hepatitis.

  4. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks, but for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge about chemistry or the underlying database cartridge; this decreases application development time. Molecule Database Framework is written in Java, and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes support for multi-component compounds (mixtures), import and export of SD-files, and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions, and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and for import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
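The framework's Java API is not reproduced in the abstract; purely to illustrate the SD-file format it imports and exports, here is a stdlib-only Python sketch that splits SD-file text into records (each record ends with a `$$$$` line, and by convention its first line carries the molecule name).

```python
def parse_sdf(text):
    """Split SD-file text into records: each record is terminated by a
    line that is exactly '$$$$'; return (name, raw_block) pairs."""
    records, block = [], []
    for line in text.splitlines():
        if line.strip() == "$$$$":
            if block:
                records.append((block[0].strip(), "\n".join(block)))
            block = []
        else:
            block.append(line)
    return records

# Two minimal hand-written records (atom/bond blocks omitted for brevity).
sample = ("benzene\n\ncomment line\nM  END\n$$$$\n"
          "ethanol\n\ncomment line\nM  END\n$$$$\n")
print([name for name, _ in parse_sdf(sample)])  # ['benzene', 'ethanol']
```

Real SD-files also carry `> <tag>` data fields between `M  END` and `$$$$`; a full importer like the framework's would parse those as well.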

  5. Group Theory and Crystal Field Theory: A Simple and Rigorous Derivation of the Spectroscopic Terms Generated by the t[subscript 2g][superscript 2] Electronic Configuration in a Strong Octahedral Field

    Science.gov (United States)

    Morpurgo, Simone

    2007-01-01

    The principles of symmetry and group theory are applied to the zero-order wavefunctions associated with the strong-field t[subscript 2g][superscript 2] configuration and their symmetry-adapted linear combinations (SALC) associated with the generated energy terms are derived. This approach will enable students to better understand the use of…

  6. Performance Assessment of Dynaspeak Speech Recognition System on Inflight Databases

    National Research Council Canada - National Science Library

    Barry, Timothy

    2004-01-01

    .... To aid in the assessment of various commercially available speech recognition systems, several aircraft speech databases have been developed at the Air Force Research Laboratory's Human Effectiveness Directorate...

  7. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Yeast Interacting Proteins Database Database Description General information of database Database... name Yeast Interacting Proteins Database Alternative name - DOI 10.18908/lsdba.nbdc00742-000 Creator C...-ken 277-8561 Tel: +81-4-7136-3989 FAX: +81-4-7136-3979 E-mail : Database classif...s cerevisiae Taxonomy ID: 4932 Database description Information on interactions and related information obta...l Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13. External Links: Original website information Database

  8. Effects of spatial location and household wealth on health insurance subscription among women in Ghana.

    Science.gov (United States)

    Kumi-Kyereme, Akwasi; Amo-Adjei, Joshua

    2013-06-17

    This study compares ownership of health insurance among Ghanaian women with respect to wealth status and spatial location. We explore the overarching research question by employing geographic and proxy means targeting through interactive analysis of wealth status and spatial issues. The paper draws on the 2008 Ghana Demographic and Health Survey. Bivariate descriptive analysis coupled with a binary logistic regression estimation technique was used to analyse the data. By wealth status, the likelihood of purchasing insurance was significantly higher among respondents from the middle, richer and richest households compared to the poorest (reference category), and these differences widened more profoundly in the Northern areas after interacting wealth with zone of residence. Among women at the bottom of household wealth (poorest and poorer), there were no statistically significant differences in insurance subscription in all the areas. The results underscore the relevance of geographic and proxy means targeting in identifying populations who may be in need of special interventions as part of the efforts to increase enrolment as well as means of social protection against the vulnerable.
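The binary logistic regression named above can be sketched with plain stochastic gradient ascent in pure Python; the eight observations below are a made-up stand-in for the survey data, with x = 1 marking a wealthier household and y = 1 an insured respondent.

```python
import math

def fit_logit(X, y, lr=0.1, epochs=2000):
    """Logistic regression fitted by stochastic gradient ascent on the
    log-likelihood; w[0] is the intercept, w[1:] the coefficients."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(insured)
            err = yi - p                    # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

# Toy data: 1 of 4 poorer households (x=0) insured vs 3 of 4 wealthier (x=1).
X = [[0], [0], [0], [0], [1], [1], [1], [1]]
y = [0, 0, 0, 1, 0, 1, 1, 1]
w = fit_logit(X, y)
print(w[1] > 0)  # True: wealth raises the odds of holding insurance
```

The interaction terms used in the study would simply be extra columns of X (for example wealth multiplied by a zone indicator).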

  9. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The co...ntact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Databas...e English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tan...paku.org/tdb/ ) is opened. About This Database Database Description Download Lice...nse Update History of This Database Site Policy | Contact Us Update History of This Database - Trypanosomes Database | LSDB Archive ...

  10. Database computing in HEP

    International Nuclear Information System (INIS)

    Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples

  11. ZAGRADA - A New Radiocarbon Database

    International Nuclear Information System (INIS)

    Portner, A.; Obelic, B.; Krajcar Bornic, I.

    2008-01-01

    In the Radiocarbon and Tritium Laboratory at the Rudjer Boskovic Institute three different techniques for 14C dating have been used: Gas Proportional Counting (GPC), Liquid Scintillation Counting (LSC) and preparation of milligram-sized samples for AMS dating (Accelerator Mass Spectrometry). The use of several measurement techniques initiated a need for the development of a new relational database, ZAGRADA (Zagreb Radiocarbon Database), since the existing software package CARBO could not satisfy the requirements of parallel processing/use of several techniques. Using SQL procedures and constraints defined by primary and foreign keys, ZAGRADA enforces high data integrity and provides better performance in data filtering and sorting. Additionally, the new database for 14C samples is a multi-user oriented application that can be accessed from remote computers in the work group, thus providing better efficiency of laboratory activities. In order to facilitate data handling and processing in ZAGRADA, the graphical user interface is designed to be user-friendly and to perform various actions on data, like input, correction, searching, sorting and output to a printer. All invalid actions performed in the user interface are registered with a short textual description of the error, appearing on screen in message boxes. Unauthorized access is also prevented by login control, and each application window implements support to track the last changes made by the user. The implementation of a new database for 14C samples makes a significant contribution to scientific research performed in the Radiocarbon and Tritium Laboratory and will provide better and easier communication with customers. (author)
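ZAGRADA's actual schema is not given in the abstract; purely as an illustration of how primary and foreign keys enforce the data integrity described, here is a minimal SQLite sketch with invented table names, keyed to the laboratory's three counting techniques.

```python
import sqlite3

# Invented mini-schema: every sample row must reference one of the three
# counting techniques, so a typo in the technique id is rejected outright.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite leaves this off by default
con.execute("CREATE TABLE technique (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
con.execute("""CREATE TABLE sample (
                 id INTEGER PRIMARY KEY,
                 label TEXT NOT NULL,
                 technique_id INTEGER NOT NULL REFERENCES technique(id))""")
con.executemany("INSERT INTO technique(id, name) VALUES (?, ?)",
                [(1, "GPC"), (2, "LSC"), (3, "AMS")])
con.execute("INSERT INTO sample(label, technique_id) VALUES ('Z-0001', 3)")
try:
    # 99 is not a technique id: the foreign key constraint fires.
    con.execute("INSERT INTO sample(label, technique_id) VALUES ('Z-0002', 99)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # FOREIGN KEY constraint failed
```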

  12. Report on the database structuring project in fiscal 1996 related to the 'surveys on making databases for energy saving (2)'; 1996 nendo database kochiku jigyo hokokusho. Sho energy database system ka ni kansuru chosa 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    With the objective of supporting the promotion of energy conservation in such countries as Japan, China, Indonesia, the Philippines, Thailand, Malaysia, Taiwan and Korea, primary information on energy conservation in each country was collected and a database was structured. This paper summarizes the achievements in fiscal 1996. Based on the survey results of the database project to date, and on the various data collected, this fiscal year discussed structuring the database for its distribution and proliferation. In the discussion, the requirements for the functions to be possessed by the database, the items of data to be recorded, and the processing of the recorded data were put in order, with reference to propositions on the database circumstances. Demonstrations of the proliferation version of the database were performed in the Philippines, Indonesia and China. Three hundred CDs for distribution in each country were prepared. Adjustments and confirmation of the operation of the supplied computers were carried out, and operation briefing meetings were held in China and the Philippines. (NEDO)

  13. Large Science Databases – Are Cloud Services Ready for Them?

    Directory of Open Access Journals (Sweden)

    Ani Thakar

    2011-01-01

    Full Text Available We report on attempts to put an astronomical database - the Sloan Digital Sky Survey science archive - in the cloud. We find that it is currently very frustrating, if not impossible, to migrate a complex SQL Server database into current cloud service offerings such as Amazon (EC2) and Microsoft (SQL Azure). Certainly it is impossible to migrate a large database in excess of a TB, but even with (much) smaller databases, the limitations of cloud services make it very difficult to migrate the data to the cloud without making changes to the schema and settings that would degrade performance and/or make the data unusable. Preliminary performance comparisons show a large performance discrepancy with the Amazon cloud version of the SDSS database. These difficulties suggest that much work and coordination needs to occur between cloud service providers and their potential clients before science databases - not just large ones but even smaller databases that make extensive use of advanced database features for performance and usability - can successfully and effectively be deployed in the cloud. We describe a powerful new computational instrument that we are developing in the interim - the Data-Scope - that will enable fast and efficient analysis of the largest (petabyte-scale) scientific datasets.

  14. 37 CFR 383.3 - Royalty fees for public performances of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... SUBSCRIPTION SERVICES § 383.3 Royalty fees for public performances of sound recordings and the making of... regulations for all years 2007 and earlier. Such fee shall be recoupable and credited against royalties due in...

  15. Information needs of health technology assessment units and agencies in Spain.

    Science.gov (United States)

    Galnares-Cordero, Lorea; Gutiérrez-Ibarluzea, Iñaki

    2010-10-01

    The aim of this study was to analyze the information needs of Spanish health technology assessment (HTA) agencies and units to facilitate access to the resources they require to substantiate their reports. A questionnaire was designed and distributed among HTA bodies to ascertain the actual situation of subscriptions to information resources and what information specialists from these bodies considered would be the ideal subscription situation. Their information needs were then studied, and the resources that best met these needs were put forward. Following this definition, a subscriptions policy was adopted with suppliers and publishers. The survey showed that HTA bodies share a minimum of core subscriptions that includes open sources (MEDLINE, DARE) and sources that the government subscribes to for the health community (ISI Web of Science, Cochrane Library Plus). There was no common approach to determining which databases to subscribe to (UpToDate, EMBASE, Ovid EBMR, CINAHL, or ECRI). After identifying the information needs, a list of resources was proposed that would best cover these needs and, of these, subscription to the following was proposed: Scopus, Ovid EBMR, Clinical Evidence, DynaMed, ECRI, and Hayes. There are differences in the way that HTA agencies and units access the different resources of biomedical information. Combined subscription to several resources for documentation services was suggested as a way of resolving these differences.

  16. Evaluated and estimated solubility of some elements for performance assessment of geological disposal of high-level radioactive waste using updated version of thermodynamic database

    International Nuclear Information System (INIS)

    Kitamura, Akira; Doi, Reisuke; Yoshida, Yasushi

    2011-01-01

    Japan Atomic Energy Agency (JAEA) established a thermodynamic database (JAEA-TDB) for the performance assessment of geological disposal of high-level radioactive waste (HLW) and TRU waste. Twenty-five elements important for the performance assessment of geological disposal were selected for the database. JAEA-TDB enhances the reliability of solubility evaluation and estimation by selecting the latest and most reliable thermodynamic data available at present. We evaluated and estimated the solubility of the 25 elements in the simulated porewaters established in the 'Second Progress Report for Safety Assessment of Geological Disposal of HLW in Japan' using the JAEA-TDB and compared the results with those obtained using the previous thermodynamic database (JNC-TDB). It was found that most of the evaluated and estimated solubility values did not change drastically, but for some elements the solubility and the speciation of dominant aqueous species obtained with the JAEA-TDB differed from those obtained with the JNC-TDB. We discuss how to provide reliable solubility values for the performance assessment. (author)

  17. The Danish Testicular Cancer database.

    Science.gov (United States)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob

    2016-01-01

    The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, for example by identifying risk factors for relapse and treatment-related toxicity, and by focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information, with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire on late effects has been developed, with questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive in October 2014 were invited to fill in this questionnaire, which includes 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established; samples related to both the DaTeCa and the DMCG DaTeCa database are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data, resulting in a large and very comprehensive database for future studies on TC patients.

  18. Subscribers and newspaper subscriptions in Spain at the beginning of the 20th century. Notes from an Asturian perspective

    Directory of Open Access Journals (Sweden)

    Víctor Rodríguez Infiesta

    2008-12-01

    Full Text Available In the first decades of the 20th century, a high number of subscribers was, from the point of view of newspaper publishers, the best guarantee of a paper's stability. The system was not without drawbacks, however: it gave rise to numerous complaints and created a peculiar relationship with the subscriber. Through its various channels (delivery men, the post and public establishments, principally) the subscription service continually generated situations illustrative of the state of development of the Spanish press in years in which progress accelerated and deficiencies and limitations also became apparent. These limitations, particularly in an outlying region with mostly inadequate means of communication such as Asturias, could become one of the main factors delaying the establishment of a large press with mass readership.

  19. Evolution of the Configuration Database Design

    International Nuclear Information System (INIS)

    Salnikov, A.

    2006-01-01

    The BABAR experiment at SLAC has been successfully collecting physics data since 1999. One of the major parts of its online system is the configuration database, which provides other parts of the system with the configuration data necessary for data taking. Originally the configuration database was implemented in the Objectivity/DB ODBMS. Recently BABAR performed a successful migration of its event store from Objectivity/DB to ROOT, which prompted a complete phase-out of Objectivity/DB in all other BABAR databases. This required a complete redesign of the configuration database to hide implementation details and to support multiple storage technologies. In this paper we describe the process of the migration of the configuration database, its new design, and the implementation strategy and details.

  20. Data collection for improved follow-up of operating experiences. SKI damage database. Contents and aims with database

    International Nuclear Information System (INIS)

    Gott, Karen

    1997-01-01

    The Stryk database is presented and discussed in conjunction with the Swedish regulations concerning structural components in nuclear installations. The database acts as a reference library for reported cracks and degradation, and can be used to retrieve information about individual events or to compile statistics and perform trend analyses.

  1. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC) from the Information Service (IS) to LCG/COOL databases. ONASIC avoids possible backpressure from online database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an offline database with a history record. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests was performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN
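The decoupling pattern described for ONASIC (the producer writes into a local cache so a slow database server cannot apply backpressure) can be sketched with a queue and a background writer thread. This is a generic illustration only, not ONASIC's actual implementation; the "database" and all names are invented.

```python
import queue
import threading
import time

# Local cache between the online producer and the (slow) database:
# the producer enqueues and returns immediately, a background worker
# drains the queue at whatever rate the database sustains.
local_cache = queue.Queue()
stored = []  # stands in for the remote database

def writer():
    while True:
        item = local_cache.get()
        if item is None:          # shutdown sentinel
            break
        time.sleep(0.001)         # simulate a slow DB round-trip
        stored.append(item)
        local_cache.task_done()

worker = threading.Thread(target=writer)
worker.start()

# Producer side: no blocking, no backpressure
for i in range(100):
    local_cache.put(("condition", i))

local_cache.put(None)             # ask the worker to finish
worker.join()
```

The producer's latency is bounded by the local enqueue, while delivery order to the store is preserved by the FIFO queue.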

  2. Digital Dental X-ray Database for Caries Screening

    Science.gov (United States)

    Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila

    2016-06-01

    A standard database is an essential requirement for comparing the performance of image analysis techniques, and the main obstacle in dental image analysis is the lack of such an available image database, which this paper provides. Periapical dental X-ray images, suitable for analysis and approved by many dental experts, were collected. This type of dental radiograph imaging is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 periapical X-ray images covering upper and lower jaws. The digital dental database was constructed to provide a source for researchers to use and compare image analysis techniques and to improve the performance of each technique.

  3. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2017/02/27, Arabidopsis Phenome Database English archive site is opened; Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  4. Description of geological data in SKBs database GEOTAB

    International Nuclear Information System (INIS)

    Sehlstedt, S.; Stark, T.

    1991-01-01

    Since 1977 the Swedish Nuclear Fuel and Waste Management Co (SKB) has been performing a research and development programme for the final disposal of spent nuclear fuel. The purpose of the programme is to acquire the knowledge and data needed for the final disposal of radioactive waste. Measurements for the characterisation of geological, geophysical, hydrogeological and hydrochemical conditions are performed in specific site investigations as well as in geoscientific projects. Large data volumes, both raw data and results, have been produced since the start of the programme. Over the years these data were stored in various formats by the different institutions and companies that performed the investigations. It was therefore decided that all data from the research and development programme should be gathered in a database. The database, called GEOTAB, is a relational database comprising six main groups of data: background information, geological data, geophysical data, hydrological and meteorological data, hydrochemical data, and tracer tests. This report deals with geological data and describes the data flow from the measurements at the sites to the result tables in the database. The geological investigations have been divided into three categories, each stored separately in the database: surface fractures, core mapping, and chemical analyses. (authors)

  5. 37 CFR 382.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.2 Royalty fees for the... monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d)(2) and the...

  6. TWRS technical baseline database manager definition document

    International Nuclear Information System (INIS)

    Acree, C.D.

    1997-01-01

    This document serves as a guide for using the TWRS Technical Baseline Database Manager, a systems engineering (SE) support tool, in performing SE activities for the Tank Waste Remediation System (TWRS). It provides a consistent interpretation of the relationships between the TWRS Technical Baseline Database Manager software and present TWRS SE practices. The database manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager

  7. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2017/03/13, SKIP Stemcell Database English archive site is opened; 2013/03/29, SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  8. The STEP database through the end-users eyes--USABILITY STUDY.

    Science.gov (United States)

    Salunke, Smita; Tuleu, Catherine

    2015-08-15

    The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") was created to address the drug development community's shared need to access relevant excipient information effortlessly. Usability testing was performed to validate whether the database satisfies the needs of end-users. An evaluation framework was developed to assess usability. Participants performed scenario-based tasks and provided feedback and post-session usability ratings. Failure Mode Effect Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities: tasks such as limiting results, running complex queries, locating data, and registering for access were challenging. Three critical attributes were identified as affecting the usability of the STEP database: (1) content and presentation, (2) navigation and search features, and (3) potential end-users. The evaluation framework proved to be an effective method for evaluating database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations will be incorporated into the refinement of the database to improve its usability and increase user participation. Copyright © 2015 Elsevier B.V. All rights reserved.
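The FMEA prioritization mentioned in the abstract typically ranks problems by a Risk Priority Number, RPN = severity x occurrence x detection. A minimal sketch, with invented ratings (not the STEP study's actual data):

```python
# Hypothetical usability problems, each rated 1-10 for severity (S),
# occurrence (O) and detectability (D); RPN = S * O * D.
failure_modes = [
    {"problem": "complex queries hard to build", "S": 8, "O": 6, "D": 4},
    {"problem": "registration step confusing",   "S": 5, "O": 7, "D": 3},
    {"problem": "result limiting not obvious",   "S": 6, "O": 5, "D": 5},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Highest RPN first = highest-priority design fix
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in ranked:
    print(f'{fm["RPN"]:4d}  {fm["problem"]}')
```

The ranking, not the absolute RPN values, is what drives which design fixes are addressed first.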

  9. Neutron metrology file NMF-90. An integrated database for performing neutron spectrum adjustment calculations

    International Nuclear Information System (INIS)

    Kocherov, N.P.

    1996-01-01

    The Neutron Metrology File NMF-90 is an integrated database for performing neutron spectrum adjustment (unfolding) calculations. It contains four different adjustment codes, the dosimetry reaction cross-section library IRDF-90/NMF-G with covariance files, six input data sets for reactor benchmark neutron fields, and a number of utility codes for processing and plotting the input and output data. The package consists of 9 PC HD diskettes and manuals for the codes. It is distributed by the Nuclear Data Section of the IAEA on request, free of charge. About 10 MB of disk space is needed to install and run a typical reactor neutron dosimetry unfolding problem. (author). 8 refs

  10. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, installation of the designed database into the database system Oracle Database 10g Express Edition, and demonstration of the administration tasks in this database system. The design was verified by means of an access application developed for the purpose.

  11. The Barcelona Hospital Clínic therapeutic apheresis database.

    Science.gov (United States)

    Cid, Joan; Carbassé, Gloria; Cid-Caballero, Marc; López-Púa, Yolanda; Alba, Cristina; Perea, Dolores; Lozano, Miguel

    2017-09-22

    A therapeutic apheresis (TA) database helps to increase knowledge about the indications and types of apheresis procedures that are performed in clinical practice. The objective of the present report was to describe the type and number of TA procedures performed at our institution over a 10-year period, from 2007 to 2016. The TA electronic database was created by transferring patient data from electronic medical records and consultation forms into a Microsoft Access database developed exclusively for this purpose. Since 2007, prospective data from every TA procedure were entered in the database. A total of 5940 TA procedures were performed: 3762 (63.3%) plasma exchange (PE) procedures, 1096 (18.5%) hematopoietic progenitor cell (HPC) collections, and 1082 (18.2%) TA procedures other than PEs and HPC collections. The overall trend for the period was a progressive increase in the total number of TA procedures performed each year (from 483 TA procedures in 2007 to 822 in 2016). The trend for each procedure type differed: the number of PE and other TA procedures increased by 22% and 2818%, respectively, while the number of HPC collections decreased by 28%. The TA database helped us to increase our knowledge about the various indications and types of TA procedures performed in our current practice. We also believe that this database could serve as a model that other institutions can use to track service metrics. © 2017 Wiley Periodicals, Inc.
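Service metrics of the kind reported (procedures per year, broken down by type) fall out of a simple registry schema with a grouped query. A minimal sketch, with sqlite3 standing in for the Microsoft Access database used by the authors; the table, columns, and sample rows are invented:

```python
import sqlite3

# Hypothetical TA procedure registry: one row per procedure.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ta_procedure (
    id   INTEGER PRIMARY KEY,
    year INTEGER NOT NULL,
    kind TEXT NOT NULL        -- 'PE', 'HPC' or 'other'
)""")
rows = [(2007, "PE"), (2007, "HPC"), (2016, "PE"), (2016, "other"), (2016, "PE")]
con.executemany("INSERT INTO ta_procedure (year, kind) VALUES (?, ?)", rows)

# Service metric: number of procedures per year, by type.
counts = con.execute(
    "SELECT year, kind, COUNT(*) FROM ta_procedure "
    "GROUP BY year, kind ORDER BY year, kind"
).fetchall()
print(counts)
```

Keeping one row per procedure, rather than pre-aggregated tallies, is what lets the same registry answer later questions (trends, new breakdowns) without re-collecting data.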

  12. IAEA Post Irradiation Examination Facilities Database

    International Nuclear Information System (INIS)

    Jenssen, Haakon; Blanc, J.Y.; Dobuisson, P.; Manzel, R.; Egorov, A.A.; Golovanov, V.; Souslov, D.

    2005-01-01

    The number of hot cells in the world in which post irradiation examination (PIE) can be performed has diminished during the last few decades. This creates problems for countries that have nuclear power plants and require PIE for surveillance, safety and fuel development. With this in mind, the IAEA initiated the issue of a catalogue within the framework of a coordinated research programme (CRP), started in 1992 and completed in 1995, under the title 'Examination and Documentation Methodology for Water Reactor Fuel (ED-WARF-II)'. Within this programme, a group of technical consultants prepared a questionnaire to be completed by relevant laboratories, and from these questionnaires a catalogue was assembled. The catalogue lists the laboratories and PIE possibilities worldwide in order to make it more convenient to arrange and perform contractual PIE within hot cells on water reactor fuels and core components, e.g. structural and absorber materials. This catalogue was published as working material in the Agency in 1996. During 2002 and 2003, the catalogue was converted to a database and updated through questionnaires to the laboratories in the Member States of the Agency. This activity was recommended by the IAEA Technical Working Group on Water Reactor Fuel Performance and Technology (TWGFPT) at its plenary meeting in April 2001. The database consists of five main areas about PIE facilities: acceptance criteria for irradiated components; cell characteristics; PIE techniques; refabrication/instrumentation capabilities; and storage and conditioning capabilities. The content of the database represents the status of the listed laboratories as of 2003. With the database utilizing a uniform format for all laboratories and details of technique, it is hoped that the IAEA Member States will be able to use this catalogue to select laboratories most relevant to their particular needs. The database can also be used to compare the PIE capabilities worldwide with current and future...

  13. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: Open TG-GATEs Pathological Image Database. DOI: 10.18908/lsdba.nbdc00954-0. Contact: National Institute of Biomedical Innovation, 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan, TEL: 81-72-641-9826. Database classification: Toxicogenomics Database. Organism: Rattus norvegicus.

  14. The Danish Sarcoma Database

    DEFF Research Database (Denmark)

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

    AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, are to be registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral; diagnosis and treatment; tumor ...; International Classification of Diseases, tenth edition codes and TNM Classification of Malignant Tumours; and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring...

  15. Domain Regeneration for Cross-Database Micro-Expression Recognition

    Science.gov (United States)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database so that the re-generated target samples share the same or similar feature distributions with the original source samples. We can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
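The core idea (re-generating target samples so their feature distribution matches the source domain) can be illustrated with simple per-feature moment matching: shift and scale each target feature to the source mean and standard deviation. This is a generic domain-adaptation baseline for illustration only, not the authors' TSRG method; the toy feature vectors below are invented.

```python
import statistics

def align(target, source):
    """Re-generate target samples so each feature matches the
    source domain's per-feature mean and standard deviation."""
    n_feat = len(source[0])
    out = [list(x) for x in target]
    for j in range(n_feat):
        s_col = [x[j] for x in source]
        t_col = [x[j] for x in target]
        s_mu, s_sd = statistics.mean(s_col), statistics.pstdev(s_col)
        t_mu, t_sd = statistics.mean(t_col), statistics.pstdev(t_col)
        for x in out:
            # z-score in the target domain, re-expressed in source units
            x[j] = (x[j] - t_mu) / t_sd * s_sd + s_mu
    return out

source = [[0.0, 10.0], [2.0, 14.0], [4.0, 18.0]]   # "source database" features
target = [[100.0, 1.0], [110.0, 2.0], [120.0, 3.0]]  # very different scale
regen = align(target, source)
```

After alignment, a classifier trained on source-scaled features sees target samples on the same scale; methods like TSRG learn richer (non-linear, discriminative) re-generators than this two-moment version.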

  16. The magnet database system

    International Nuclear Information System (INIS)

    Baggett, P.; Delagi, N.; Leedy, R.; Marshall, W.; Robinson, S.L.; Tompkins, J.C.

    1991-01-01

    This paper describes the current status of MagCom, a central database of SSC magnet information that is available to all magnet scientists via network connections. The database has been designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will help magnet scientists to track and control the production process and to correlate the performance of magnets with the properties of their constituents

  17. Database Cancellation: The "Hows" and "Whys"

    Science.gov (United States)

    Shapiro, Steven

    2012-01-01

    Database cancellation is one of the most difficult tasks performed by a librarian. This may seem counter-intuitive but, psychologically, it is certainly true. When a librarian or a team of librarians has invested a great deal of time doing research, talking to potential users, and conducting trials before deciding to subscribe to a database, they…

  18. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    Science.gov (United States)

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase)   This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  19. JCZS: An Intermolecular Potential Database for Performing Accurate Detonation and Expansion Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Baer, M.R.; Hobbs, M.L.; McGee, B.C.

    1998-11-03

    Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs-Cowperthwaite-Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.

  20. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    Science.gov (United States)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal of this investigation is to increase scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system and a scalable analytics engine.
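The offloading pattern (copy data out of the production database into a separate analytics store and run the heavy report there) can be sketched in miniature. This is an illustration of the pattern only, not CERN's actual pipeline; sqlite3 stands in for both Oracle (production) and Hadoop (analytics), and the table and data are invented.

```python
import sqlite3

# "Production" database: must stay responsive, so reports avoid it.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE accel_log (ts INTEGER, device TEXT, value REAL)")
prod.executemany("INSERT INTO accel_log VALUES (?, ?, ?)",
                 [(1, "magnet_a", 0.5), (2, "magnet_a", 0.7), (3, "rf_b", 1.2)])

# "Analytics" store: receives a one-way offloaded copy of the table.
offline = sqlite3.connect(":memory:")
offline.execute("CREATE TABLE accel_log (ts INTEGER, device TEXT, value REAL)")
offline.executemany("INSERT INTO accel_log VALUES (?, ?, ?)",
                    prod.execute("SELECT ts, device, value FROM accel_log"))

# The expensive aggregation runs only against the offloaded copy.
report = offline.execute(
    "SELECT device, COUNT(*), MAX(value) FROM accel_log "
    "GROUP BY device ORDER BY device"
).fetchall()
```

In the real setting the copy step is a bulk transfer (e.g. periodic exports into Hadoop-friendly formats) and the report side scales out, but the separation of write path and report path is the same.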

  1. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs but a...

  2. Solid Waste Projection Model: Database User's Guide

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  3. Sorption, Diffusion and Solubility Databases for Performance Assessment; Base de Datos de Sorcion, Difusion y Solubilidad para la Evacuacion del Comportamiento

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Gutierrez, M [Ciemat, Madrid (Spain)

    2000-07-01

    This report presents deterministic and probabilistic databases for application in the performance assessment of a high-level radioactive waste disposal facility. The work includes a theoretical description of the sorption, diffusion and solubility phenomena of radionuclides in geological media. The report presents and compares the databases of different nuclear waste management agencies, describes the materials in the Spanish reference system, and gives the sorption, diffusion and solubility results for this system, with both deterministic and probabilistic approaches. The probabilistic approach is presented in the form of probability density functions (pdf). (Author) 52 refs.

  4. Development and Exploration of a Regional Stormwater BMP Performance Database to Parameterize an Integrated Decision Support Tool (i-DST)

    Science.gov (United States)

    Bell, C.; Li, Y.; Lopez, E.; Hogue, T. S.

    2017-12-01

    Decision support tools that quantitatively estimate the cost and performance of infrastructure alternatives are valuable for urban planners. Such a tool is needed to aid in planning stormwater projects to meet diverse goals such as the regulation of stormwater runoff and its pollutants, minimization of economic costs, and maximization of environmental and social benefits in the communities served by the infrastructure. This work gives a brief overview of an integrated decision support tool, called i-DST, that is currently being developed to serve this need. This presentation focuses on the development of a default database for the i-DST that parameterizes water quality treatment efficiency of stormwater best management practices (BMPs) by region. Parameterizing the i-DST by region will allow the tool to perform accurate simulations in all parts of the United States. A national dataset of BMP performance is analyzed to determine which of a series of candidate regionalizations explains the most variance in the national dataset. The data used in the regionalization analysis comes from the International Stormwater BMP Database and data gleaned from an ongoing systematic review of peer-reviewed and gray literature. In addition to identifying a regionalization scheme for water quality performance parameters in the i-DST, our review process will also provide example methods and protocols for systematic reviews in the field of Earth Science.
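The regionalization analysis described (picking the candidate grouping that explains the most variance in the national dataset) amounts to comparing, for each grouping, the ratio of between-group to total sum of squares. A minimal sketch with invented site values and groupings, not the actual i-DST data:

```python
import statistics

def variance_explained(values, groups):
    """Fraction of total variance explained by group membership
    (between-group sum of squares / total sum of squares)."""
    grand = statistics.mean(values)
    total_ss = sum((v - grand) ** 2 for v in values)
    by_group = {}
    for v, g in zip(values, groups):
        by_group.setdefault(g, []).append(v)
    between_ss = sum(len(vs) * (statistics.mean(vs) - grand) ** 2
                     for vs in by_group.values())
    return between_ss / total_ss

# Hypothetical BMP performance per monitoring site (e.g. % removal)
removal    = [80, 82, 60, 58, 81, 61]
by_climate = ["wet", "wet", "dry", "dry", "wet", "dry"]
by_state   = ["CO",  "CA",  "CO",  "CA",  "CO",  "CA"]

candidates = {"climate": by_climate, "state": by_state}
best = max(candidates, key=lambda k: variance_explained(removal, candidates[k]))
```

The grouping with the highest fraction (here the climate-based one, by construction of the toy data) would be adopted to parameterize the tool's regional defaults.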

  5. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of this database: 2010/03/29, Yeast Interacting Proteins Database English archive site is opened; 2000/12/4, Yeast Interacting Proteins Database (http://itolab.cb.k.u-tokyo.ac.jp/Y2H/) is released.

  6. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: RMOS. Contact: Research Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  7. How Should We Assess the Fit of Rasch-Type Models? Approximating the Power of Goodness-of-Fit Statistics in Categorical Data Analysis

    Science.gov (United States)

    Maydeu-Olivares, Alberto; Montano, Rosa

    2013-01-01

    We investigate the performance of three statistics, R [subscript 1], R [subscript 2] (Glas in "Psychometrika" 53:525-546, 1988), and M [subscript 2] (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005, "Psychometrika" 71:713-732, 2006) to assess the overall fit of a one-parameter logistic model…

  8. Update of a thermodynamic database for radionuclides to assist solubility limits calculation for performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Duro, L.; Grive, M.; Cera, E.; Domenech, C.; Bruno, J. (Enviros Spain S.L., Barcelona (ES))

    2006-12-15

This report presents and documents the thermodynamic database used in the assessment of the radionuclide solubility limits within the SR-Can Exercise. It is a supporting report to the solubility assessment. Thermodynamic data are reviewed for 20 radioelements from Groups A and B, lanthanides and actinides. The development of this database is partially based on the one prepared by PSI and NAGRA. Several changes, updates and checks for internal consistency and completeness of the reference NAGRA-PSI 01/01 database have been conducted where needed. These modifications mainly reflect information from the various experimental programmes and scientific literature available up to the end of 2003. Some of the discussions also refer to a previous database selection conducted by Enviros Spain on behalf of ANDRA, where the reader can find additional information. Where possible, in order to optimize the robustness of the database, the description of the solubility of the different radionuclides calculated using the reported thermodynamic database is tested against experimental data available in the open scientific literature. Where necessary, different procedures have been followed to estimate gaps in the database, especially accounting for temperature corrections. All the methodologies followed are discussed in the main text.

  9. Update of a thermodynamic database for radionuclides to assist solubility limits calculation for performance assessment

    International Nuclear Information System (INIS)

    Duro, L.; Grive, M.; Cera, E.; Domenech, C.; Bruno, J.

    2006-12-01

This report presents and documents the thermodynamic database used in the assessment of the radionuclide solubility limits within the SR-Can Exercise. It is a supporting report to the solubility assessment. Thermodynamic data are reviewed for 20 radioelements from Groups A and B, lanthanides and actinides. The development of this database is partially based on the one prepared by PSI and NAGRA. Several changes, updates and checks for internal consistency and completeness of the reference NAGRA-PSI 01/01 database have been conducted where needed. These modifications mainly reflect information from the various experimental programmes and scientific literature available up to the end of 2003. Some of the discussions also refer to a previous database selection conducted by Enviros Spain on behalf of ANDRA, where the reader can find additional information. Where possible, in order to optimize the robustness of the database, the description of the solubility of the different radionuclides calculated using the reported thermodynamic database is tested against experimental data available in the open scientific literature. Where necessary, different procedures have been followed to estimate gaps in the database, especially accounting for temperature corrections. All the methodologies followed are discussed in the main text.

  10. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results of phase II of Liquid Metal Reactor Design Technology Development under the mid- and long-term nuclear R and D program. IOC is a linkage control system between sub-projects, used to share and integrate research results for KALIMER. The 3D CAD Database provides a schematic design overview of KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents manages collected data and several documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER.

  11. Switching the Fermilab Accelerator Control System to a relational database

    International Nuclear Information System (INIS)

    Shtirbu, S.

    1993-01-01

The accelerator control system ('ACNET') at Fermilab uses an in-house database written in Assembly language. The database holds device information, which is mostly used to determine how to read/set devices and how to interpret alarms. This is a very efficient implementation, but it lacks the needed flexibility and forces applications to store data in private/shared files. This database is being replaced by an off-the-shelf relational database (Sybase). The major constraints on switching are the necessity to maintain/improve response time and to minimize changes to existing applications. Innovative methods are used to help achieve the required performance, and a layer-seven gateway simulates the old database for existing programs. The new database runs on a DEC ALPHA/VMS platform and provides better performance. The switch is also exposing problems with the data currently stored in the database and is helping in cleaning up erroneous data. The flexibility of the new relational database will facilitate many new applications in the future (e.g. a 3D presentation of device location). The new database is expected to fully replace the old database during this summer's shutdown.

  12. The Danish Hysterectomy and Hysteroscopy Database

    DEFF Research Database (Denmark)

    Topsøe, Märta Fink; Ibfelt, Else Helene; Settnes, Annette

    2016-01-01

AIM OF THE DATABASE: The steering committee of the Danish Hysterectomy and Hysteroscopy Database (DHHD) has defined the objectives of the database: firstly to reduce complications, readmissions and reoperations; secondly to specify the need for hospitalization after hysterectomy; thirdly ... DATA: Annually approximately 4,300 hysterectomies and 3,200 operative hysteroscopies are performed in Denmark. Since the establishment of the database in 2003, 50,000 hysterectomies have been registered. DHHD's nationwide cooperation and research have led to national guidelines and regimes. Annual national meetings and nationwide workshops have been organized. CONCLUSION: The use of vaginal and laparoscopic hysterectomy methods has increased during the past decade, and the overall complication rate and hospital stay have declined. The regional variation in operation methods has also decreased.

  13. Perception of quality of health delivery and health insurance subscription in Ghana.

    Science.gov (United States)

    Amo-Adjei, Joshua; Anku, Prince Justin; Amo, Hannah Fosuah; Effah, Mavis Osei

    2016-07-29

National health insurance schemes (NHIS) in developing countries, and perhaps in developed countries as well, are considered a pro-poor intervention, helping to bridge the financial burden of access to quality health care. Perceptions of the quality of health services can have immense impacts on enrolment. This paper shows how perception of service quality under Ghana's insurance programme contributes to health insurance subscription. The study used the 2014 Ghana Demographic and Health Survey (GDHS) dataset. Both descriptive proportions and binary logistic regression techniques were applied to generate the results that informed the discussion. Our results show that a high proportion of females (33 %) and males (35 %) felt that the quality of health care provided to holders of the NHIS card was worse. As a result, approximately 30 % of females and 22 % of males who perceived health care as worse for insurance-card holders did not own an insurance policy. While perceptions of differences in quality among females were significantly different (AOR = 0.453 [95 % CI = 0.375, 0.555]), among males the differences in perceptions of the quality of health services under the NHIS were independent in the multivariable analysis. Beyond perceptions of quality, being resident in the Upper West region was an important predictor of health insurance ownership for both males and females. For such a social and pro-poor intervention, investing in the quality of services to subscribers, especially women, who face enormous health risks in the reproductive period, can offer important gains in sustaining the scheme as well as offering affordable health services.

  14. Efficient Partitioning of Large Databases without Query Statistics

    Directory of Open Access Journals (Sweden)

    Shahidul Islam KHAN

    2016-11-01

Full Text Available An efficient way of improving the performance of a database management system is distributed processing. Distribution of data involves fragmentation (partitioning), replication, and allocation. Previous research provided partitioning based on empirical data about the type and frequency of queries. Such solutions are not suitable at the initial stage of a distributed database, when query statistics are not yet available. In this paper, I present a fragmentation technique, Matrix based Fragmentation (MMF), which can be applied at the initial stage as well as at later stages of a distributed database. Instead of using empirical data, I developed a matrix, Modified Create, Read, Update and Delete (MCRUD), to partition a large database properly. Allocation of fragments is done simultaneously in the proposed technique, so with MMF no additional complexity is added for allocating the fragments to the sites of a distributed database, as fragmentation is synchronized with allocation. The performance of a DDBMS can be improved significantly by avoiding frequent remote access and high data transfer among the sites. Results show that the proposed technique solves the initial partitioning problem of large distributed databases.
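The abstract does not give the exact form of the MCRUD matrix, but the general idea of matrix-driven fragmentation with simultaneous allocation can be sketched as follows. Everything here (the CRUD weights, predicates, sites, and counts) is hypothetical, for illustration only:

```python
# Hypothetical sketch of matrix-driven fragmentation: each row of the matrix
# is a predicate (candidate horizontal fragment), each column a site, and
# each cell a Create/Read/Update/Delete activity count. A fragment is
# allocated to the site with the highest weighted activity, so fragmentation
# and allocation happen in one pass, as in the MMF idea described above.

CRUD_WEIGHTS = {"C": 1.0, "R": 0.5, "U": 1.0, "D": 2.0}  # assumed weights

def weighted_activity(ops):
    """ops: dict like {'C': 3, 'R': 10} -> single activity score."""
    return sum(CRUD_WEIGHTS[k] * n for k, n in ops.items())

def fragment_and_allocate(matrix):
    """matrix: {predicate: {site: ops-dict}} -> {predicate: best site}."""
    allocation = {}
    for predicate, per_site in matrix.items():
        allocation[predicate] = max(
            per_site, key=lambda site: weighted_activity(per_site[site]))
    return allocation

matrix = {
    "age < 30":  {"site1": {"R": 40, "U": 2}, "site2": {"R": 5}},
    "age >= 30": {"site1": {"R": 3}, "site2": {"R": 30, "U": 6, "D": 1}},
}
placement = fragment_and_allocate(matrix)
# placement: {'age < 30': 'site1', 'age >= 30': 'site2'}
```

Each predicate defines one horizontal fragment, and the site chosen for it is fixed at the same time, which is the synchronization of fragmentation and allocation the paper emphasizes.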

  15. High Performance Protein Sequence Database Scanning on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Adrianto Wirawan

    2009-01-01

    Full Text Available The enormous growth of biological sequence databases has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing rapidly as well. The recent emergence of low cost parallel multicore accelerator technologies has made it possible to reduce execution times of many bioinformatics applications. In this paper, we demonstrate how the Cell Broadband Engine can be used as a computational platform to accelerate two approaches for protein sequence database scanning: exhaustive and heuristic. We present efficient parallelization techniques for two representative algorithms: the dynamic programming based Smith–Waterman algorithm and the popular BLASTP heuristic. Their implementation on a Playstation®3 leads to significant runtime savings compared to corresponding sequential implementations.

  16. JT-60 database system, 2

    International Nuclear Information System (INIS)

    Itoh, Yasuhiro; Kurihara, Kenichi; Kimura, Toyoaki.

    1987-07-01

The JT-60 central control system, ''ZENKEI'', collects control and instrumentation data relevant to discharges and device status data for plant monitoring. The engineering data amount to about 3 Mbytes per discharge shot. The ''ZENKEI'' control system, which consists of seven minicomputers for on-line real-time control, has little capacity for handling such a large amount of data for physical and engineering analysis. To solve this problem, it was planned to establish the experimental database on the front-end processor (FEP) of a general-purpose large computer in the JAERI Computer Center. A database management system (DBMS) has therefore been developed for creating the database during the shot interval. The engineering data are shipped up from ''ZENKEI'' to the FEP through a dedicated communication line after the shot. A hierarchical data model has been adopted in this database, which consists of data files with a tree structure of three keys: system, discharge type and shot number. The JT-60 DBMS provides data handling packages of subroutines for interfacing the database with users' application programs. Subroutine packages for supporting graphic processing and an access control function for security of the database are also provided in this DBMS. (author)
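The three-key tree structure described in the abstract (system, then discharge type, then shot number) can be pictured as a nested mapping. The concrete keys and file names below are hypothetical:

```python
# Sketch of the JT-60 hierarchical data model: data files addressed by the
# three-key path system -> discharge type -> shot number. The concrete keys
# and payload file names below are hypothetical illustrations.

database = {
    "diagnostics": {
        "ohmic": {1021: "file_d_o_1021.dat"},
        "NBI":   {1022: "file_d_n_1022.dat"},
    },
    "plant_control": {
        "ohmic": {1021: "file_p_o_1021.dat"},
    },
}

def lookup(db, system, discharge_type, shot):
    """Resolve a data file by the three hierarchical keys."""
    return db[system][discharge_type][shot]

f = lookup(database, "diagnostics", "NBI", 1022)
# f: 'file_d_n_1022.dat'
```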

  17. Solutions for medical databases optimal exploitation.

    Science.gov (United States)

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

The paper discusses methods to apply OLAP techniques to multidimensional databases that leverage an existing performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistic support for data warehousing techniques. The transformations have practically low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.
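Practical pre-aggregation, as referenced above, answers queries at a coarse level of a dimension hierarchy by rolling up stored lower-level aggregates instead of rescanning raw rows. A minimal sketch of the roll-up step, with a hypothetical diagnosis hierarchy and counts:

```python
# Sketch of pre-aggregation: patient counts are pre-aggregated at the
# diagnosis level, and a query at the coarser category level is answered by
# rolling up the stored aggregates rather than rescanning raw records.
# The hierarchy and counts are hypothetical.

# Child -> parent mapping in a diagnosis hierarchy. For aggregates to be
# safely reusable, each child must map to exactly one parent.
parent = {
    "type 1 diabetes": "diabetes",
    "type 2 diabetes": "diabetes",
    "asthma": "respiratory",
}

# Pre-aggregated patient counts at the leaf (diagnosis) level.
leaf_counts = {"type 1 diabetes": 12, "type 2 diabetes": 87, "asthma": 40}

def rollup(level_counts, parent_of):
    """Aggregate child-level counts up to the parent level."""
    out = {}
    for child, count in level_counts.items():
        out[parent_of[child]] = out.get(parent_of[child], 0) + count
    return out

category_counts = rollup(leaf_counts, parent)
# category_counts: {'diabetes': 99, 'respiratory': 40}
```

The transformations the paper mentions serve to repair hierarchies that are not well-formed in this sense (e.g., children with multiple parents), so that stored aggregates like `leaf_counts` can be reused correctly.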

  18. Creation of the NaSCoRD Database

    Energy Technology Data Exchange (ETDEWEB)

    Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jankovsky, Zachary Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stuart, William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and the type of information that can be accessed from them.

  19. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available. Database name: SAHG. Contact: Chie Motono, Tel: +81-3-3599-8067. Database classification: Structure Databases - Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center. Need for user registration: Not available.

  20. Investigating the Relationship between Sprint and Jump Performances with Velocity and Power Parameters during Propulsive Phase of the Loaded-Squat Jump Exercise

    Science.gov (United States)

    Can, Ibrahim

    2018-01-01

    The purpose of this study was to investigate the relationship between sprint and jump performance with velocity parameters in the loaded-squat jump exercise (SQ[subscript Loaded]). In accordance with this purpose, a total of 13 athletes competing in martial sports have participated in this study voluntarily. In this study, sprint tests, vertical…

  1. Active in-database processing to support ambient assisted living systems.

    Science.gov (United States)

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
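The active-database pattern described above (database triggers reacting to events, with processing kept inside the DBMS) can be illustrated with SQLite from Python. The schema and the bed-exit rule below are hypothetical simplifications, not the authors' actual implementation:

```python
# Minimal active-database sketch using SQLite triggers: an insert into the
# sensor-event table fires a trigger that records a derived "bed_exit" event
# inside the database itself, so raw sensor data never has to leave the DBMS.
# Table names and the detection rule are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sensor_events (sensor TEXT, state TEXT, ts INTEGER);
CREATE TABLE derived_events (kind TEXT, ts INTEGER);

-- Active rule: a bed pressure sensor switching to 'off' means a bed exit.
CREATE TRIGGER bed_exit AFTER INSERT ON sensor_events
WHEN NEW.sensor = 'bed_pressure' AND NEW.state = 'off'
BEGIN
    INSERT INTO derived_events VALUES ('bed_exit', NEW.ts);
END;
""")

con.execute("INSERT INTO sensor_events VALUES ('bed_pressure', 'on', 100)")
con.execute("INSERT INTO sensor_events VALUES ('bed_pressure', 'off', 230)")

rows = con.execute("SELECT kind, ts FROM derived_events").fetchall()
# rows: [('bed_exit', 230)]
```

A full-featured DBMS such as PostgreSQL would additionally allow stored procedures and in-database machine learning extensions, which is the in-database processing the paper relies on.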

  2. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available. Database name: PSCDB. Contact: Takayuki Amemiya, National Institute of Advanced Industrial Science and Technology (AIST). Database classification: Structure Databases - Protein structure. Need for user registration: Not available.

  3. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available. Database name: ASTRA. Database classification: Nucleotide Sequence Databases - Gene structure. Organism: Taxonomy ID: 3702; Oryza sativa (Taxonomy ID: 4530). Need for user registration: Not available.

  4. Incremental View Maintenance for Deductive Graph Databases Using Generalized Discrimination Networks

    Directory of Open Access Journals (Sweden)

    Thomas Beyhl

    2016-12-01

Full Text Available Nowadays, graph databases are employed when relationships between entities are in the scope of database queries, to avoid the performance-critical join operations of relational databases. Graph queries are used to query and modify graphs stored in graph databases. Graph queries employ graph pattern matching, which is NP-complete for subgraph isomorphism. To increase query performance, graph database views can be employed that keep ready answers, in the form of precalculated graph pattern matches, for frequently stated and complex graph queries. However, such graph database views must be kept consistent with the graphs stored in the graph database. In this paper, we describe how to use incremental graph pattern matching as a technique for maintaining graph database views. We present an incremental maintenance algorithm for graph database views which works for imperatively and declaratively specified graph queries. The evaluation shows that our maintenance algorithm scales as the number of nodes and edges stored in the graph database increases. Furthermore, our evaluation shows that our approach can outperform existing approaches for the incremental maintenance of graph query results.
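A toy illustration of the incremental idea (not the authors' discrimination-network algorithm): instead of re-running the pattern match after every edge insertion, a materialized match set is updated with only the matches the new edge can complete. The pattern here, all two-edge paths, is a deliberately simple stand-in:

```python
# Toy incremental maintenance of a graph-pattern view: the view materializes
# all two-edge paths a->b->c. On edge insertion, only matches involving the
# new edge are added, instead of recomputing the whole view from scratch.

class PathView:
    def __init__(self):
        self.succ = {}        # adjacency: node -> set of successor nodes
        self.matches = set()  # materialized view: all (a, b, c) paths

    def insert_edge(self, u, v):
        # New paths ending with (u, v): a -> u -> v for every a with a->u.
        for a, outs in self.succ.items():
            if u in outs:
                self.matches.add((a, u, v))
        # New paths starting with (u, v): u -> v -> c for every existing v->c.
        for c in self.succ.get(v, ()):
            self.matches.add((u, v, c))
        self.succ.setdefault(u, set()).add(v)

g = PathView()
g.insert_edge("a", "b")
g.insert_edge("b", "c")   # incrementally completes the path a->b->c
g.insert_edge("c", "d")   # incrementally completes the path b->c->d
# g.matches: {('a', 'b', 'c'), ('b', 'c', 'd')}
```

Each insertion touches only matches adjacent to the new edge, which is why incremental maintenance can scale where full recomputation does not.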

  5. Microwave-Assisted Synthesis of Red-Light Emitting Au Nanoclusters with the Use of Egg White

    Science.gov (United States)

    Tian, Jinghan; Yan, Lei; Sang, Aohua; Yuan, Hongyan; Zheng, Baozhan; Xiao, Dan

    2014-01-01

    We developed a simple, cost-effective, and eco-friendly method to synthesize gold nanoclusters (AuNCs) with red fluorescence. The experiment was performed using HAuCl[subscript 4], egg white, Na[subscript 2]CO[subscript 3] (known as soda ash or washing soda), and a microwave oven. In our experiment, fluorescent AuNCs were prepared within a…

  6. Reactome graph database: Efficient access to complex pathway data

    Science.gov (United States)

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
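The join-traversal cost the abstract refers to can be seen in miniature: in a relational layout each hop over an edge table is another join (here simulated as a full scan), whereas a graph store follows adjacency directly. The entities and edges below are hypothetical, not actual Reactome data:

```python
# Toy contrast between join-style traversal (scan an edge table per hop, as
# an unindexed relational query plan would) and pointer-style traversal
# (follow an adjacency map, as a graph database does). Data is hypothetical.

edges = [("pathwayA", "reaction1"), ("reaction1", "proteinX"),
         ("pathwayA", "reaction2"), ("reaction2", "proteinY")]

def neighbors_join(frontier):
    """One hop via a full scan of the edge table (join-like)."""
    return {dst for src, dst in edges for node in frontier if src == node}

# Graph-style: build adjacency once, then each hop is a direct lookup.
adj = {}
for src, dst in edges:
    adj.setdefault(src, set()).add(dst)

def neighbors_graph(frontier):
    return {dst for node in frontier for dst in adj.get(node, ())}

hop1 = neighbors_graph({"pathwayA"})  # {'reaction1', 'reaction2'}
hop2 = neighbors_graph(hop1)          # {'proteinX', 'proteinY'}
assert neighbors_join({"pathwayA"}) == hop1  # same answer, different cost
```

For deep traversals over highly interconnected data, the per-hop cost difference compounds, which is consistent with the 93% average query-time reduction reported above.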

  7. Reactome graph database: Efficient access to complex pathway data.

    Directory of Open Access Journals (Sweden)

    Antonio Fabregat

    2018-01-01

    Full Text Available Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j as well as the new ContentService (REST API that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  8. Reactome graph database: Efficient access to complex pathway data.

    Science.gov (United States)

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  9. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available. Database name: RPD. Alternative name: Rice Proteome Database. Contact: Setsuko Komatsu, Institute of Crop Science, National Agriculture and Food Research Organization. Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Rice Proteome Database contains information on proteins entered in the Rice Proteome Database; the database is searchable by keyword.

  10. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available. Database name: PLACE. Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Reference: (1999) Vol. 27, No. 1: 297-300. Need for user registration: Not available.

  11. JNC thermodynamic database for performance assessment of high-level radioactive waste disposal system

    Energy Technology Data Exchange (ETDEWEB)

    Yui, Mikazu; Azuma, Jiro; Shibata, Masahiro [Japan Nuclear Cycle Development Inst., Tokai Works, Waste Isolation Research Division, Tokai, Ibaraki (Japan)

    1999-11-01

This report is a summary of the status, frozen datasets, and future tasks of the JNC (Japan Nuclear Cycle Development Institute) thermodynamic database (JNC-TDB) for assessing the performance of high-level radioactive waste in geological environments. The JNC-TDB development was carried out after the first progress report on geological disposal research in Japan (H-3). In the development, thermodynamic data (equilibrium constants at 25°C, I=0) for important radioactive elements were selected/determined based on original experimental data using different models (e.g., SIT, Pitzer). As a result, the reliability and traceability of the data for most of the important elements were improved over those of the PNC-TDB used in the H-3 report. For detailed information on data analysis and selections for each element, see the JNC technical reports listed in this document. (author)

  12. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

-incidence relationships. We instrument the ImG model with sets of optional and application-specific constraints which can be used to check the validity of meshes for specific classes of objects such as manifold, pseudo-manifold, and simplicial manifold. We conducted experiments to measure the performance of the graph database solution in processing mesh queries and compared it with the GrAL mesh library and a PostgreSQL database on synthetic and real mesh datasets. The experiments show that each system performs well on specific types of mesh queries; e.g., graph databases perform well on global path-intensive queries. In future work, we will investigate database operations for the ImG model and design a mesh query language.

  13. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available. Database name: JSNP. Creator affiliation: Japan Science and Technology Agency. Organism: Homo sapiens (Taxonomy ID: 9606). Database description: a database of about 197,000 polymorphisms in the Japanese population. Database maintenance site: Institute of Medical Science. Need for user registration: Not available.

  14. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available. Database name: RED. Alternative name: Rice Expression Database. Contact: Shoshi Kikuchi, Genome Research Unit. Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Article title: "Rice Expression Database: the gateway to rice functional genomics", Trends in Plant Science (2002) Dec; 7(12):563-564.

  15. Comparative performance measures of relational and object-oriented databases using High Energy Physics data

    International Nuclear Information System (INIS)

    Marstaller, J.

    1993-12-01

    The major experiments at the SSC are expected to produce up to 1 Petabyte of data per year. The use of database techniques can significantly reduce the time it takes to access data. The goal of this project was to test which underlying data model, the relational or the object-oriented, would be better suited for archival and accessing high energy data. We describe the relational and the object-oriented data model and their implementation in commercial database management systems. To determine scalability we tested both implementations for 10-MB and 100-MB databases using storage and timing criteria

  16. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Database...s - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database... services - Need for user registration - About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Database Description - ConfC | LSDB Archive ...

  17. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs).The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  18. Oracle database 12c the complete reference

    CERN Document Server

    Bryla, Bob

    2014-01-01

    Maintain a scalable, highly available enterprise platform and reduce complexity by leveraging the powerful new tools and cloud enhancements of Oracle Database 12c. This authoritative Oracle Press guide offers complete coverage of installation, configuration, tuning, and administration. Find out how to build and populate Oracle databases, perform effective queries, design applications, and secure your enterprise data

  19. Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment

    Science.gov (United States)

    2012-09-01

    Suspicious Activity Reporting Initiative ODNI Office of the Director of National Intelligence OSINT Open Source Intelligence PERF Police Executive...Fusion centers are encouraged to explore all available information sources to enhance the intelligence analysis process. It follows then that fusion...WSIC also utilizes ACCURINT, a web-based, subscription service. ACCURINT searches open source information and is able to collect and collate

  20. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
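    The trigger-based event detection described above can be sketched with a small, self-contained example. This is an illustrative sketch only, not the paper's implementation (which the abstract does not detail): it uses Python's built-in sqlite3 module, and the table names, trigger name, and sensor data are all invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bed_sensor (ts INTEGER, occupied INTEGER);
    CREATE TABLE bed_exit_events (ts INTEGER);
    -- Active-database behaviour: a trigger fires inside the DBMS when the
    -- bed sensor transitions from occupied to empty (a "bed exit").
    CREATE TRIGGER detect_bed_exit
    AFTER INSERT ON bed_sensor
    WHEN NEW.occupied = 0
    BEGIN
        INSERT INTO bed_exit_events
        SELECT NEW.ts
        WHERE (SELECT occupied FROM bed_sensor
               WHERE ts < NEW.ts ORDER BY ts DESC LIMIT 1) = 1;
    END;
""")

# Simulated night-time sensor stream: occupied, then two exits.
for ts, occupied in [(1, 1), (2, 1), (3, 0), (4, 0), (5, 1), (6, 0)]:
    conn.execute("INSERT INTO bed_sensor VALUES (?, ?)", (ts, occupied))
conn.commit()

bed_exits = [row[0] for row in
             conn.execute("SELECT ts FROM bed_exit_events ORDER BY ts")]
print(bed_exits)  # timestamps where occupancy flipped from 1 to 0
```

    Because the detection runs inside the database engine, the raw sensor rows never need to leave the DBMS, which is the privacy and performance argument the abstract makes.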

  1. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)
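    The scatter-gather execution style of such scale-out engines can be illustrated with a minimal sketch. This is not CERN's setup: the partitioned "accelerator log" data, the scan function, and the thread-based parallelism are invented stand-ins for how an MPP engine such as Impala pushes partial aggregation down to data partitions and then merges the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a partitioned accelerator-log table:
# each partition holds (timestamp, device, value) rows.
partitions = [
    [(1, "magnet", 4.2), (2, "rf", 1.1)],
    [(3, "magnet", 4.4), (4, "rf", 1.0)],
    [(5, "magnet", 4.6)],
]

def scan(partition, device):
    """Partition-local work: filter and pre-aggregate (the 'map' side)."""
    values = [v for _, d, v in partition if d == device]
    return sum(values), len(values)

# Scatter the scan across partitions, then gather the partial aggregates,
# mirroring how a distributed engine computes AVG(value) for one device.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(lambda p: scan(p, "magnet"), partitions))

total = sum(s for s, _ in partials)
count = sum(n for _, n in partials)
avg_magnet = total / count
print(avg_magnet)
```

    The key property is that each partition returns only a small partial aggregate, so the coordinator's merge cost stays flat as the data set grows.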

  2. Danish Colorectal Cancer Group Database.

    Science.gov (United States)

    Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H

    2016-01-01

    The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. All Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. The database has had >95% completeness in including patients with colorectal adenocarcinoma with >54,000 patients registered so far with approximately one-third rectal cancers and two-thirds colon cancers and an overrepresentation of men among rectal cancer patients. The stage distribution has been more or less constant until 2014 with a tendency toward a lower rate of stage IV and higher rate of stage I after introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been reduced from >7% in 2001-2003 to ... The database is a national population-based clinical database with high patient and data completeness for the perioperative period. The resolution of data is high for description of the patient at the time of diagnosis, including comorbidities, and for characterizing diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery.
The Danish Colorectal Cancer Group provides high-quality data and has been documenting an increase in short- and long

  3. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  4. The Danish Testicular Cancer database

    Directory of Open Access Journals (Sweden)

    Daugaard G

    2016-10-01

    Full Text Available Gedske Daugaard,1 Maria Gry Gundgaard Kier,1 Mikkel Bandak,1 Mette Saksø Mortensen,1 Heidi Larsson,2 Mette Søgaard,2 Birgitte Groenkaer Toft,3 Birte Engvad,4 Mads Agerbæk,5 Niels Vilstrup Holm,6 Jakob Lauritsen1 1Department of Oncology 5073, Copenhagen University Hospital, Rigshospitalet, Copenhagen, 2Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 3Department of Pathology, Copenhagen University Hospital, Rigshospitalet, Copenhagen, 4Department of Pathology, Odense University Hospital, Odense, 5Department of Oncology, Aarhus University Hospital, Aarhus, 6Department of Oncology, Odense University Hospital, Odense, Denmark Aim: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. Study population: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. Main variables and descriptive data: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive in October 2014 were invited to fill in this questionnaire including 160 validated questions

  5. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysi...

  6. Practice databases and their uses in clinical research.

    Science.gov (United States)

    Tierney, W M; McDonald, C J

    1991-04-01

    A few large clinical information databases have been established within larger medical information systems. Although they are smaller than claims databases, these clinical databases offer several advantages: accurate and timely data, rich clinical detail, and continuous parameters (for example, vital signs and laboratory results). However, the nature of the data vary considerably, which affects the kinds of secondary analyses that can be performed. These databases have been used to investigate clinical epidemiology, risk assessment, post-marketing surveillance of drugs, practice variation, resource use, quality assurance, and decision analysis. In addition, practice databases can be used to identify subjects for prospective studies. Further methodologic developments are necessary to deal with the prevalent problems of missing data and various forms of bias if such databases are to grow and contribute valuable clinical information.

  7. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  8. Nuclear data processing using a database management system

    International Nuclear Information System (INIS)

    Castilla, V.; Gonzalez, L.

    1991-01-01

    A database management system that permits the design of relational models was used to create an integrated database with experimental and evaluated nuclear data. A system that reduces the time and cost of processing was created for EC-type computers and compatibles. A set of programs was developed for converting calculated nuclear data output to the EXFOR format. A dictionary to perform retrospective searches in the ENDF database was also created.

  9. Determination of the Acid Dissociation Constant of a Phenolic Acid by High Performance Liquid Chromatography: An Experiment for the Upper Level Analytical Chemistry Laboratory

    Science.gov (United States)

    Raboh, Ghada

    2018-01-01

    A high performance liquid chromatography (HPLC) experiment for the upper level analytical chemistry laboratory is described. The students consider the effect of mobile-phase composition and pH on the retention times of ionizable compounds in order to determine the acid dissociation constant, K[subscript a], of a phenolic acid. Results are analyzed…

  10. The Danish national quality database for births

    DEFF Research Database (Denmark)

    Andersson, Charlotte Brix; Flems, Christina; Kesmodel, Ulrik Schiøler

    2016-01-01

    Aim of the database: The aim of the Danish National Quality Database for Births (DNQDB) is to measure the quality of the care provided during birth through specific indicators. Study population: The database includes all hospital births in Denmark. Main variables: Anesthesia/pain relief, continuous...... Medical Birth Registry. Registration to the Danish Medical Birth Registry is mandatory for all maternity units in Denmark. During the 5 years, performance has improved in the areas covered by the process indicators and for some of the outcome indicators. Conclusion: Measuring quality of care during...

  11. Analysis/design of tensile property database system

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, D. H.; Jeon, I.; Lyu, W. S.

    2001-01-01

    Constructing a database from the data produced in tensile experiments can broaden the application of test results. Basic data can also be retrieved easily from the database when preparing new experiments, and higher-quality results can be produced by comparison with previous data. Careful analysis and design are required before constructing such a database, so that the varied requirements of customers can be met with the best quality. In this work, the analysis and design were performed to develop a database for tensile extension properties.

  12. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  13. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  14. A high-energy nuclear database proposal

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.; UC Davis, CA

    2006-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from the Bevalac, AGS and SPS to RHIC and LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews. (author)

  15. Audit Database and Information Tracking System

    Data.gov (United States)

    Social Security Administration — This database contains information about the Social Security Administration's audits regarding SSA agency performance and compliance. These audits can be requested...

  16. Online Database Allows for Quick and Easy Monitoring and Reporting of Supplementary Feeding Program Performance: An Analysis of World Vision CMAM Programs (2006-2013)

    International Nuclear Information System (INIS)

    Emary, Colleen; Aidam, Bridget; Roberton, Tim

    2014-01-01

    Full text: Background: Despite the widespread implementation of interventions to address moderate acute malnutrition (MAM), the lack of robust monitoring systems has hindered evaluation of the effectiveness of approaches to prevent and treat MAM. Since 2006, World Vision (WV) has provided supplementary feeding to 280,518 children 6-59 months of age (U5) and 105,949 pregnant and lactating women (PLW) as part of Community Based Management of Acute Malnutrition (CMAM) programming. The Excel-based system initially used for monitoring individual site programs faced numerous challenges: it was time consuming, prone to human error, and lost data as a result of staff turnover, so the use of data to inform program performance was limited. In 2010, World Vision International (WVI)’s Nutrition Centre of Expertise (NCOE) established an online database to overcome these limitations. The aim of the database was to improve monitoring and reporting of WV’s CMAM programs. As of December 2013, the database has been rolled out in 14 countries: Burundi, Chad, DRC, Ethiopia, Kenya, Mali, Mauritania, Niger, Sudan, Pakistan, South Sudan, Somalia, Zimbabwe and Zambia. Methods: The database includes data on admissions (mid-upper arm circumference, weight for height, oedema, referral) and discharge outcomes (recovered, died, defaulted, non-recovered, referral) for Supplementary Feeding Programs (SFPs) for children U5 as well as PLWs. A quantitative analysis of the available data sets was conducted to identify issues with data quality and draw findings from the data itself. Variations in program performance as compared to Sphere standards were determined by country and aggregated over the 14 countries. In addition, time trend analyses were conducted to determine significant differences and seasonality effects. Results: Most data related to program admissions from 2010 to July 2013, though some retrospective program data was available from 2006 to 2009. The countries with the largest number

  17. Architecture of Automated Database Tuning Using SGA Parameters

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2012-05-01

    Full Text Available Business data always grows, from kilobytes to megabytes, gigabytes, terabytes, petabytes, and beyond. There is no way to avoid this increasing rate of data growth while a business is running. Because of this, database tuning has become a critical part of an information system. Tuning a database in a cost-effective manner is a growing challenge. The total cost of ownership (TCO) of information technology needs to be significantly reduced by minimizing people costs. In fact, mistakes in the operation and administration of information systems are the single most common reason for system outages and unacceptable performance [3]. One way of addressing the challenge of total cost of ownership is by making information systems more self-managing. A particularly difficult piece of the ambitious vision of making database systems self-managing is the automation of database performance tuning. In this paper, we explain the progress made thus far on this important problem; specifically, we propose an architecture and algorithm for it.
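    The feedback-loop character of automated tuning can be sketched in a few lines. This is a hypothetical sketch, not the paper's algorithm: the cache parameter stands in for one Oracle SGA component, and the diminishing-returns hit-ratio model is invented purely for illustration.

```python
# Self-tuning feedback loop: grow a buffer-cache knob while the observed
# hit ratio stays below target, within a fixed memory budget.

def observed_hit_ratio(cache_mb):
    # Invented workload model with diminishing returns:
    # more cache helps, but less and less.
    return cache_mb / (cache_mb + 200.0)

def tune_cache(cache_mb, target=0.9, budget_mb=4096, step_mb=128):
    history = [cache_mb]
    while (observed_hit_ratio(cache_mb) < target
           and cache_mb + step_mb <= budget_mb):
        cache_mb += step_mb          # knob adjustment step
        history.append(cache_mb)
    return cache_mb, history

final_mb, history = tune_cache(cache_mb=256)
print(final_mb, observed_hit_ratio(final_mb))
```

    A real self-tuning DBMS would replace the model with measured statistics and trade memory between several competing pools, but the observe-adjust-repeat loop is the same shape.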

  18. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  19. Query Processing and Interlinking of Fuzzy Object-Oriented Database

    OpenAIRE

    Shweta Dwivedi; Santosh Kumar

    2017-01-01

    Due to the many limitations and poor data handling of existing relational databases, software professionals and researchers have moved toward object-oriented databases, which are much better at handling real and complex real-world data, i.e., clear and crisp data, and can also perform large and complex queries effectively. In addition, a new database approach has been introduced, named the Fuzzy Object-Oriented Database (FOOD); it has all the ...
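    The core idea of fuzzy querying can be illustrated with a small sketch. This is not the FOOD system from the paper: the membership function, the alpha-cut threshold, and the records are all invented, but they show how a fuzzy predicate matches each object to a degree in [0, 1] rather than a crisp yes/no.

```python
def young(age):
    """Membership function for the fuzzy term 'young' (invented shape):
    fully 'young' up to 25, not 'young' at all from 45, linear between."""
    if age <= 25:
        return 1.0
    if age >= 45:
        return 0.0
    return (45 - age) / 20.0

people = [{"name": "Ana", "age": 22},
          {"name": "Bo", "age": 35},
          {"name": "Cy", "age": 50}]

def fuzzy_select(records, membership, alpha=0.4):
    """Return (name, degree) pairs whose degree passes the alpha-cut."""
    scored = [(r, membership(r["age"])) for r in records]
    return [(r["name"], d) for r, d in scored if d >= alpha]

matches = fuzzy_select(people, young)
print(matches)
```

    A crisp query ("age < 30") would drop Bo entirely; the fuzzy query keeps him with degree 0.5, which is the kind of graded answer a FOOD-style system returns.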

  20. High performance technique for database applications using a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.; Bonny, Talal; Salama, Khaled N.

    2012-01-01

    Hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm
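    The length-based routing idea can be sketched as follows. This is a hedged illustration, not the paper's implementation: both "back ends" here are the same pure-Python Smith-Waterman scorer, and the 64-character threshold is an invented example; the point is only the dispatch of short sequences to the CPU path and long ones to the (would-be) GPU batch.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Minimal Smith-Waterman local-alignment score (no traceback)."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

def align_database(query, database, gpu_threshold=64):
    """Dispatch: short subject sequences stay on the CPU; long ones would
    be batched for a GPU kernel (same scoring either way in this sketch)."""
    results = []
    for seq in database:
        backend = "gpu" if len(seq) >= gpu_threshold else "cpu"
        results.append((backend, smith_waterman(query, seq)))
    return results

results = align_database("GATTACA", ["GATTACA", "A" * 80])
print(results)
```

    Keeping short sequences off the GPU avoids paying kernel-launch and transfer overhead on work too small to amortize it, which is the inefficiency the abstract describes.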

  1. Radiation protection databases of nuclear safety regulatory authority

    International Nuclear Information System (INIS)

    Janzekovic, H.; Vokal, B.; Krizman, M.

    2003-01-01

    Radiation protection and nuclear safety of nuclear installations have a common objective, protection against ionising radiation. The operational safety of a nuclear power plant is evaluated using performance indicators as for instance collective radiation exposure, unit capability factor, unplanned capability loss factor, etc. As stated by WANO (World Association of Nuclear Operators) the performance indicators are 'a management tool so each operator can monitor its own performance and progress, set challenging goals for improvement and consistently compare performance with that of other plants or industry'. In order to make the analysis of the performance indicators feasible to an operator as well as to regulatory authorities a suitable database should be created based on the data related to a facility or facilities. Moreover, the international bodies found out that the comparison of radiation protection in nuclear facilities in different countries could be feasible only if the databases with well defined parameters are established. The article will briefly describe the development of international databases regarding radiation protection related to nuclear facilities. The issues related to the possible development of the efficient radiation protection control of a nuclear facility based on experience of the Slovenian Nuclear Safety Administration will be presented. (author)

  2. JAEA thermodynamic database for performance assessment of geological disposal of high-level and TRU wastes. Selection of thermodynamic data of cobalt and nickel

    International Nuclear Information System (INIS)

    Kitamura, Akira; Yui, Mikazu; Kirishima, Akira; Saito, Takumi; Shibutani, Sanae; Tochiyama, Osamu

    2009-11-01

    Within the scope of the JAEA thermodynamic database project for performance assessment of geological disposal of high-level and TRU wastes, the selection of thermodynamic data on the inorganic compounds and complexes of cobalt and nickel has been carried out. For cobalt, an extensive literature survey has been performed, and all the obtained literature has been carefully reviewed to select the thermodynamic data. Selection of thermodynamic data of nickel has been based on a thermodynamic database published by the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA), which has been carefully reviewed by the authors, and thermodynamic data have then been selected after surveying the latest literature. Based on the similarity of chemical properties between cobalt and nickel, complementary thermodynamic data of nickel and cobalt species expected under geological disposal conditions have been selected to complete the thermodynamic data set for the performance assessment of geological disposal of radioactive wastes. (author)

  3. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  4. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  5. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST has modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  6. Performance Improvement with Web Based Database on Library Information System of Smk Yadika 5

    Directory of Open Access Journals (Sweden)

    Pualam Dipa Nusantara

    2015-12-01

    Full Text Available The difficulty of managing data about the book collection in a library is a problem often faced by librarians, and it affects the quality of service. The collection was arranged and recorded in separate Word and Excel files, and transactions for borrowing and returning books had no integrated records. The library system presented here can manage the book collection and reduce the problems often experienced by library staff when serving students borrowing books, such as the frequent difficulty of tracking books that are still on loan. The system also records late fees and lost-book charges incurred by students (borrowers). The conclusion of this study is that library performance can be improved with a library system using a web database.
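    The integrated loan tracking described above can be sketched with a minimal schema. This is a hypothetical sketch, not the SMK Yadika 5 system: the tables, the sample data, and the flat per-day fee are all invented, using Python's built-in sqlite3 in place of the web database.

```python
import sqlite3
from datetime import date

# Minimal schema: one table of books, one of loans.
# An open loan (book still borrowed) has return_date NULL.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE loan (
        book_id INTEGER REFERENCES book(id),
        borrower TEXT,
        due_date TEXT,
        return_date TEXT
    );
""")
db.execute("INSERT INTO book VALUES (1, 'Database Systems')")
db.execute("INSERT INTO loan VALUES (1, 'student-42', '2015-11-01', NULL)")

def late_fee(due, today, fee_per_day=500):
    """Flat per-day late fee (the rate is an invented example value)."""
    days_late = (date.fromisoformat(today) - date.fromisoformat(due)).days
    return max(0, days_late) * fee_per_day

# Books still in a borrowed state: open loans joined to their titles.
open_loans = db.execute("""
    SELECT b.title, l.borrower, l.due_date
    FROM loan l JOIN book b ON b.id = l.book_id
    WHERE l.return_date IS NULL
""").fetchall()
fee = late_fee('2015-11-01', '2015-11-06')
print(open_loans, fee)
```

    Because loans and books live in one database, the "which books are still out" question becomes a single join instead of a manual reconciliation between Word and Excel files.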

  7. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  8. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  9. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    Existing popular approaches to building these services use either an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems. In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. To explore the viability of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the Reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built.

  10. Approximate spatio-temporal top-k publish/subscribe

    KAUST Repository

    Chen, Lisi

    2018-04-26

    Location-based publish/subscribe plays a significant role in mobile information disseminations. In this light, we propose and study a novel problem of processing location-based top-k subscriptions over spatio-temporal data streams. We define a new type of approximate location-based top-k subscription, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription, that continuously feeds users with relevant spatio-temporal messages by considering textual similarity, spatial proximity, and information freshness. Different from existing location-based top-k subscriptions, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription can automatically adjust the triggering condition by taking the triggering score of other subscriptions into account. The group filtering efficacy can be substantially improved by sacrificing the publishing result quality with a bounded guarantee. We conduct extensive experiments on two real datasets to demonstrate the performance of the developed solutions.

  11. Approximate spatio-temporal top-k publish/subscribe

    KAUST Repository

    Chen, Lisi; Shang, Shuo

    2018-01-01

    Location-based publish/subscribe plays a significant role in mobile information disseminations. In this light, we propose and study a novel problem of processing location-based top-k subscriptions over spatio-temporal data streams. We define a new type of approximate location-based top-k subscription, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription, that continuously feeds users with relevant spatio-temporal messages by considering textual similarity, spatial proximity, and information freshness. Different from existing location-based top-k subscriptions, Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription can automatically adjust the triggering condition by taking the triggering score of other subscriptions into account. The group filtering efficacy can be substantially improved by sacrificing the publishing result quality with a bounded guarantee. We conduct extensive experiments on two real datasets to demonstrate the performance of the developed solutions.
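
    The scoring idea behind such subscriptions can be sketched in Python. This is an illustrative sketch only: the weights, message/subscription field names, and the exponential freshness decay below are assumptions, not the paper's actual ATSK formulation.

```python
import math
import heapq

def score(msg, sub, now, alpha=0.4, beta=0.4, gamma=0.2, decay=0.01):
    """Combine textual similarity, spatial proximity, and freshness
    into one relevance score (weights are illustrative)."""
    # Textual similarity: Jaccard overlap of keyword sets.
    text = len(msg["keywords"] & sub["keywords"]) / len(msg["keywords"] | sub["keywords"])
    # Spatial proximity: inverse Euclidean distance, normalized to (0, 1].
    dist = math.hypot(msg["x"] - sub["x"], msg["y"] - sub["y"])
    spatial = 1.0 / (1.0 + dist)
    # Freshness: exponential decay with message age.
    fresh = math.exp(-decay * (now - msg["t"]))
    return alpha * text + beta * spatial + gamma * fresh

def top_k(messages, sub, k, now):
    """Return the k highest-scoring messages for one subscription."""
    return heapq.nlargest(k, messages, key=lambda m: score(m, sub, now))
```

    The approximate variant described in the abstract would additionally relax the triggering condition using the scores of other subscriptions, trading bounded result quality for group-filtering efficiency.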

  12. Microturbulence and Flow Shear in High-performance JET ITB Plasma; TOPICAL

    International Nuclear Information System (INIS)

    R.V. Budny; A. Andre; A. Bicoulet; C. Challis; G.D. Conway; W. Dorland; D.R. Ernst; T.S. Hahm; T.C. Hender; D. McCune; G. Rewoldt; S.E. Sharapov

    2001-01-01

    The transport, flow shear, and linear growth rates of microturbulence are studied for a Joint European Torus (JET) plasma with high central q in which an internal transport barrier (ITB) forms and grows to a large radius. The linear microturbulence growth rates of the fastest growing (most unstable) toroidal modes with high toroidal mode number are calculated using the GS2 and FULL gyrokinetic codes. These linear growth rates, gamma[subscript lin], are large, but the flow-shearing rates, gamma[subscript ExB] (dominated by the toroidal rotation contribution), are also comparably large when and where the ITB exists.

  13. Generic Entity Resolution in Relational Databases

    Science.gov (United States)

    Sidló, Csaba István

    Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
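
    The core idea of pushing entity-resolution work into the RDBMS can be illustrated with a minimal sketch. The table, columns, and blocking key below are hypothetical, and real GER algorithms handle fuzzy matching, not just a normalized exact key; the point is that grouping happens inside the database engine, not in application memory.

```python
import sqlite3

# In-memory stand-in for a customer table (schema is illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
con.executemany("INSERT INTO customer VALUES (?, ?, ?)", [
    (1, "Ann Kovacs", "ann@example.com"),
    (2, "A. Kovacs",  "ANN@example.com"),   # same person, different spelling
    (3, "Bela Toth",  "bela@example.com"),
])

def resolve_by_key(con):
    """Let the RDBMS do the heavy lifting: normalize the blocking key in
    SQL and group matching records into candidate entity clusters."""
    rows = con.execute("""
        SELECT lower(email) AS k, group_concat(id) AS ids
        FROM customer
        GROUP BY lower(email)
        HAVING count(*) > 1
    """).fetchall()
    return {k: sorted(int(i) for i in ids.split(",")) for k, ids in rows}
```

    Because the grouping runs as external-memory processing in the engine, the approach scales to tables far larger than available RAM.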

  14. Improvement of database on glass dissolution

    International Nuclear Information System (INIS)

    Hayashi, Maki; Sasamoto, Hiroshi; Yoshikawa, Hideki

    2008-03-01

    In geological disposal system, high-level radioactive waste (HLW) glass is expected to retain radionuclide for the long term as the first barrier to prevent radionuclide release. The advancement of its performance assessment technology leads to the reliability improvement of the safety assessment of entire geological disposal system. For this purpose, phenomenological studies for improvement of scientific understanding of dissolution/alteration mechanisms, and development of robust dissolution/alteration model based on the study outcomes are indispensable. The database on glass dissolution has been developed for supporting these studies. This report describes improvement of the prototype glass database. Also, this report gives an example of the application of the database for reliability assessment of glass dissolution model. (author)

  15. Management of virtualized infrastructure for physics databases

    International Nuclear Information System (INIS)

    Topurov, Anton; Gallerani, Luigi; Chatal, Francois; Piorkowski, Mariusz

    2012-01-01

    Demands for information storage of physics metadata are rapidly increasing, together with the requirements for its high availability. Most HEP laboratories are struggling to squeeze more from their computer centres and thus focus on virtualizing available resources. CERN started investigating database virtualization in early 2006, first by testing database performance and stability on native Xen. Since then we have been closely evaluating the constantly evolving functionality of virtualization solutions for the database and middle tier, together with the associated management applications: Oracle's Enterprise Manager and VM Manager. This session will detail our long experience in dealing with virtualized environments, focusing on the newest Oracle OVM 3.0 for x86 and on Oracle Enterprise Manager functionality for efficiently managing a virtualized database infrastructure.

  16. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular to search for similar objects in an own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database, highlighting the visual information which is common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
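
    A toy sketch of the bag-of-words ranking used on such a server. One-dimensional descriptors and a two-word vocabulary keep it short; real systems quantize high-dimensional local features (e.g. SIFT) against vocabularies of many thousands of visual words and use inverted files for scale.

```python
from collections import Counter
import math

def bow_vector(descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word and
    count word occurrences (1-D descriptors for brevity)."""
    words = [min(range(len(vocabulary)), key=lambda w: abs(vocabulary[w] - d))
             for d in descriptors]
    return Counter(words)

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_descriptors, database, vocabulary):
    """Order database images by bag-of-words similarity to the query."""
    q = bow_vector(query_descriptors, vocabulary)
    return sorted(database,
                  key=lambda img: -cosine(q, bow_vector(img["desc"], vocabulary)))
```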

  17. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available KAIKOcDNA is a nucleotide sequence database of the silkworm Bombyx mori (Taxonomy ID: 7091), maintained at the National Institute of Agrobiological Sciences (contact: Akiya Jouraku). Journal reference: G3 (Bethesda), Sep. 2013, vol. 9. Web services and user registration are not available.

  18. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database download page. First of all, please read the license of this database. Data can be obtained via simple search and download, or via FTP; the FTP server is sometimes jammed. If it is, access [here].

  19. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License to use the Arabidopsis Phenome Database (last updated: 2017/02/27). This license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International; if you use data from this database, please be sure to attribute it. A summary of the Creative Commons Attribution-Share Alike 4.0 International license is found here.

  20. Databases for BaBar Datastream Calibrations and Prompt Reconstruction Processes

    International Nuclear Information System (INIS)

    Bartelt, John E

    1998-01-01

    We describe the design of databases used for performing datastream calibrations in the BABAR experiment, involving data accumulated on multiple processors and possibly over several blocks of events (''ConsBlocks''). The database for tracking the history and status of the ConsBlocks, along with similar databases needed by ''Prompt Reconstruction'' are also described

  1. A Unit-Test Framework for Database Applications

    DEFF Research Database (Denmark)

    Christensen, Claus Abildgaard; Gundersborg, Steen; de Linde, Kristian

    The outcome of a test of an application that stores data in a database naturally depends on the state of the database. It is therefore important that test developers are able to set up and tear down database states in a simple and efficient manner. In existing unit-test frameworks, setting up database state is costly; by reusing states across tests, the setup work per test can be minimized. In addition, the reuse between unit tests can speed up the execution of test suites. A performance test on a medium-size project shows a 40% speed-up and an estimated 25% reduction in the number of lines of test code.
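
    The state-reuse idea can be sketched with Python's unittest and SQLite, assumed stand-ins for the paper's own fixture mechanism: build the database fixture once per test class, then roll each test back so every test starts from the same state.

```python
import sqlite3
import unittest

class DatabaseTestCase(unittest.TestCase):
    """Reuse one database state across tests: build the fixture once,
    then roll each test back so it always starts from that state."""

    @classmethod
    def setUpClass(cls):
        # autocommit mode: we manage transactions explicitly below
        cls.con = sqlite3.connect(":memory:", isolation_level=None)
        cls.con.execute("CREATE TABLE loan (book TEXT, borrower TEXT)")
        cls.con.execute("INSERT INTO loan VALUES ('SQL 101', 'ann')")

    def setUp(self):
        self.con.execute("BEGIN")   # every test runs in its own transaction

    def tearDown(self):
        self.con.rollback()         # undo the test's writes, keep the fixture

    def test_insert_is_isolated(self):
        self.con.execute("INSERT INTO loan VALUES ('NoSQL 101', 'bela')")
        n, = self.con.execute("SELECT count(*) FROM loan").fetchone()
        self.assertEqual(n, 2)

    def test_fixture_unchanged(self):
        n, = self.con.execute("SELECT count(*) FROM loan").fetchone()
        self.assertEqual(n, 1)
```

    Rolling back is much cheaper than rebuilding the schema and fixture data for each test, which is where the reported speed-up comes from.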

  2. Developing of database on nuclear power engineering and purchase of ORACLE system

    International Nuclear Information System (INIS)

    Liu Renkang

    1996-01-01

    This paper presents a point of view on the development of a database for nuclear power engineering and on the performance of the ORACLE database management system. ORACLE is a practical database system and a sound choice for purchase.

  3. Proposal for a High Energy Nuclear Database

    International Nuclear Information System (INIS)

    Brown, David A.; Vogt, Ramona

    2005-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac and AGS to RHIC to CERN-LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews

  4. Data Cleaning and Semantic Improvement in Biological Databases

    Directory of Open Access Journals (Sweden)

    Apiletti Daniele

    2006-12-01

    Full Text Available Public genomic and proteomic databases can be affected by a variety of errors. These errors may involve either the description or the meaning of data (namely, syntactic or semantic errors). We focus our analysis on the detection of semantic errors, in order to verify the accuracy of the stored information. In particular, we address the issue of data constraints and functional dependencies among attributes in a given relational database. Constraints and dependencies show semantics among attributes in a database schema, and knowledge of them may be exploited to improve data quality and integration in database design, and to perform query optimization and dimensional reduction.
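
    Checking a candidate functional dependency for violations reduces to grouping tuples by the left-hand side attribute. A minimal sketch, with hypothetical attribute names; real FD discovery over a database would also search the space of candidate dependencies:

```python
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Find violations of the functional dependency lhs -> rhs:
    any lhs value that maps to more than one rhs value."""
    seen = defaultdict(set)
    for row in rows:
        seen[row[lhs]].add(row[rhs])
    return {k: v for k, v in seen.items() if len(v) > 1}
```

    A tuple flagged here either reflects a genuine exception to the dependency or, as in the curated-database setting above, a semantic error worth inspecting.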

  5. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available AcEST is a database of EST sequences of Adiantum capillus-veneris (Taxonomy ID: 13818), maintained at Tokyo Metropolitan University (Tel: +81-42-677-1111, ext. 3654). Database classification: nucleotide sequence databases.

  6. The NAGRA/PSI thermochemical database: new developments

    International Nuclear Information System (INIS)

    Hummel, W.; Berner, U.; Thoenen, T.; Pearson, F.J.Jr.

    2000-01-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  7. The NAGRA/PSI thermochemical database: new developments

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, W.; Berner, U.; Thoenen, T. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Pearson, F.J.Jr. [Ground-Water Geochemistry, New Bern, NC (United States)

    2000-07-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  8. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License to use the SKIP Stemcell Database (last updated: 2017/03/13). This license specifies the terms regarding the use of this database and the requirements you must follow in using it. The license for this database is the Creative Commons Attribution-Share Alike 4.0 International; if you use data from this database, please be sure to attribute it. A summary of the Creative Commons Attribution-Share Alike 4.0 International license is found here.

  9. Distortion-Free Watermarking Approach for Relational Database Integrity Checking

    Directory of Open Access Journals (Sweden)

    Lancine Camara

    2014-01-01

    Full Text Available Nowadays, the Internet is becoming a common way of accessing databases. Such data are exposed to various types of attack aimed at confusing ownership proofing or defeating content protection. In this paper, we propose a new approach based on fragile zero watermarking for the authentication of numeric relational data. Contrary to some previous database watermarking techniques, which introduce distortions into the original database and may not preserve data usability constraints, our approach simply generates the watermark from the original database. First, the adopted method partitions the database relation into independent square matrix groups. Then, group-based watermarks are securely generated and registered with a trusted third party. Integrity verification is performed by computing the determinant and the diagonal's minor for each group. As a result, tampering can be localized down to the attribute-group level. Theoretical and experimental results demonstrate that the proposed technique is resilient against tuple insertion, tuple deletion, and attribute value modification attacks. Furthermore, comparison with a recent related effort shows that our scheme performs better in detecting multifaceted attacks.
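
    A minimal sketch of the determinant-based idea with 2 x 2 integer groups. The grouping, the choice of minor, and the plain tuple of marks are simplified assumptions; the paper's partitioning and secure registration with a third party are more involved.

```python
def watermark(values, n=2):
    """Partition a numeric attribute sequence into n x n matrices and
    derive a distortion-free mark from each group's determinant and a
    diagonal minor (n = 2 for brevity). The data itself is unchanged."""
    marks = []
    for i in range(0, len(values) - n * n + 1, n * n):
        a, b, c, d = values[i:i + n * n]
        det = a * d - b * c      # 2x2 determinant
        minor = d                # minor of the top-left diagonal element
        marks.append((det, minor))
    return marks

def verify(values, marks, n=2):
    """Recompute the marks; each mismatch localizes tampering to the
    index of the corresponding group of tuples."""
    recomputed = watermark(values, n)
    return [i for i, (old, new) in enumerate(zip(marks, recomputed)) if old != new]
```

    Because the watermark is derived from the data rather than embedded into it, verification is distortion-free: the original relation is never modified.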

  10. JAEA thermodynamic database for performance assessment of geological disposal of high-level and TRU wastes. Refinement of thermodynamic data for trivalent actinoids and samarium

    International Nuclear Information System (INIS)

    Kitamura, Akira; Fujiwara, Kenso; Yui, Mikazu

    2010-01-01

    Within the scope of the JAEA thermodynamic database project for performance assessment of geological disposal of high-level radioactive and TRU wastes, the refinement of the thermodynamic data for the inorganic compounds and complexes of trivalent actinoids (actinium(III), plutonium(III), americium(III) and curium(III)) and samarium(III) was carried out. Refinement of thermodynamic data for these elements was based on the thermodynamic database for americium published by the Nuclear Energy Agency in the Organisation for Economic Co-operation and Development (OECD/NEA). Based on the similarity of chemical properties among trivalent actinoids and samarium, complementary thermodynamic data for their species expected under the geological disposal conditions were selected to complete the thermodynamic data set for the performance assessment of geological disposal of radioactive wastes. (author)

  11. Design and implementation of the ITPA confinement profile database

    Energy Technology Data Exchange (ETDEWEB)

    Walters, Malcolm E-mail: malcolm.walters@ukaea.org.uk; Roach, Colin

    2004-06-01

    One key goal of the fusion program is to improve the accuracy of physics models in describing existing experiments, so as to make better predictions of the performance of future fusion devices. To support this goal, databases of experimental results from multiple machines have been assembled to facilitate the testing of physics models over a wide range of operating conditions and plasma parameters. One such database was the International Multi-Tokamak Profile Database. This database has more recently been substantially revamped to exploit newer technologies, and is now known as the ITPA confinement profile database http://www.tokamak-profiledb.ukaea.org.uk. The overall design of the updated system will be outlined and the implementation of the relational database part will be described in detail.

  12. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, built on Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid- and long-term nuclear R&D program. IOC is a linkage control system between sub-projects to share and integrate research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database manages the documents and reports accumulated over the course of the project.

  13. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for integrated management of liquid metal reactor design technology development, built on Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid- and long-term nuclear R&D program. IOC is a linkage control system between sub-projects to share and integrate research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database manages the documents and reports accumulated over the course of the project.

  14. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    A common workflow is to run the simulation, export the results to a data carrier file, and then process the results stored in the file using data-processing software. In this work, we propose saving the simulation results directly from the simulation tool to a computer database. We implemented a link between the discrete-event simulation tool and the database and performed a performance evaluation of three different open-source database systems. We show that, with the right choice of database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space, when compared to using simulation software built...
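
    Writing samples straight into a relational table, as proposed, might look like the following with SQLite (an assumed stand-in for the database systems the paper evaluates; table layout is illustrative):

```python
import sqlite3

class ResultRecorder:
    """Write simulation samples directly into a relational table
    instead of an intermediate results file."""

    def __init__(self, path=":memory:"):
        self.con = sqlite3.connect(path)
        self.con.execute(
            "CREATE TABLE IF NOT EXISTS sample "
            "(run TEXT, t REAL, metric TEXT, value REAL)")

    def record(self, run, t, metric, value):
        # One row per observed sample; the simulator calls this per event.
        self.con.execute("INSERT INTO sample VALUES (?, ?, ?, ?)",
                         (run, t, metric, value))

    def mean(self, run, metric):
        # Post-processing becomes a query instead of a file-parsing step.
        row = self.con.execute(
            "SELECT avg(value) FROM sample WHERE run = ? AND metric = ?",
            (run, metric)).fetchone()
        return row[0]
```

    Analysis then runs as SQL against the live table, skipping the export-to-file round trip entirely.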

  15. Comparison of performance indicators of different types of reactors based on ISOE database

    International Nuclear Information System (INIS)

    Janzekovic, H.; Krizman, M.

    2005-01-01

    The optimisation of the operation of a nuclear power plant (NPP) is a challenging issue because, besides general management issues, the risk associated with nuclear facilities must be considered. In order to optimise the radiation protection programmes in around 440 reactors in operation, with more than 500,000 monitored workers each year, the international exchange of performance indicators (PIs) related to radiation protection seems essential. These indicators are a function of the type of reactor as well as the age and the quality of the management of the reactor. In general, three main types of radiation protection PIs can be recognised: occupational exposure of workers, public exposure, and management of PIs related to radioactive waste. Occupational exposure can be efficiently studied using the ISOE database. The dependence of occupational exposure on different types of reactors, e.g. PWR and BWR, is given, analysed, and compared. (authors)

  16. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RPSD (Rice Protein Structure Database; DOI 10.18908/lsdba.nbdc00749-000) was created by Toshimasa Yamazaki at the National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Structure Databases - Protein structure.

  17. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available FANTOM5 is a database covering multiple organisms, including Rattus norvegicus (Taxonomy ID: 10116) and Macaca mulatta (Taxonomy ID: 9544). The database is maintained by the RIKEN Center for Life Science Technologies. Web services and user registration are not available.

  18. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
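
    The pilot-query idea can be sketched as follows. The callables and latency threshold are illustrative assumptions; the actual ATLAS utility library issues its pilot against Oracle servers at the Tier-1 sites.

```python
import time

def guarded_query(run_query, pilot_query, max_pilot_seconds=0.5):
    """Send a cheap pilot query first; submit the real (expensive) query
    only if the server answers the pilot quickly enough. A slow pilot is
    taken as a sign of server overload, so the caller should back off."""
    start = time.perf_counter()
    pilot_query()
    if time.perf_counter() - start > max_pilot_seconds:
        return None   # server looks overloaded: back off and retry later
    return run_query()
```

    As with pilot jobs on the Grid, the cheap probe lets each client avoid piling its heavy query onto a server that is already saturated.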

  19. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  20. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  1. CMS experience with online and offline Databases

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The CMS experiment is made of many detectors which in total sum up to more than 75 million channels. The online database stores the configuration data used to configure the various parts of the detector and bring it into all possible running states. The database also stores the conditions data: detector monitoring parameters of all channels (temperatures, voltages), detector quality information, beam conditions, etc. These quantities are used by experts to monitor the detector performance in detail; because they occupy a very large space in the online database, they cannot be used as-is for offline data reconstruction. For this, a "condensed" set of the full information, the "conditions data", is created and copied to a separate database used in offline reconstruction. The offline conditions database contains the alignment and calibration data for the various detectors. Conditions data sets are accessed by a tag and an interval of validity through the offline reconstruction program CMSSW, written in C++. Pe...
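
    Tag-plus-interval-of-validity lookup can be sketched in Python. This is a simplified model under assumed semantics (each payload is valid from its start run until the next payload's start); CMSSW's conditions access is far richer.

```python
import bisect

class ConditionsTag:
    """One conditions tag: a sorted set of intervals of validity (IOVs).
    Each payload covers runs from its start value up to the next start."""

    def __init__(self):
        self.starts = []    # kept sorted; parallel to self.payloads
        self.payloads = []

    def add(self, start_run, payload):
        i = bisect.bisect_left(self.starts, start_run)
        self.starts.insert(i, start_run)
        self.payloads.insert(i, payload)

    def get(self, run):
        # Rightmost IOV whose start does not exceed the requested run.
        i = bisect.bisect_right(self.starts, run) - 1
        if i < 0:
            raise KeyError(f"no IOV covers run {run}")
        return self.payloads[i]
```

    Reconstruction jobs then resolve (tag, run) pairs to the correct alignment or calibration payload without ever touching the full online monitoring data.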

  2. Scale out databases for CERN use cases

    CERN Document Server

    Baranowski, Zbigniew; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log dat...

  3. Biogas composition and engine performance, including database and biogas property model

    NARCIS (Netherlands)

    Bruijstens, A.J.; Beuman, W.P.H.; Molen, M. van der; Rijke, J. de; Cloudt, R.P.M.; Kadijk, G.; Camp, O.M.G.C. op den; Bleuanus, W.A.J.

    2008-01-01

    In order to enable an evaluation of the current biogas quality situation in the EU, results are presented in a biogas database. Furthermore, the key gas parameter Sonic Bievo Index (influence on open-loop A/F ratio) is defined, as are other key gas parameters like the Methane Number (knock resistance)

  4. Diffusivity database (DDB) for major rocks. Database for the second progress report

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Haruo

    1999-10-01

    A diffusivity database for setting effective diffusion coefficients in rock matrices, for use in the second progress report, was developed. In this database, 3 kinds of diffusion coefficients were treated: effective diffusion coefficient (De), apparent diffusion coefficient (Da) and free water diffusion coefficient (Do). The database, based on literature published between 1980 and 1998, was developed considering the following points. (1) Since the Japanese geological environment is the focus of the second progress report, diffusion data were collected with a focus on Japanese major rocks. (2) Although 22 elements are considered to be important in performance assessment for geological disposal, all elements and aquatic tracers are treated in this database development, for general-purpose use. (3) Since limestone, which belongs to sedimentary rock, can become one of the natural resources and is inappropriate as a host rock, it is omitted in this database development. Rock was categorized into 4 kinds from the viewpoint of geology and mass transport: acid crystalline rock, alkaline crystalline rock, sedimentary rock (argillaceous/tuffaceous rock) and sedimentary rock (psammitic rock/sandy stone). In addition, crystalline rocks of near-neutral character were categorized as alkaline crystalline rock in this database. The database is composed of sub-databases for the 4 kinds of rocks. Furthermore, the sub-databases for the 4 kinds of rocks are composed of databases for individual elements, into which, in total, 24 items are input, such as species, rock name, diffusion coefficients (De, Da, Do), and the conditions under which they were obtained (method, porewater, pH, Eh, temperature, atmosphere, etc.). As a result of the literature survey, for De values for acid crystalline rock, in total 207 data for 18 elements and one tracer (hydrocarbon) have been reported, and all data were for granitic rocks such as granite, granodiorite and biotitic granite. For alkaline crystalline rock, in total, 32

  5. A multidisciplinary database for geophysical time series management

    Science.gov (United States)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of a period or sampling frequency. Our work describes in detail a possible methodology for storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The standardization makes it possible to perform operations, such as querying and visualization, over many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check on the working status of each running piece of software through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
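
The balanced-partitioning idea mentioned in this abstract can be illustrated with a simple greedy scheme. This is only a sketch of one way to keep partition sizes balanced; the abstract does not specify TSDSystem's actual strategy, and the function and parameter names here are hypothetical:

```python
import heapq

def balance_partitions(series_sizes, n_partitions):
    """Greedily assign time series to partitions so row counts stay balanced.

    series_sizes: number of rows in each time series (by index).
    Returns the partition index assigned to each series.
    Illustrative only; not TSDSystem's actual partitioning logic.
    """
    # Min-heap of (rows_currently_in_partition, partition_index).
    heap = [(0, p) for p in range(n_partitions)]
    heapq.heapify(heap)
    assignment = []
    # Place the largest series first (standard greedy load balancing).
    for size, idx in sorted(((s, i) for i, s in enumerate(series_sizes)), reverse=True):
        load, part = heapq.heappop(heap)
        assignment.append((idx, part))
        heapq.heappush(heap, (load + size, part))
    assignment.sort()  # restore original series order
    return [part for _, part in assignment]

print(balance_partitions([90, 10, 40, 60], 2))  # → [0, 0, 1, 1]
```

With these sizes, both partitions end up holding exactly 100 rows, which is the "balanced percentage per table" property the abstract describes.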

  6. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    International Nuclear Information System (INIS)

    2011-01-01

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥ 3 mm," "nodule < 3 mm," and "non-nodule ≥ 3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥ 3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all

  7. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects...e(s) Article title: Author name(s): Journal: External Links: Original website information Database maintenan

  8. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us fRNAdb Database Dump Data detail Data name Database Dump DOI 10.18908/lsdba.nbdc00452-002 De... data (tab separated text) Data file File name: Database_Dump File URL: ftp://ftp....biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump File size: 673 MB Simple search URL - Data acquisition...s. Data analysis method - Number of data entries 4 files - About This Database Database Description Download... License Update History of This Database Site Policy | Contact Us Database Dump - fRNAdb | LSDB Archive ...

  9. MetReS, an Efficient Database for Genomic Applications.

    Science.gov (United States)

    Vilaplana, Jordi; Alves, Rui; Solsona, Francesc; Mateo, Jordi; Teixidó, Ivan; Pifarré, Marc

    2018-02-01

    MetReS (Metabolic Reconstruction Server) is a genomic database that is shared between two software applications that address important biological problems. Biblio-MetReS is a data-mining tool that enables the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the processes of interest and their function. The main goal of this work was to identify the areas where the performance of the MetReS database could be improved and to test whether this improvement would scale to larger datasets and more complex types of analysis. The study was started with a relational database, MySQL, which is the current database server used by the applications. We also tested the performance of an alternative data-handling framework, Apache Hadoop. Hadoop is currently used for large-scale data processing. We found that this data-handling framework is likely to greatly improve the efficiency of the MetReS applications as the dataset and the processing needs increase by several orders of magnitude, as expected to happen in the near future.

  10. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  11. JAEA thermodynamic database for performance assessment of geological disposal of high-level and TRU wastes. Refinement of thermodynamic data for tetravalent thorium, uranium, neptunium and plutonium

    International Nuclear Information System (INIS)

    Fujiwara, Kenso; Kitamura, Akira; Yui, Mikazu

    2010-03-01

    Within the scope of the JAEA thermodynamic database project for performance assessment of geological disposal of high-level and TRU radioactive wastes, the refinement of the thermodynamic data for the inorganic compounds and complexes of thorium(IV), uranium(IV), neptunium(IV) and plutonium(IV) was carried out. Refinement of the thermodynamic data for these elements was performed on the basis of the thermodynamic database for actinides published by the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). Additionally, the latest data published after the OECD/NEA thermodynamic database were re-evaluated to determine whether they should be included in the JAEA-TDB. (author)

  12. Database of Low-e Storm Window Energy Performance across U.S. Climate Zones

    Energy Technology Data Exchange (ETDEWEB)

    Culp, Thomas D.; Cort, Katherine A.

    2014-09-04

    This is an update of a report that describes the process, assumptions, and modeling results produced to create a database of U.S. climate-based analysis for low-E storm windows. The scope of the overall effort is to develop a database of energy savings and cost effectiveness of low-E storm windows in residential homes across a broad range of U.S. climates using the National Energy Audit Tool (NEAT) and RESFEN model calculations. This report includes a summary of the results; NEAT and RESFEN background, methodology, and input assumptions; and an appendix with detailed results and assumptions by climate zone.

  13. Software Tools Streamline Project Management

    Science.gov (United States)

    2009-01-01

    Three innovative software inventions from Ames Research Center (NETMARK, Program Management Tool, and Query-Based Document Management) are finding their way into NASA missions as well as industry applications. The first, NETMARK, is a program that enables integrated searching of data stored in a variety of databases and documents, meaning that users no longer have to look in several places for related information. NETMARK allows users to search and query information across all of these sources in one step. This cross-cutting capability in information analysis has exponentially reduced the amount of time needed to mine data from days or weeks to mere seconds. NETMARK has been used widely throughout NASA, enabling this automatic integration of information across many documents and databases. NASA projects that use NETMARK include the internal reporting system and project performance dashboard; Erasmus, NASA's enterprise management tool, which enhances organizational collaboration and information sharing through document routing and review; the Integrated Financial Management Program; International Space Station Knowledge Management; Mishap and Anomaly Information Reporting System; and management of the Mars Exploration Rovers. Approximately $1 billion worth of NASA's projects are currently managed using Program Management Tool (PMT), which is based on NETMARK. PMT is a comprehensive, Web-enabled application tool used to assist program and project managers within NASA enterprises in monitoring, disseminating, and tracking the progress of program and project milestones and other relevant resources. The PMT consists of an integrated knowledge repository built upon advanced enterprise-wide database integration techniques and the latest Web-enabled technologies. The current system is in a pilot operational mode allowing users to automatically manage, track, define, update, and view customizable milestone objectives and goals. The third software invention, Query

  14. DOT Online Database

    Science.gov (United States)


  15. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  16. Faculty Decisions on Serials Subscriptions Differ Significantly from Decisions Predicted by a Bibliometric Tool.

    Directory of Open Access Journals (Sweden)

    Sue F. Phelps

    2016-03-01

    Full Text Available Objective – To compare faculty choices of serials subscription cancellations to the scores of a bibliometric tool. Design – Natural experiment. Data were collected about faculty valuations of serials. The California Digital Library Weighted Value Algorithm (CDL-WVA) was used to measure the value of journals to a particular library. These two sets of scores were then compared. Setting – A public research university in the United States of America. Subjects – Teaching and research faculty, as well as serials data. Methods – Experimental methodology was used to compare faculty valuations of serials (based on their journal cancellation choices) to bibliometric valuations of the same journal titles (determined by CDL-WVA scores) to identify the match rate between the faculty choices and the bibliographic data. Faculty were asked to select titles to cancel that totaled approximately 30% of the budget for their disciplinary fund code. This “keep” or “cancel” choice was the binary variable for the study. Usage data were gathered for articles downloaded through the link resolver for titles in each disciplinary dataset, and CDL-WVA scores were determined for each journal title based on utility, quality, and cost effectiveness. Titles within each dataset were ranked highest to lowest by CDL-WVA score within each fund code, and then by subscription cost for titles with the same CDL-WVA score. The journal titles selected for comparison were those ranked above the approximately 30% of titles chosen for cancellation by faculty and by CDL-WVA scores. Researchers estimated the odds ratio between faculty choosing to keep a title and the CDL-WVA score indicating that the title should be kept. The p-value for that result was less than 0.0001, indicating a negligible probability that the results were due to chance. They also applied logistic regression to quantify the association between the numeric CDL-WVA score and the binary variable
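
The rank-then-cut procedure in this abstract can be sketched as follows. The actual CDL-WVA weights and normalization are not given in the article, so the composite score below is a hypothetical stand-in that merely combines utility, quality and cost effectiveness; titles are then ranked worst-first and flagged for cancellation until roughly 30% of the subscription budget is reached:

```python
def weighted_value(utility, quality, cost, w=(0.4, 0.3, 0.3)):
    """Hypothetical composite score in the spirit of CDL-WVA: higher utility
    and quality raise the score; cost enters via a cost-effectiveness term.
    The weights `w` are invented for illustration."""
    cost_effectiveness = utility / cost if cost else 0.0
    return w[0] * utility + w[1] * quality + w[2] * cost_effectiveness

def cancellation_candidates(journals, budget_fraction=0.30):
    """Rank titles from lowest to highest score, then flag the lowest-ranked
    titles until ~budget_fraction of total subscription cost is reached
    (mirroring the study's ~30% cut)."""
    ranked = sorted(
        journals,
        key=lambda j: (weighted_value(j["utility"], j["quality"], j["cost"]), -j["cost"]),
    )
    target = budget_fraction * sum(j["cost"] for j in journals)
    cancelled, spent = [], 0.0
    for j in ranked:
        if spent >= target:
            break
        cancelled.append(j["title"])
        spent += j["cost"]
    return cancelled

journals = [
    {"title": "A", "utility": 0.9, "quality": 0.8, "cost": 100},
    {"title": "B", "utility": 0.2, "quality": 0.3, "cost": 400},
    {"title": "C", "utility": 0.6, "quality": 0.7, "cost": 200},
]
print(cancellation_candidates(journals))  # → ['B']
```

In this toy run, title B has by far the lowest composite score, and its cost alone exceeds 30% of the total budget, so it is the only cancellation candidate.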

  17. Proposal for a high-energy nuclear database

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.

    2006-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac, AGS and SPS to RHIC and LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews. (author)

  18. Proposal for a High Energy Nuclear Database

    International Nuclear Information System (INIS)

    Brown, D A; Vogt, R

    2005-01-01

    The authors propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac, AGS and SPS to RHIC and CERN-LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, they propose periodically performing evaluations of the data and summarizing the results in topical reviews

  19. Local Physics Basis of Confinement Degradation in JET ELMy H-Mode Plasmas and Implications for Tokamak Reactors

    International Nuclear Information System (INIS)

    Budny, R.V.; Alper, B.; Borba, D.; Cordey, J.G.; Ernst, D.R.; Gowers, C.

    2001-01-01

    First results of gyrokinetic analysis of JET [Joint European Torus] ELMy [Edge Localized Modes] H-mode [high-confinement mode] plasmas are presented. ELMy H-mode plasmas form the basis of conservative performance predictions for tokamak reactors of the size of ITER [International Thermonuclear Experimental Reactor]. Relatively high performance for long duration has been achieved and the scaling appears to be favorable. It will be necessary to sustain low Z[subscript eff] and high density for high fusion yield. This paper studies the degradation in confinement and increase in the anomalous heat transport observed in two JET plasmas: one with an intense gas puff and the other with a spontaneous transition from Type I to Type III ELMs at the heating power threshold. Linear gyrokinetic analysis gives the growth rate, gamma[subscript lin], of the fastest growing modes. The flow-shearing rate omega[subscript ExB] and gamma[subscript lin] are large near the top of the pedestal. Their ratio decreases approximately when the confinement degrades and the transport increases. This suggests that tokamak reactors may require intense toroidal or poloidal torque input to maintain sufficiently high |omega[subscript ExB]|/gamma[subscript lin] near the top of the pedestal for high confinement

  20. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name eSOL Alternative nam...eator Affiliation: The Research and Development of Biological Databases Project, National Institute of Genet...nology 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501 Japan Email: Tel.: +81-45-924-5785 Database... classification Protein sequence databases - Protein properties Organism Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database...i U S A. 2009 Mar 17;106(11):4201-6. External Links: Original website information Database maintenance site

  1. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  2. Status of the solid breeder materials database

    International Nuclear Information System (INIS)

    Billone, M.C.; Dienst, W.; Lorenzetto, P.; Noda, K.; Roux, N.

    1995-01-01

    The databases for solid breeder ceramics (Li2O, Li4SiO4, Li2ZrO3, and LiAlO2) and beryllium multiplier material were critically reviewed and evaluated as part of the ITER/CDA design effort (1988-1990). The results have been documented in a detailed technical report. Emphasis was placed on the physical, thermal, mechanical, chemical stability/compatibility, tritium retention/release, and radiation stability properties which are needed to assess the performance of these materials in a fusion reactor environment. Materials properties correlations were selected for use in design analysis, and ranges for input parameters (e.g., temperature, porosity, etc.) were established. Also, areas for future research and development in blanket materials technology were highlighted and prioritized. For Li2O, the most significant increase in the database has come in the area of tritium retention as a function of operating temperature and purge flow composition. The database for postirradiation inventory from purged in-reactor samples has increased from four points to 20 points. These new data have allowed an improvement in understanding and modeling, as well as better interpretation of the results of laboratory annealing studies on unirradiated and irradiated material. In the case of Li2ZrO3, relatively little data were available on the sensitivity of the mechanical properties of this ternary ceramic to microstructure and moisture content. The increase in the database for this material has allowed not only better characterization of its properties, but also optimization of fabrication parameters to improve its performance. Some additional data are also available for the other two ternary ceramics to aid in the characterization of their performance. In particular, the thermal performance of these materials, as well as beryllium, in packed-bed form has been measured and characterized

  3. ITER solid breeder blanket materials database

    International Nuclear Information System (INIS)

    Billone, M.C.; Dienst, W.; Noda, K.; Roux, N.

    1993-11-01

    The databases for solid breeder ceramics (Li2O, Li4SiO4, Li2ZrO3 and LiAlO2) and beryllium multiplier material are critically reviewed and evaluated. Emphasis is placed on physical, thermal, mechanical, chemical stability/compatibility, tritium, and radiation stability properties which are needed to assess the performance of these materials in a fusion reactor environment. Correlations are selected for design analysis and compared to the database. Areas for future research and development in blanket materials technology are highlighted and prioritized

  4. Database on Demand: insight how to build your own DBaaS

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, engines from three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operations, some insight into our software redesign, and the near-future evolution of the service.

  5. Database on Demand: insight how to build your own DBaaS

    Science.gov (United States)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, engines from three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operations, some insight into our software redesign, and the near-future evolution of the service.

  6. How Do You Like Your Books: Print or Digital? An Analysis on Print and E-Book Usage at the Graduate School of Education

    Science.gov (United States)

    Haugh, Dana

    2016-01-01

    The shift from physical materials to digital holdings has slowly infiltrated libraries across the globe, and librarians are struggling to make sense of these intangible, and sometimes fleeting, resources. Materials budgets have shifted to accommodate large journal and database subscriptions, single-title article access, and, most recently, e-book…

  7. Finding Information on the State Virtual Libraries

    Science.gov (United States)

    Pappas, Marjorie L.

    2004-01-01

    The number of state virtual libraries is rapidly expanding. These virtual libraries might include collections of subscription databases; state weblinks and resources; digital collections of primary source documents; and a state union catalog or links to school, public, and academic library catalogs. Most of these virtual libraries include an…

  8. Kingfisher: a system for remote sensing image database management

    Science.gov (United States)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images to be collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application; at present many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe the content of remote sensing images. The target database is set by an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without losses independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared, during the retrieval phase, with the query image, and the results seem to be very encouraging.
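
Similarity-based retrieval over texture descriptors, as described in this abstract, can be sketched as a nearest-neighbour search in feature space. The descriptor vectors and scene names below are invented for illustration; the actual Kingfisher texture parameters are not specified in the abstract:

```python
import math

def nearest_images(query_vec, database, k=3):
    """Return the names of the k database entries whose descriptor vectors
    are closest (Euclidean distance) to the query descriptor. Descriptors
    here are plain lists of floats standing in for texture parameters."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(database.items(), key=lambda item: dist(query_vec, item[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical texture descriptors extracted from quick-look images.
db = {
    "scene_001": [0.12, 0.80, 0.33],
    "scene_002": [0.90, 0.10, 0.45],
    "scene_003": [0.15, 0.75, 0.30],
}
print(nearest_images([0.14, 0.78, 0.31], db, k=2))  # → ['scene_003', 'scene_001']
```

Because the descriptors are resolution-independent, the same comparison works on quick-look images, which is why the abstract notes the DBMS only needs to handle a small amount of information per image.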

  9. The magnet database system

    International Nuclear Information System (INIS)

    Ball, M.J.; Delagi, N.; Horton, B.; Ivey, J.C.; Leedy, R.; Li, X.; Marshall, B.; Robinson, S.L.; Tompkins, J.C.

    1992-01-01

    The Test Department of the Magnet Systems Division of the Superconducting Super Collider Laboratory (SSCL) is developing a central database of SSC magnet information that will be available to all magnet scientists at the SSCL or elsewhere, via network connections. The database contains information on the magnets' major components, configuration information (specifying which individual items were used in each cable, coil, and magnet), measurements made at major fabrication stages, and the test results on completed magnets. These data will facilitate the correlation of magnet performance with the properties of its constituents. Recent efforts have focused on the development of procedures for user-friendly access to the data, including displays in the format of the production "traveler" data sheets, standard summary reports, and a graphical interface for ad hoc queries and plots.

  10. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

    Full Text Available The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, affective computing, and computer vision) to investigate the processing of a wider range of natural

  11. Abstract databases in nuclear medicine; New database for articles not indexed in PubMed

    International Nuclear Information System (INIS)

    Ugrinska, A.; Mustafa, B.

    2004-01-01

    Full text: Abstract databases available on the Internet free of charge were searched for nuclear medicine content. The only comprehensive database found was PubMed. An analysis of the nuclear medicine journals included in PubMed was performed. PubMed indexes 25 medical journals whose titles contain the phrase 'nuclear medicine' in various languages. Searching the Internet with the search engine Google, we found four more peer-reviewed journals with the phrase 'nuclear medicine' in their title. In addition, we are fully aware that many articles related to nuclear medicine are published in national medical journals devoted to general medicine. For example, in 2000 colleagues from the Institute of Pathophysiology and Nuclear Medicine, Skopje, Macedonia published 10 articles, none of which could be found in PubMed. This suggests that a large amount of research work is not accessible to people professionally involved in nuclear medicine. Therefore, we have created a database framework for abstracts that cannot be found in PubMed. The database is organized in a user-friendly manner. There are two main sections: 'post an abstract' and 'search for abstracts'. Authors of articles are expected to submit their work in the section 'post an abstract'. During the submission process, authors fill separate boxes with the title in English, the title in the original language, the country of origin, the journal name, and the volume, issue, and pages. Authors choose up to five keywords from a drop-down menu and, if the abstract is not published in English, are encouraged to translate it. The section 'search for abstracts' is searchable by author, by keywords, and by words and phrases in the English title. The abstract database currently resides on an MS Access back-end, with a front-end in ASP (Active Server Pages). In the future, we plan to migrate the database to MS SQL Server, which should provide a faster and more reliable framework for hosting a

  12. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to the Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with the Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  13. The Danish Sarcoma Database

    Directory of Open Access Journals (Sweden)

    Jorgensen PH

    2016-10-01

    Full Text Available Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative

  14. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  15. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  16. Evaluation of NoSQL databases for DIRAC monitoring and beyond

    Science.gov (United States)

    Mathe, Z.; Casajus Ramo, A.; Stagni, F.; Tomassetti, L.

    2015-12-01

    Nowadays, many database systems are available, but they may not be optimised for storing time series data. Monitoring DIRAC jobs would be better served by a database optimised for time series data. So far this was done using a MySQL database, which is not well suited to such an application, so alternatives have been investigated. Choosing an appropriate database for storing huge amounts of time series data is not trivial, as one must take into account different aspects such as manageability, scalability and extensibility. We compared the performance of the Elasticsearch, OpenTSDB (based on HBase) and InfluxDB NoSQL databases, using the same set of machines and the same data. We also evaluated the effort required to maintain them. Using the LHCb Workload Management System (WMS), based on DIRAC, as a use case, we set up a new monitoring system in parallel with the current MySQL system and stored the same data in the databases under test. We evaluated the Grafana (for OpenTSDB) and Kibana (for Elasticsearch) metrics and graph editors for creating dashboards, in order to have a clear picture of the usability of each candidate. In this paper we present the results of this study and the performance of the selected technology. We also give an outlook on other potential applications of NoSQL databases within the DIRAC project.
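    As a sketch of how such time-series samples reach Elasticsearch, a bulk-index request body can be assembled as newline-delimited JSON; the index name and field names below are illustrative assumptions, not the actual DIRAC monitoring schema:

```python
import json

def bulk_payload(index, samples):
    """Build an Elasticsearch _bulk request body (newline-delimited JSON)."""
    lines = []
    for doc in samples:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                           # document line
    return "\n".join(lines) + "\n"  # a _bulk body must end with a newline

# Hypothetical job-status samples, one document per time point.
samples = [
    {"timestamp": "2015-06-01T12:00:00Z", "site": "LCG.CERN.ch", "jobs_running": 1240},
    {"timestamp": "2015-06-01T12:05:00Z", "site": "LCG.CERN.ch", "jobs_running": 1198},
]
body = bulk_payload("wms-monitoring", samples)
# This body would be POSTed to the cluster's /_bulk endpoint
# with Content-Type: application/x-ndjson.
```

    Batching points this way (rather than one HTTP request per sample) is what makes high-rate time-series ingestion practical in any of the three stores compared above.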

  17. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked

  18. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of from 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of the operating parameters required to produce an ion beam; however, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software, which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters.
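    The range-searching retrieval described above maps naturally onto a relational query. The sketch below is a minimal stand-in using SQLite; the table layout and the three attributes shown are hypothetical (the actual ORIC database has eleven categorizing attributes):

```python
import sqlite3

# In-memory stand-in for the beam database; column names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE beams (ion TEXT, energy_mev REAL, intensity_na REAL)")
con.executemany("INSERT INTO beams VALUES (?, ?, ?)", [
    ("O-16",  100.0, 50.0),
    ("O-16",  140.0, 30.0),
    ("Ni-58", 350.0, 10.0),
])

# Search on ranges over any subset of the categorizing attributes,
# as the query system allows; here: one exact match plus one range.
rows = con.execute(
    "SELECT ion, energy_mev FROM beams "
    "WHERE ion = ? AND energy_mev BETWEEN ? AND ?",
    ("O-16", 90.0, 120.0),
).fetchall()
```

    A matching row would then play the role of the "beam file" handed to the control software for automatic slewing.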

  19. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data List Contact us Trypanoso... Attribution-Share Alike 2.1 Japan . If you use data from this database, please be sure attribute this database as follows: Trypanoso...nse Update History of This Database Site Policy | Contact Us License - Trypanosomes Database | LSDB Archive ...

  20. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions for each of the database program managers. Additionally, how each agency uses the accident data was of major interest.

  1. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

    genome consensus. The YH database is currently one of the three personal genome databases, organizing the original data and analysis results in a user-friendly interface, as an endeavor to achieve the fundamental goals of establishing personal medicine. The database is available at http://yh.genomics.org.cn....

  2. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data List Contact us tRNAD...B-CE Database Description General information of database Database name tRNADB-CE Alter...CC BY-SA Detail Background and funding Name: MEXT Integrated Database Project Reference(s) Article title: tRNAD... 2009 Jan;37(Database issue):D163-8. External Links: Article title: tRNADB-CE 2011: tRNA gene database curat...n Download License Update History of This Database Site Policy | Contact Us Database Description - tRNADB-CE | LSDB Archive ...

  3. Verification and Validation of Tropospheric Model/Database

    National Research Council Canada - National Science Library

    Junho, choi

    1998-01-01

    A verification and validation of tropospheric models and databases has been performed based on a ray-tracing algorithm, statistical analysis, tests of real-time system operation, and other technical evaluation processes...

  4. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, for which an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
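    The geographical decomposition idea can be sketched as a simple tiling of sky coordinates, so that each query touches only one partition; the tile size, catalog entries, and function names here are illustrative, not ARACHNID's actual design:

```python
from collections import defaultdict

TILE_DEG = 10  # illustrative tile size in degrees

def tile_of(ra, dec):
    """Map a sky position (right ascension, declination, in degrees)
    to a rectangular tile key."""
    return (int(ra // TILE_DEG), int(dec // TILE_DEG))

# Partition a toy catalog of (ra, dec, flux) sources by tile; each tile
# could then live on its own processor, with only weak linkage between them.
catalog = [(12.3, -45.6, 0.8), (13.1, -44.2, 1.2), (210.0, 33.0, 0.5)]
tiles = defaultdict(list)
for ra, dec, flux in catalog:
    tiles[tile_of(ra, dec)].append((ra, dec, flux))

def query(ra, dec):
    """Search only the tile containing the query position."""
    return tiles.get(tile_of(ra, dec), [])
```

    A real system would also probe neighboring tiles for positions near a tile boundary; the point is that each lookup inspects a small, independent piece of the catalog rather than the whole database.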

  5. Fact or Fiction? Libraries Can Thrive in the Digital Age

    Science.gov (United States)

    Harris, Christopher

    2014-01-01

    Today's school library uses an increasing number of digital resources to supplement a print collection that is moving more toward fiction and literary non-fiction. Supplemental resources, including streaming video, online resources, subscription databases, audiobooks, e-books, and even games, round out the new collections. Despite the best…

  6. Design issues of an efficient distributed database scheduler for telecom

    NARCIS (Netherlands)

    Bodlaender, M.P.; Stok, van der P.D.V.

    1998-01-01

    We optimize the speed of real-time databases by optimizing the scheduler. The performance of a database is directly linked to the environment it operates in, and we use environment characteristics as guidelines for the optimization. A typical telecom environment is investigated, and characteristics

  7. Open TG-GATEs: a large-scale toxicogenomics database

    Science.gov (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  8. Performance of a TV white space database with different terrain resolutions and propagation models

    Directory of Open Access Journals (Sweden)

    A. M. Fanan

    2017-11-01

    Full Text Available Cognitive Radio has now become a realistic option for solving the spectrum scarcity problem in wireless communication. TV channels (the primary user) can be protected from secondary-user interference by accurate prediction of TV White Spaces (TVWS) using appropriate propagation modelling. In this paper we address two related aspects of channel occupancy prediction for cognitive radio. Firstly, we investigate the best combination of empirical propagation model and spatial resolution of terrain data for predicting TVWS, by examining the performance of three propagation models (Extended-Hata, Davidson-Hata and Egli) in the TV band 470 to 790 MHz, along with terrain data resolutions of 1000, 100 and 30 m, compared with a comprehensive set of propagation measurements taken in randomly selected locations around Hull, UK. Secondly, we describe how such models can be integrated into a database-driven tool for cognitive radio channel selection within the TVWS environment.
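    For illustration, the classic Okumura-Hata median path-loss formula (the starting point for the Extended-Hata and Davidson-Hata variants evaluated above, which modify its coefficients) can be computed directly; the parameter values below are illustrative:

```python
import math

def hata_urban_loss_db(f_mhz, h_base_m, h_mobile_m, d_km):
    """Median urban path loss (dB) from the classic Okumura-Hata model.

    Nominally valid for 150-1500 MHz, so it covers the 470-790 MHz TV band;
    the extended variants adjust the coefficients for other conditions.
    """
    lf = math.log10(f_mhz)
    # Mobile-antenna height correction for a small/medium city.
    a_hm = (1.1 * lf - 0.7) * h_mobile_m - (1.56 * lf - 0.8)
    return (69.55 + 26.16 * lf - 13.82 * math.log10(h_base_m)
            - a_hm + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Mid-band TV frequency, 100 m transmitter, 1.5 m receiver antenna.
loss_near = hata_urban_loss_db(600.0, 100.0, 1.5, 5.0)
loss_far = hata_urban_loss_db(600.0, 100.0, 1.5, 10.0)
```

    A TVWS database applies such a model (plus terrain data) per TV transmitter to decide whether the predicted field strength at a location is low enough to permit secondary use of the channel.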

  9. Databases for rRNA gene profiling of microbial communities

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Matthew

    2013-07-02

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  10. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available sidue (or mutant) in a protein. The experimental data are collected from the literature both by searching th...the sequence database, UniProt, structural database, PDB, and literature database

  11. Stress Testing of Transactional Database Systems

    OpenAIRE

    Meira , Jorge Augusto; Cunha De Almeida , Eduardo; Sunyé , Gerson; Le Traon , Yves; Valduriez , Patrick

    2013-01-01

    International audience; Transactional database management systems (DBMS) have been successful at supporting traditional transaction processing workloads. However, web-based applications that tend to generate huge numbers of concurrent business operations are pushing DBMS performance over their limits, thus threatening overall system availability. Then, a crucial question is how to test DBMS performance under heavy workload conditions. Answering this question requires a testing methodology to ...

  12. PrimateLit Database

    Science.gov (United States)

    PrimateLit: a bibliographic database for primatology. The database is a collaborative project of the Wisconsin Primate Research Center, supported by the National Center for Research Resources, National Institutes of Health. The PrimateLit database is no longer being updated.

  13. Solid waste projection model: Database user's guide (Version 1.0)

    International Nuclear Information System (INIS)

    Carr, F.; Stiles, D.

    1991-01-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instructions in the use of Paradox, the database management system in which the SWPM database is established. 3 figs., 1 tab

  14. Relational databases for conditions data and event selection in ATLAS

    International Nuclear Information System (INIS)

    Viegas, F; Hawkings, R; Dimitrov, G

    2008-01-01

    The ATLAS experiment at LHC will make extensive use of relational databases in both online and offline contexts, running to O(TBytes) per year. Two of the most challenging applications in terms of data volume and access patterns are conditions data, making use of the LHC conditions database, COOL, and the TAG database, that stores summary event quantities allowing a rapid selection of interesting events. Both of these databases are being replicated to regional computing centres using Oracle Streams technology, in collaboration with the LCG 3D project. Database optimisation, performance tests and first user experience with these applications will be described, together with plans for first LHC data-taking and future prospects

  15. Relational databases for conditions data and event selection in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Hawkings, R; Dimitrov, G [CERN, CH-1211 Geneve 23 (Switzerland)

    2008-07-15

    The ATLAS experiment at LHC will make extensive use of relational databases in both online and offline contexts, running to O(TBytes) per year. Two of the most challenging applications in terms of data volume and access patterns are conditions data, making use of the LHC conditions database, COOL, and the TAG database, that stores summary event quantities allowing a rapid selection of interesting events. Both of these databases are being replicated to regional computing centres using Oracle Streams technology, in collaboration with the LCG 3D project. Database optimisation, performance tests and first user experience with these applications will be described, together with plans for first LHC data-taking and future prospects.

  16. Solid Waste Projection Model: Database user's guide (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1.3 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  17. Evaluation of relational database products for the VAX

    International Nuclear Information System (INIS)

    Kannan, K.L.

    1985-11-01

    Four commercially available database products for the VAX/VMS operating system were evaluated for relative performance and ease of use. The products were DATATRIEVE, INGRES, Rdb, and S1032. Performance was measured in terms of elapsed time, CPU time, direct I/O counts, buffered I/O counts, and page faults. Ease of use is more subjective and has not been quantified here; however, discussion and tables of features as well as query syntax are included. This report describes the environment in which these products were evaluated and the characteristics of the databases used. All comparisons must be interpreted in the context of this setting
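    A minimal version of the elapsed-versus-CPU-time measurement used in evaluations like this one can be sketched as follows; the workload function is a stand-in, not any of the four products tested:

```python
import time

def measure(workload, *args):
    """Run workload once and return (result, elapsed_seconds, cpu_seconds)."""
    wall0, cpu0 = time.perf_counter(), time.process_time()
    result = workload(*args)
    return result, time.perf_counter() - wall0, time.process_time() - cpu0

def sample_query(n):
    # Stand-in workload: a CPU-bound computation in place of a real query.
    return sum(i * i for i in range(n))

result, elapsed, cpu = measure(sample_query, 100_000)
```

    Elapsed (wall-clock) time includes waits such as I/O, while CPU time does not; comparing the two is what lets an evaluation separate compute-bound from I/O-bound behavior, alongside the direct and buffered I/O counts reported above.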

  18. Evaluation of relational database products for the VAX

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, K.L.

    1985-11-01

    Four commercially available database products for the VAX/VMS operating system were evaluated for relative performance and ease of use. The products were DATATRIEVE, INGRES, Rdb, and S1032. Performance was measured in terms of elapsed time, CPU time, direct I/O counts, buffered I/O counts, and page faults. Ease of use is more subjective and has not been quantified here; however, discussion and tables of features as well as query syntax are included. This report describes the environment in which these products were evaluated and the characteristics of the databases used. All comparisons must be interpreted in the context of this setting.

  19. Performance monitoring in hip fracture surgery--how big a database do we really need?

    Science.gov (United States)

    Edwards, G A D; Metcalfe, A J; Johansen, A; O'Doherty, D

    2010-04-01

    Systems for collecting information about patient care are increasingly common in orthopaedic practice. Databases can allow various comparisons to be made over time, and significant decisions regarding service delivery and clinical practice may be made on the basis of their results. We set out to determine the number of cases needed for comparison of 30-day mortality, inpatient wound infection rates and mean hospital length of stay, with a power of 80% for the demonstration of an effect at a significance level of p < 0.05, using data on 1050 hip fracture patients admitted to a city teaching hospital. Detection of a 10% difference in 30-day mortality would require 14,065 patients in each arm of any comparison; demonstration of a 50% difference would require 643 patients in each arm. For wound infections, demonstration of a 10% difference in incidence would require 23,921 patients in each arm and 1127 patients for demonstration of a 50% difference; for length of stay, a difference of 10% would require 1479 patients and 6660 patients for a 50% difference. This study demonstrates the importance of considering population sizes before comparisons are made on the basis of basic hip fracture outcome data. Our data also illustrate the impact of sample size considerations when interpreting the results of performance monitoring. Many researchers will be familiar with the fact that rare outcomes such as inpatient mortality or wound infection require large sample sizes before differences can be reliably demonstrated between populations; this study gives actual figures that researchers can use when planning studies. Statistically meaningful analyses will only be possible with major multi-centre collaborations, as will be possible if hospital Trusts participate in the National Hip Fracture Database. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
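    The kind of power calculation behind such figures can be sketched with the standard normal-approximation formula for comparing two proportions. The 10% baseline 30-day mortality assumed below is illustrative, so the resulting numbers will not exactly reproduce the paper's:

```python
import math

Z_ALPHA = 1.959964  # two-sided significance level of 0.05
Z_BETA = 0.841621   # power of 80%

def n_per_arm(p1, p2):
    """Patients per arm to detect proportion p1 vs p2 (normal approximation)."""
    p_bar = (p1 + p2) / 2
    num = (Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar))
           + Z_BETA * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Assumed 10% baseline 30-day mortality (illustrative assumption).
n_small = n_per_arm(0.10, 0.09)  # detect a 10% relative difference
n_large = n_per_arm(0.10, 0.05)  # detect a 50% relative difference
```

    The quadratic dependence on the difference p1 - p2 is why halving the detectable effect roughly quadruples the required sample, and why single-centre databases of around a thousand patients cannot resolve modest mortality differences.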

  20. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at the LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has addressed the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case, each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably, we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012, and we outline plans for future work in this area.

  1. Global Ocean Surface Water Partial Pressure of CO2 Database: Measurements Performed During 1968-2007 (Version 2007)

    Energy Technology Data Exchange (ETDEWEB)

    Kozyr, Alex [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Carbon Dioxide Information Analysis Center

    2008-09-30

    More than 4.1 million measurements of surface water partial pressure of CO2 obtained over the global oceans during 1968-2007 are listed in the Lamont-Doherty Earth Observatory (LDEO) database, which includes open ocean and coastal water measurements. The data assembled include only those measured by equilibrator-CO2 analyzer systems, and have been quality-controlled based on the stability of the system performance, the reliability of calibrations for CO2 analysis, and the internal consistency of data. To allow re-examination of the data in the future, a number of measured parameters relevant to pCO2 measurements are listed. The overall uncertainty for the pCO2 values listed is estimated to be ± 2.5 µatm on average. For simplicity and for ease of reference, this version is referred to as 2007, meaning that data collected through 31 December 2007 have been included. It is our intention to update this database annually. There are 37 new cruise/ship files in this update. In addition, some editing has been performed on existing files, so this should be considered a V2007 file. We have also added a column reporting the partial pressure of CO2 in seawater in units of pascals. The data presented in this database include the analyses of partial pressure of CO2 (pCO2), sea surface temperature (SST), sea surface salinity (SSS), pressure of the equilibration, and barometric pressure in the outside air from the ship’s observation system. The global pCO2 data set is available free of charge as a numeric data package (NDP) from the Carbon Dioxide Information Analysis Center (CDIAC). The NDP consists of the oceanographic data files and this printed documentation, which describes the procedures and methods used to obtain the data.
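
The added pascal column corresponds to a plain unit conversion. The database's exact convention is not documented here, so the sketch below assumes the standard definition 1 atm = 101 325 Pa:

```python
ATM_IN_PA = 101_325.0  # 1 standard atmosphere in pascals

def uatm_to_pa(p_uatm):
    """Convert a CO2 partial pressure from micro-atmospheres to pascals."""
    return p_uatm * 1e-6 * ATM_IN_PA

# A typical surface-ocean pCO2 of ~350 uatm is roughly 35.5 Pa
p_pa = uatm_to_pa(350.0)
```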

  2. License - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: License to Use This Database. Last updated: 2010/02/15. You may use this database under the Standard License and the Additional License described below. The Standard License specifies the license terms regarding the use of this database and the requirements you must follow in using this database; the Additional License specifies requirements in addition to the Standard License. The Standard License for this database is the Creative Commons Attribution-Share Alike 2.1 Japan license.

  3. The Danish Cardiac Rehabilitation Database.

    Science.gov (United States)

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole

    2016-01-01

    The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). The study population comprises hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and has been fully running since August 14, 2015, thus comprising patient-level data from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected relate to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. The DHRD is an online, national quality improvement database on CR aimed at patients with CHD, with mandatory registration of data at both the patient and the program level. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients.

  4. Persistent storage of non-event data in the CMS databases

    International Nuclear Information System (INIS)

    De Gruttola, M; Di Guida, S; Innocente, V; Schlatter, D; Futyan, D; Glege, F; Paolucci, P; Govi, G; Picca, P; Pierro, A; Xie, Z

    2010-01-01

    In the CMS experiment, the non-event data needed to set up the detector, or produced by it, and needed to calibrate the physical responses of the detector itself are stored in ORACLE databases. The large amount of data to be stored, the number of clients involved and the performance requirements make the database system an essential service for the experiment to run. This note describes the CMS condition database architecture, the data flow and PopCon, the tool built in order to populate the offline databases. Finally, the first experience obtained during the 2008 and 2009 cosmic data taking is presented.

  5. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    Science.gov (United States)

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions are one of the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

  6. Design and evaluation of a NoSQL database for storing and querying RDF data

    Directory of Open Access Journals (Sweden)

    Kanda Runapongsa Saikaew

    2014-12-01

    Full Text Available The amount of web data has increased enormously, and its metadata is widely used in order to fully exploit web information resources. This creates a need for Semantic Web technology to quickly analyze such big data. The Resource Description Framework (RDF) is a standard for describing web resources. In this paper, we propose a method to exploit a NoSQL database, specifically MongoDB, to store and query RDF data. We chose MongoDB to represent NoSQL databases because it is one of the most popular high-performance NoSQL databases. We evaluate the proposed design and implementation by using the Berlin SPARQL Benchmark, which is one of the most widely accepted benchmarks for comparing the performance of RDF storage systems. We compare three database systems: Apache Jena TDB (a native RDF store), MySQL (a relational database), and our proposed system with MongoDB (a NoSQL database). Based on the analysis of the experimental results, our proposed system outperforms the other database systems for most queries when the data set is small. However, for a larger data set, MongoDB performs well for queries with simple operators while MySQL offers an efficient solution for complex queries. The results of this work can provide guidance for choosing an appropriate RDF database system and for applying a NoSQL database to storing and querying RDF data.
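
A triple-per-document layout of the kind this approach implies can be sketched without a live MongoDB. The collection shape and the triple-pattern filter below are illustrative assumptions, not the paper's actual schema:

```python
# One RDF triple per document: the simplest MongoDB-style layout
triples = [
    {"s": "ex:alice", "p": "foaf:knows", "o": "ex:bob"},
    {"s": "ex:alice", "p": "foaf:name",  "o": "Alice"},
    {"s": "ex:bob",   "p": "foaf:name",  "o": "Bob"},
]

def match(store, pattern):
    """Evaluate a triple pattern; None plays the role of a SPARQL variable."""
    return [t for t in store
            if all(v is None or t[k] == v for k, v in pattern.items())]

# SPARQL-style question: ?who foaf:knows ex:bob
knows_bob = match(triples, {"s": None, "p": "foaf:knows", "o": "ex:bob"})
```

Against a real MongoDB the same pattern becomes a document filter, e.g. `db.triples.find({"p": "foaf:knows", "o": "ex:bob"})`, with indexes on the s/p/o fields doing the work the list comprehension does here.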

  7. DABAM: an open-source database of X-ray mirrors metrology

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, Manuel, E-mail: srio@esrf.eu [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Bianchi, Davide [AC2T Research GmbH, Viktro-Kaplan-Strasse 2-C, 2700 Wiener Neustadt (Austria); Cocco, Daniele [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Glass, Mark [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Idir, Mourad [NSLS II, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); Metz, Jim [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States); Raimondi, Lorenzo; Rebuffi, Luca [Elettra-Sincrotrone Trieste SCpA, Basovizza (TS) (Italy); Reininger, Ruben; Shi, Xianbo [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States); Siewert, Frank [BESSY II, Helmholtz Zentrum Berlin, Institute for Nanometre Optics and Technology, Albert-Einstein-Strasse 15, 12489 Berlin (Germany); Spielmann-Jaeggi, Sibylle [Swiss Light Source at Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Takacs, Peter [Instrumentation Division, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); Tomasset, Muriel [Synchrotron Soleil (France); Tonnessen, Tom [InSync Inc., 2511C Broadbent Parkway, Albuquerque, NM 87107 (United States); Vivo, Amparo [ESRF - The European Synchrotron, 71 Avenue des Martyrs, 38000 Grenoble (France); Yashchuk, Valeriy [Advanced Light Source, Lawrence Berkeley National Laboratory, MS 15-R0317, 1 Cyclotron Road, Berkeley, CA 94720-8199 (United States)

    2016-04-20

    DABAM is an open-source database of X-ray mirror metrology to be used with ray-tracing and wave-propagation codes for simulating the effect of surface errors on the performance of a synchrotron radiation beamline. An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools for calculating the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide the users of simulation tools with data for real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. Accompanying software is provided to allow simple access to and processing of these data, to calculate the most common statistical parameters, and to create input files for the most widely used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database.

  8. IMPLEMENTATION OF COLUMN-ORIENTED DATABASE IN POSTGRESQL FOR OPTIMIZATION OF READ-ONLY QUERIES

    OpenAIRE

    Aditi D. Andurkar

    2012-01-01

    The era of column-oriented database systems has truly begun with open source database systems like C-Store, MonetDb, LucidDb and commercial ones like Vertica. Column-oriented database stores data column-by-column which means it stores information of single attribute collectively. The need for Column-oriented database arose from the need of business intelligence for efficient decision making where traditional row-oriented database gives poor performance. PostgreSql is an open so...

  9. Cloud Computing and Your Library

    Science.gov (United States)

    Mitchell, Erik T.

    2010-01-01

    One of the first big shifts in how libraries manage resources was the move from print-journal purchasing models to database-subscription and electronic-journal purchasing models. Libraries found that this transition helped them scale their resources and provide better service just by thinking a bit differently about their services. Likewise,…

  10. Changing State Digital Libraries

    Science.gov (United States)

    Pappas, Marjorie L.

    2006-01-01

    Research has shown that state virtual or digital libraries are evolving into websites that are loaded with free resources, subscription databases, and instructional tools. In this article, the author explores these evolving libraries based on the following questions: (1) How user-friendly are the state digital libraries?; (2) How do state digital…

  11. A parallel model for SQL astronomical databases based on solid state storage. Application to the Gaia Archive PostgreSQL database

    Science.gov (United States)

    González-Núñez, J.; Gutiérrez-Sánchez, R.; Salgado, J.; Segovia, J. C.; Merín, B.; Aguado-Agelet, F.

    2017-07-01

    Query planning and optimisation algorithms in most popular relational databases were developed at the times hard disk drives were the only storage technology available. The advent of higher parallel random access capacity devices, such as solid state disks, opens up the way for intra-machine parallel computing over large datasets. We describe a two phase parallel model for the implementation of heavy analytical processes in single instance PostgreSQL astronomical databases. This model is particularised to fulfil two frequent astronomical problems, density maps and crossmatch computation with Quad Tree Cube (Q3C) indexes. They are implemented as part of the relational databases infrastructure for the Gaia Archive and performance is assessed. Improvement of a factor 28.40 in comparison to sequential execution is observed in the reference implementation for a histogram computation. Speedup ratios of 3.7 and 4.0 are attained for the reference positional crossmatches considered. We observe large performance enhancements over sequential execution for both CPU and disk access intensive computations, suggesting these methods might be useful with the growing data volumes in Astronomy.
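
The two-phase pattern described (partition the data, compute partial aggregates in parallel, merge the partials) generalises beyond PostgreSQL. A minimal sketch for the histogram/density-map case, with illustrative data and worker counts rather than the Gaia Archive's actual implementation:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def partial_hist(values, bin_width):
    """Phase 1: each worker bins its own partition of the data."""
    counts = Counter()
    for v in values:
        counts[int(v // bin_width)] += 1
    return counts

def parallel_hist(values, bin_width, workers=4):
    """Phase 2: merge the per-partition histograms into one result."""
    chunks = [values[i::workers] for i in range(workers)]
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for part in ex.map(lambda ch: partial_hist(ch, bin_width), chunks):
            total += part
    return total
```

The merge step works because histogram bins add; in the paper's setting the partitions are table ranges scanned by separate PostgreSQL backends, which is where the solid-state disks' parallel random access pays off.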

  12. Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.

    Directory of Open Access Journals (Sweden)

    Qingyu Chen

    Full Text Available First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as experts can. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert-curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All data are available as described in the supplementary material.

  13. Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.

    Science.gov (United States)

    Chen, Qingyu; Zobel, Justin; Zhang, Xiuzhen; Verspoor, Karin

    2016-01-01

    First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All Data are available as described in the supplementary material.
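
The general recipe (turn a record pair into numeric features, then classify) can be illustrated in miniature. The 22 features and trained models from the paper are not reproduced here; the two toy features and the fixed threshold below are assumptions standing in for the learned binary model:

```python
def features(rec_a, rec_b):
    """Toy analogues of the paper's record features: description-token
    overlap (a metadata feature) and sequence-length similarity."""
    ta = set(rec_a["desc"].lower().split())
    tb = set(rec_b["desc"].lower().split())
    overlap = len(ta & tb) / max(len(ta | tb), 1)
    len_sim = min(rec_a["len"], rec_b["len"]) / max(rec_a["len"], rec_b["len"])
    return overlap, len_sim

def is_duplicate(rec_a, rec_b, threshold=1.2):
    """Flag a pair when the summed feature score crosses a threshold;
    a real system would use a trained classifier instead."""
    return sum(features(rec_a, rec_b)) >= threshold
```

An expert-curated set of known duplicate pairs supplies the labels for fitting the real model, which is what lets it outperform hand-set thresholds like this one.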

  14. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent results obtained during PhD research (actually part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the NoSQL side of the database world such tools are mostly missing and seriously needed.
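
At its core, a database benchmark harness times a fixed number of operations and repeats the run to damp scheduler noise. A minimal sketch of that loop, with placeholder in-memory "databases" rather than real SQL/NoSQL clients:

```python
import time

def benchmark(op, n=1000, repeats=3):
    """Time n executions of op, keeping the best of several repeats
    (a common way to reduce timing noise); returns seconds per op."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(n):
            op()
        best = min(best, time.perf_counter() - start)
    return best / n

# Placeholder workloads: keyed lookup vs. linear scan
d = {i: i for i in range(1000)}
lst = list(range(1000))
t_dict = benchmark(lambda: d.get(999))
t_list = benchmark(lambda: 999 in lst)
```

A full tool like the one proposed adds workload mixes (reads/writes/scans), data-set scaling, and per-backend connectors, which is what TPC-style suites standardise for relational systems.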

  15. The LOCUS interface to the MFE database

    International Nuclear Information System (INIS)

    Miner, W.H. Jr.

    1991-01-01

    The MFE database now consists of over 900 shots from TFTR, PDX, PLT, T-10, JT-60, TEXT, JET and ASDEX. A variety of discharge conditions is represented, ranging from single time slice Ohmic discharges to multiple time-slice auxiliary heated discharges. Included with most datasets is a reference that describes the experiment being performed when the data was taken. The MFE database is currently implemented under INGRES on a VAX that is on Internet. LOCUS, a database utility, developed at the Princeton Plasma Physics Laboratory is now available as an interface to the database. The LOCUS front end provides a graphic interface to the database from any generic graphics terminal that supports Tektronix 4010 emulation. It provides a variety of procedures for extracting, manipulating and graphing data from the MFE database. In order to demonstrate the capabilities of the LOCUS interface, the authors examine, in detail, one of the recently added JET, H-mode discharges. In this example, they address some new concepts such as monitor functions, which have been introduced in order to help users more fully understand the multiple time-slice datasets. They also describe some of the more advanced techniques available in LOCUS for data access and manipulation. Specific areas of interest that are discussed are searching for and retrieving datasets, graphics, data fitting, and linear regression analysis

  16. Implementation of Secondary Index on Cloud Computing NoSQL Database in Big Data Environment

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2015-01-01

    Full Text Available This paper introduces the combination of NoSQL database HBase and enterprise search platform Solr so as to tackle the problem of the secondary index function with fast query. In order to verify the effectiveness and efficiency of the proposed approach, the assessment using Cost-Performance ratio has been done for several competitive benchmark databases and the proposed one. As a result, our proposed approach outperforms the other databases and fulfills secondary index function with fast query in NoSQL database. Moreover, according to the cross-sectional analysis, the proposed combination of HBase and Solr database is capable of performing an excellent query/response in a big data environment.
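
The division of labour in the proposed combination (HBase holds rows by primary key; Solr holds the inverted, value-to-key mapping that answers secondary-index queries) can be sketched with two plain dictionaries. The names and data below are illustrative:

```python
# Primary store: row key -> record (the HBase role)
rows = {
    "r1": {"city": "Tainan", "name": "Chang"},
    "r2": {"city": "Taipei", "name": "Lin"},
    "r3": {"city": "Tainan", "name": "Wu"},
}

def build_secondary_index(store, column):
    """Invert one column into value -> row keys (the Solr role)."""
    index = {}
    for key, rec in store.items():
        index.setdefault(rec[column], []).append(key)
    return index

city_idx = build_secondary_index(rows, "city")
# Secondary-index query: find matching keys, then fetch rows by key
hits = [rows[k] for k in city_idx.get("Tainan", [])]
```

The fast query comes from doing the value lookup in the index and only then touching the primary store, instead of scanning every row.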

  17. Full Data of Yeast Interacting Proteins Database (Original Version) - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: Full Data of Yeast Interacting Proteins Database (Original Version). Data detail. Data name: Full Data of Yeast Interacting Proteins Database (Original Version). DOI: 10.18908/lsdba.nbdc00742-004. Description of data contents: The entire data in the Yeast Interacting Proteins Database...eir interactions are required. Several sources including YPD (Yeast Proteome Database, Costanzo, M. C., Hoga...ematic name in the SGD (Saccharomyces Genome Database; http://www.yeastgenome.org/). Bait gene name: The gen...

  18. Construction of database server system for fuel thermo-physical properties

    International Nuclear Information System (INIS)

    Park, Chang Je; Kang, Kwon Ho; Song, Kee Chan

    2003-12-01

    To perform the evaluation of various fuels in nuclear reactors, not only mechanical properties but also thermo-physical properties are required as among the most important inputs for a fuel performance code system. The main objective of this study is to build a database system for fuel thermo-physical properties; a PC-based hardware system has been constructed for ease of use by the public, with visualization through a web-based server system. This report deals with the hardware and software used in the database server system for nuclear fuel thermo-physical properties. Opening the database of fuel properties to the public is expected to make nuclear fuel data easy to obtain and to be helpful to research and development on various fuels in the nuclear industry. Furthermore, the proposed models of nuclear fuel thermo-physical properties can be fully utilized in fuel performance code systems.

  19. Energy Consumption Database

    Science.gov (United States)

    The California Energy Commission has created this on-line database for informal reporting of energy consumption data across various classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX) files.

  20. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    Science.gov (United States)

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
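
The difference the comparison measures can be seen in miniature: in a graph store, a drug-to-disease question is a walk over adjacency lists, whereas in MySQL it becomes a join across the drug-target and gene-disease tables. A sketch with hypothetical nodes and relation names:

```python
# Graph as adjacency lists: node -> [(relation, neighbour), ...]
graph = {
    "drugA": [("targets", "geneX")],
    "geneX": [("associated_with", "disease1"),
              ("associated_with", "disease2")],
}

def neighbours(node, relation):
    """Follow edges of one relation type from a node."""
    return [n for rel, n in graph.get(node, []) if rel == relation]

def diseases_for_drug(drug):
    """Two-hop walk drug -> gene -> disease; the relational
    equivalent is a multi-table join."""
    return [d for g in neighbours(drug, "targets")
              for d in neighbours(g, "associated_with")]
```

Each extra hop in a graph store is another local adjacency lookup, while each extra hop in SQL is another join, which is one intuition for why Neo4j-style traversals scale better on deeply connected queries.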

  1. Using a Semi-Realistic Database to Support a Database Course

    Science.gov (United States)

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is to construct effective databases for instructions and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  2. Current status of system development to provide databases of nuclides migration

    International Nuclear Information System (INIS)

    Sasamoto, Hiroshi; Yoshida, Yasushi; Isogai, Takeshi; Suyama, Tadahiro; Shibata, Masahiro; Yui, Mikazu; Jintoku, Takashi

    2005-01-01

    JNC has developed databases of nuclides migration for safety assessment of the high-level radioactive waste (HLW) repository, and they were used in the second progress report to present the technical reliability of the HLW geological disposal system in Japan. The technical level and applicability of the databases have been highly evaluated, even overseas. To provide the databases broadly over the world and to promote their use, we have done the following: 1) developed tools to convert the database format from the geochemical code PHREEQE to PHREEQC, GWB and EQ3/6, and 2) set up a web site (http://migrationdb.jnc.go.jp) that enables the public to access the databases. As a result, the number of database users has significantly increased. Additionally, a number of useful comments from the users can be applied to modification and/or updating of the databases. (author)

  3. A Transactional Asynchronous Replication Scheme for Mobile Database Systems

    Institute of Scientific and Technical Information of China (English)

    丁治明; 孟小峰; 王珊

    2002-01-01

    In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency.

  4. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu span the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous: more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of data is from the South Pacific. This HAM database will allow us to use these radionuclides as significant chemical tracers for oceanographic study as well as for the assessment of the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  5. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  6. NoSQL technologies for the CMS Conditions Database

    Science.gov (United States)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue, so the need for consistent and highly available access to the Conditions gives good cause to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, each condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to give an accurate baseline, a testing framework extension was implemented to measure how these databases handle arbitrary binary data. Based on the evaluation, prototypes of a document store, a column-oriented store, and a plain key-value store were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, as well as deployment and automation of the databases, are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
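The splitting treatment mentioned above can be sketched generically. The snippet below is our illustration (not CMS code) of chunking a large BLOB into numbered keys for a key-value store; the store is a plain dict and all names and sizes are invented:

```python
# Hypothetical sketch: splitting a large conditions BLOB into fixed-size
# chunks for a key-value store. The in-memory dict stands in for any
# key-value backend; key names and chunk size are illustrative.

CHUNK_SIZE = 4  # bytes per chunk; a real store would use e.g. 1 MB

def put_blob(store, key, blob, chunk_size=CHUNK_SIZE):
    """Store `blob` under `key`, split into numbered chunks."""
    n_chunks = (len(blob) + chunk_size - 1) // chunk_size
    store[key] = n_chunks  # manifest entry: how many chunks to reassemble
    for i in range(n_chunks):
        store[f"{key}#{i}"] = blob[i * chunk_size:(i + 1) * chunk_size]

def get_blob(store, key):
    """Reassemble the chunks written by put_blob."""
    n_chunks = store[key]
    return b"".join(store[f"{key}#{i}"] for i in range(n_chunks))

store = {}
put_blob(store, "ecal_pedestals_v1", b"0123456789")
assert get_blob(store, "ecal_pedestals_v1") == b"0123456789"
```

The manifest-plus-chunks layout keeps each stored value under the backend's size limit while still allowing the whole condition to be fetched by a single logical key.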

  7. Development and implementation of an institutional repository within a Science, Engineering and Technology (SET) environment

    CSIR Research Space (South Africa)

    Van der Merwe, Adèle

    2008-10-01

    Full Text Available -based searches. The scholarly federated search engine of Google (http://scholar.google.com) has been used extensively but not exclusively. Subscription databases such as ISI’s Web of Knowledge were also used. An analysis of the existing proprietary database... internal controls to prevent unauthorized changes. • Registration of the IR with search engines and service providers such as Google, OAIster and DOAR demands that the IR manager keep abreast with developments in terms of suitable search engines...

  8. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether production, sales and marketing, or internal operations. Every day, databases are consulted to support strategic decisions. Meeting such needs therefore requires high-quality security and availability. These needs can be met using a DBMS (Database Management System), which is, in fact, the software behind a database. Technically speaking, it is software that uses a standard method of cataloguing, retrieving, and running different data queries. A DBMS manages incoming data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization, and monitoring.
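As a toy illustration of the cataloguing, modification, and query operations a DBMS provides, the following sketch uses Python's built-in sqlite3 module; the table and data are invented:

```python
# Minimal sketch of basic DBMS operations (catalogue a table, insert data,
# query it back) using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.0), ("north", 50.0)])
conn.commit()

# Extract data for a strategic decision: total sales per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 170.0), ('south', 80.0)]
```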

  9. Comparative research performance of top universities from the northeastern Brazil on three pharmacological disciplines as seen in scopus database

    Directory of Open Access Journals (Sweden)

    Jean P. Kamdem, PhD

    2017-12-01

    Full Text Available Objectives: Postgraduate programmes around the world are periodically subjected to research performance evaluation through bibliometric indicators. In this research, we characterized and compared the research performance of 15 universities from Northeastern Brazil, of which 13 were among the top universities of Latin America. Methods: Specifically, the total documents, citations, and h-index of each university were retrieved from the Elsevier Scopus database and were analysed not only for historical scientific achievement but also across the past 6 years (2010–2015). Using these bibliometric indicators, we also investigated the performance of programmes at these universities that have their papers indexed in the Scopus database under the category of “Pharmacology, Toxicology and Pharmaceutics” for the same period. Results: We found that the Federal University of Pernambuco (UFPE) and the Federal University of Ceará (UFC) were the most productive institutions, producing 17,847 and 15,048 documents, respectively. The number of papers published by each of these universities in the past six years represented more than 50% of their entire productivity. With regard to their scientific output in “Pharmacology, Toxicology and Pharmaceutics”, UFC showed the highest number of published documents, followed by UFPE and the Federal University of Paraíba (UFPB). UFC received the highest h-index (with and without self-citations) and number of citations, and shared its most cited papers with foreign institutions from the USA and Germany. However, papers from UFC were published in journals with lower impact factors (2.322). Conclusions: The present study shows where each of these universities stands and can be helpful in identifying potential collaborators in these areas of knowledge. Keywords: Citations, CNPq, h-index, Northeastern Brazil, UFC

  10. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna; Tramontano, Anna; Marcatili, Paolo

    2011-01-01

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.

  11. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna

    2011-11-10

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.

  12. Update History of This Database - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database (RED). 2015/12/21: Rice Expression Database English archive site is opened. 2000/10/1: Rice Expression Database ( http://red.dna.affrc.go.jp/RED/ ) is opened.

  13. GRIP Database original data - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available GRIP Database original data. Data name: GRIP Database original data. DOI: 10.18908/lsdba.nbdc01665-006. Description of data contents: GRIP Database original data; it consists of data tables and sequences. Data file name: gripdb_original_data.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/gripdb/LATEST/gri...

  14. Update History of This Database - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database (RPD). 2016/02/02: Rice Proteome Database English archive site is opened. 2003/01/07: Rice Proteome Database ( http://gene64.dna.affrc.go.jp/RPD/ ) is opened.

  15. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support enormous data volumes, beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of database work within the area of Brasilia, the capital of Brazil. The results point to which new database management technologies are currently the most relevant, as well as the central issues in this area.

  16. Emotion recognition from speech by combining databases and fusion of classifiers

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.J.M.; Wiggers, P.; Leeuwen, D.A. van

    2010-01-01

    We explore possibilities for enhancing the generality, portability, and robustness of emotion recognition systems by combining databases and by fusion of classifiers. In a first experiment, we investigate the performance of an emotion detection system tested on a certain database given that it is

  17. Analisis Performansi Database Ditinjau dari Aspek Optimasi Query dan Desain Model Data Relational pada DAS dan RAID

    OpenAIRE

    Lubis, Juanda Hakim

    2015-01-01

    The amount of data stored on magnetic disks (floppy disks, hard disks, etc.) increases by 100% each year in every department of every company, so an effort to keep a database system optimal is needed. Designing the database is the initial step when creating a system with optimal database performance. However, design alone is not enough to increase the performance of the database. One of the ways is to increase the speed of data transactions by increasing...

  18. Selective Document Retrieval from Encrypted Database

    NARCIS (Netherlands)

    Bösch, C.T.; Tang, Qiang; Hartel, Pieter H.; Jonker, Willem

    We propose the concept of selective document retrieval (SDR) from an encrypted database which allows a client to store encrypted data on a third-party server and perform efficient search remotely. We propose a new SDR scheme based on the recent advances in fully homomorphic encryption schemes. The

  19. The CMS ECAL database services for detector control and monitoring

    International Nuclear Information System (INIS)

    Arcidiacono, Roberta; Marone, Matteo; Badgett, William

    2010-01-01

    In this paper we give a description of the database services for the control and monitoring of the electromagnetic calorimeter of the CMS experiment at the LHC. After a general description of the software infrastructure, we present the organization of the tables in the database, which has been designed to simplify the development of software interfaces. This feature is achieved by including in the database the description of each relevant table. We also give estimates of the final size and performance of the system.

  20. A comparative study of six European databases of medically oriented Web resources.

    Science.gov (United States)

    Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes

    2005-10-01

    The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.

  1. P2-35: The KU Facial Expression Database: A Validated Database of Emotional and Conversational Expressions

    Directory of Open Access Journals (Sweden)

    Haenah Lee

    2012-10-01

    Full Text Available Facial expressions are one of the most important means of nonverbal communication transporting both emotional and conversational content. For investigating this large space of expressions we recently developed a large database containing dynamic emotional and conversational expressions in Germany (MPI facial expression database. As facial expressions crucially depend on the cultural context, however, a similar resource is needed for studies outside of Germany. Here, we introduce and validate a new, extensive Korean facial expression database containing dynamic emotional and conversational information. Ten individuals performed 62 expressions following a method-acting protocol, in which each person was asked to imagine themselves in one of 62 corresponding everyday scenarios and to react accordingly. To validate this database, we conducted two experiments: 20 participants were asked to name the appropriate expression for each of the 62 everyday scenarios shown as text. Ten additional participants were asked to name each of the 62 expression videos from 10 actors in addition to rating its naturalness. All naming answers were then rated as valid or invalid. Scenario validation yielded 89% valid answers showing that the scenarios are effective in eliciting appropriate expressions. Video sequences were judged as natural with an average of 66% valid answers. This is an excellent result considering that videos were seen without any conversational context and that 62 expressions were to be recognized. These results validate our Korean database and, as they also parallel the German validation results, will enable detailed cross-cultural comparisons of the complex space of emotional and conversational expressions.

  2. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  3. The GIOD Project-Globally Interconnected Object Databases

    CERN Document Server

    Bunn, J J; Newman, H B; Wilkinson, R P

    2001-01-01

    The GIOD (Globally Interconnected Object Databases) Project, a joint effort between Caltech and CERN funded by the Hewlett-Packard Corporation, has investigated the use of WAN-distributed object databases and mass storage systems for LHC data. A prototype small-scale LHC data analysis center has been constructed using computing resources at Caltech's Center for Advanced Computing Research (CACR). These resources include a 256-CPU HP Exemplar of ~4600 SPECfp95, a 600-TByte High Performance Storage System (HPSS), and local/wide area links based on OC3 ATM. Using the Exemplar, a large number of fully simulated CMS events were produced and used to populate an object database with a complete schema for raw, reconstructed, and analysis objects. The reconstruction software used for this task was based on early code developed in preparation for the current CMS reconstruction program, ORCA. (6 refs)

  4. NIRS database of the original research database

    International Nuclear Information System (INIS)

    Morita, Kyoko

    1991-01-01

    Recently, library staff arranged and compiled the original research papers that have been written by researchers in the 33 years since the National Institute of Radiological Sciences (NIRS) was established. This paper describes how the internal database of original research papers was created. It is a small example of a hand-made database, accumulated by staff members with little knowledge of computers or programming. (author)

  5. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    Science.gov (United States)

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  6. Plan for Developing a Materials Performance Database for the Texas Department of Transportation

    Science.gov (United States)

    1999-09-01

    The materials used within the Texas Department of Transportation (TxDOT) are undergoing a period of change. The purpose of this report is to develop the information necessary to develop (for TxDOT) a method or a database for monitoring the performanc...

  7. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  8. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
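The transaction handling described above can be illustrated with a minimal sketch. This is not the authors' code: it uses Python's sqlite3 module and an invented karst_feature table to show how a batch of inserts either commits as a whole or rolls back, which is how SQL transactions keep a feature database consistent:

```python
# Illustrative sketch of transactional integrity for a karst-feature table:
# either all changes in a batch are applied, or none are.
# Table and column names are assumptions, not the KFD schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE karst_feature ("
    "id INTEGER PRIMARY KEY, type TEXT, depth_m REAL CHECK (depth_m >= 0))"
)

good = [(1, "sinkhole", 12.5), (2, "spring", 3.0)]
bad = [(3, "cave", -5.0)]  # violates the CHECK constraint

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.executemany("INSERT INTO karst_feature VALUES (?, ?, ?)", good)
    with conn:
        conn.executemany("INSERT INTO karst_feature VALUES (?, ?, ?)", bad)
except sqlite3.IntegrityError:
    pass  # the bad batch was rolled back as a unit

count = conn.execute("SELECT COUNT(*) FROM karst_feature").fetchone()[0]
print(count)  # 2 -- only the valid batch was committed
```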

  9. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of most of these databases are their developers. There are several reasons for this: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for developers and complication for users. The complexity of the architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps avoid these drawbacks, but they could hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is the value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted in a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognised on login. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
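The upload format quoted above is concrete enough to parse. The following is a hypothetical reader for that semicolon-separated record layout (not the project's actual code); field names are our own labels:

```python
# Sketch of a parser for the described upload format:
# Name; Latitude(dd.dddddd); Longitude(ddd.dddddd); Station type;
# Parameter type; Parameter value; Date(yyyy-mm-dd)
import csv
import io

def parse_records(text):
    fields = ["name", "lat", "lon", "station_type",
              "param_type", "value", "date"]
    reader = csv.reader(io.StringIO(text), delimiter=";")
    out = []
    for row in reader:
        rec = dict(zip(fields, (cell.strip() for cell in row)))
        rec["lat"], rec["lon"] = float(rec["lat"]), float(rec["lon"])
        rec["value"] = float(rec["value"])
        out.append(rec)
    return out

sample = "Lake-01; 55.751244; 37.618423; lake; pH; 7.4; 2011-08-15"
recs = parse_records(sample)
print(recs[0]["station_type"], recs[0]["value"])  # lake 7.4
```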

  10. Statistical Measures Alone Cannot Determine Which Database (BNI, CINAHL, MEDLINE, or EMBASE Is the Most Useful for Searching Undergraduate Nursing Topics. A Review of: Stokes, P., Foster, A., & Urquhart, C. (2009. Beyond relevance and recall: Testing new user-centred measures of database performance. Health Information and Libraries Journal, 26(3, 220-231.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-03-01

    Full Text Available Objective – The research project sought to determine which of four databases was the most useful for searching undergraduate nursing topics. Design – Comparative database evaluation. Setting – Nursing and midwifery students at Homerton School of Health Studies (now part of Anglia Ruskin University), Cambridge, United Kingdom, in 2005-2006. Subjects – The subjects were four databases: British Nursing Index (BNI), CINAHL, MEDLINE, and EMBASE. Methods – This was a comparative study using title searches to compare BNI (British Nursing Index), CINAHL, MEDLINE, and EMBASE. According to the authors, this is the first study to compare BNI with other databases. BNI is a database produced by British libraries that indexes the nursing and midwifery literature. It covers over 240 British journals and includes references to articles from health sciences journals that are relevant to nurses and midwives (British Nursing Index, n.d.). The researchers performed keyword searches in the title field of the four databases for the dissertation topics of nine nursing and midwifery students enrolled in undergraduate dissertation modules. The lists of titles of journal articles on their topics were given to the students, who were asked to judge the relevancy of the citations. The title searches were evaluated in each of the databases using the following criteria: • precision (the number of relevant results obtained in the database for a search topic, divided by the total number of results obtained in the database search); • recall (the number of relevant results obtained in the database for a search topic, divided by the total number of relevant results obtained on that topic from all four database searches); • novelty (the number of relevant results that were unique in the database search, calculated as a percentage of the total number of relevant results found in the database); • originality (the number of unique relevant results obtained in the
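The measures defined in the bullet list can be expressed directly as set arithmetic. The sketch below is illustrative only; the database names and result sets are invented:

```python
# Precision, recall, and novelty as defined in the study, over sets of
# record identifiers. Example data is invented for illustration.

def precision(relevant, retrieved):
    """Relevant results divided by total results retrieved by the database."""
    return len(relevant) / len(retrieved) if retrieved else 0.0

def recall(relevant, all_relevant):
    """Relevant results divided by the pooled relevant results from all databases."""
    return len(relevant) / len(all_relevant) if all_relevant else 0.0

def novelty(relevant_here, relevant_elsewhere):
    """Share of this database's relevant results found nowhere else."""
    unique = relevant_here - relevant_elsewhere
    return len(unique) / len(relevant_here) if relevant_here else 0.0

bni, cinahl = {"a", "b", "c"}, {"b", "c", "d", "e"}
all_relevant = bni | cinahl                 # pooled across databases
print(round(recall(bni, all_relevant), 2))  # 0.6
print(round(novelty(bni, cinahl), 2))       # 0.33 (only "a" is unique to BNI)
```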

  11. Automatic pattern localization across layout database and photolithography mask

    Science.gov (United States)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, the complete metrology job must be created quickly and reliably. Combining software expertise in mask database processing with advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas of the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.

  12. Pertukaran Data Antar Database Dengan Menggunakan Teknologi API

    Directory of Open Access Journals (Sweden)

    Ahmad Hanafi

    2017-03-01

    Full Text Available Electronic data interchange between institutions or companies must be supported by storage media of appropriate capacity. MySQL is a database engine used to store data in the form of information, where the data can be utilized as needed. MySQL has the advantage of ease of use and the ability to work on different platforms. System requirements for reliability and multitasking make the database not only a data storage medium but also a potential means of data exchange. The Dropbox API is a good solution that can be utilized as a technology enabling the database to exchange data. The combination of the Dropbox API and a database can serve as a very cheap solution for small companies to implement data exchange, because it only requires a relatively small Internet connection.
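A minimal sketch of the exchange path described above: rows exported from a database table are serialized to CSV and pushed to Dropbox via the official SDK's files_upload call. The token, remote path, and table data are placeholders, and the upload step only runs if a token is configured:

```python
# Hedged sketch of database-to-Dropbox exchange. The rows, remote path,
# and DROPBOX_TOKEN environment variable are assumptions for illustration.
import csv
import io
import os

def rows_to_csv(header, rows):
    """Serialize query results to a CSV string suitable for exchange."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Stand-in for rows fetched from a MySQL table.
payload = rows_to_csv(["id", "name"], [(1, "alpha"), (2, "beta")])

token = os.environ.get("DROPBOX_TOKEN")  # assumed to be supplied via env
if token:
    import dropbox  # official Dropbox SDK
    dbx = dropbox.Dropbox(token)
    dbx.files_upload(payload.encode("utf-8"), "/exchange/customers.csv")
```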

  13. Inleiding database-systemen

    NARCIS (Netherlands)

    Pels, H.J.; Lans, van der R.F.; Pels, H.J.; Meersman, R.A.

    1993-01-01

    This article introduces the main concepts involved in databases and gives an overview of the objectives, functions, and components of database systems. Although the function of a database is intuitively fairly clear, it is nevertheless technologically complex

  14. [(123)I]FP-CIT ENC-DAT normal database

    DEFF Research Database (Denmark)

    Tossici-Bolt, Livia; Dickson, John C; Sera, Terez

    2017-01-01

    quantification methods, BRASS and Southampton, and explores the performance of the striatal phantom calibration in their harmonisation. RESULTS: BRASS and Southampton databases comprising 123 ENC-DAT subjects, from gamma cameras with parallel collimators, were reconstructed using filtered back projection (FBP...) and iterative reconstruction OSEM without corrections (IRNC) and compared against the recommended OSEM with corrections for attenuation and scatter and septal penetration (ACSC), before and after applying phantom calibration. Differences between databases were quantified using the percentage difference...-camera variability (-0.2%, p = 0.44). CONCLUSIONS: The ENC-DAT reference values are significantly dependent on the reconstruction and quantification methods, and phantom calibration, while reducing the major part of their differences, is unable to fully harmonize them. Clinical use of any normal database, therefore...

  15. [(123)I]FP-CIT ENC-DAT normal database

    DEFF Research Database (Denmark)

    Tossici-Bolt, Livia; Dickson, John C; Sera, Terez

    2017-01-01

    BACKGROUND: [(123)I]FP-CIT is a well-established radiotracer for the diagnosis of dopaminergic degenerative disorders. The European Normal Control Database of DaTSCAN (ENC-DAT) of healthy controls has provided age and gender-specific reference values for the [(123)I]FP-CIT specific binding ratio... quantification methods, BRASS and Southampton, and explores the performance of the striatal phantom calibration in their harmonisation. RESULTS: BRASS and Southampton databases comprising 123 ENC-DAT subjects, from gamma cameras with parallel collimators, were reconstructed using filtered back projection (FBP...) and iterative reconstruction OSEM without corrections (IRNC) and compared against the recommended OSEM with corrections for attenuation and scatter and septal penetration (ACSC), before and after applying phantom calibration. Differences between databases were quantified using the percentage difference...

  16. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available The biodiversity databases in Taiwan were dispersed among various institutions and colleges, with limited amounts of data, until 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of the Interior, was the most well-established biodiversity database in Taiwan. This database, however, mainly collected distribution data of terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases; TaiBIF was thereby able to co-operate with GBIF. The information of the Catalogue of Life, specimens, and alien species was integrated using the Darwin Core metadata standard, allowing the biodiversity information of Taiwan to connect with global databases.

  17. The relational database system of KM3NeT

    Science.gov (United States)

    Albert, Arnauld; Bozza, Cristiano

    2016-04-01

    The KM3NeT Collaboration is building a new generation of neutrino telescopes in the Mediterranean Sea. For these telescopes, a relational database has been designed and implemented for several purposes, such as the centralised management of accounts, the storage of all documentation about components, the status of the detector, and slow control and calibration data. It also contains information useful during the construction and data acquisition phases. Highlights of the database schema, storage and management are discussed, along with design choices that have an impact on performance. In most cases, the database is not accessed directly by applications, but via a custom-designed Web application server.
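    The mediated-access pattern described above, in which applications call an application server rather than opening database connections themselves, can be sketched as follows. This is a minimal illustration; the table, column, and identifier names are assumptions for the example, not KM3NeT's actual schema or API.

    ```python
    import sqlite3

    # Database owned by the application server; clients never touch it directly.
    _db = sqlite3.connect(":memory:")
    _db.execute("CREATE TABLE component (upid TEXT PRIMARY KEY, status TEXT)")
    _db.execute("INSERT INTO component VALUES ('DOM-001', 'deployed')")

    def get_component_status(upid: str) -> str:
        """Server-side endpoint: the server owns the SQL and the connection,
        so it can validate input, apply access control, and evolve the schema
        without breaking clients."""
        row = _db.execute(
            "SELECT status FROM component WHERE upid = ?", (upid,)
        ).fetchone()
        if row is None:
            raise KeyError(upid)
        return row[0]

    print(get_component_status("DOM-001"))  # prints "deployed"
    ```

    Routing all access through one server-side function like this is what lets the schema change without touching every client, at the cost of one extra network hop per query.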

  18. A Look at Librarianship through the Lens of an Academic Library Serials Review

    Directory of Open Access Journals (Sweden)

    Annette Day

    2009-07-01

    Full Text Available Talk to any librarian or library vendor and you’ll hear the same thing – the global economic downturn is hitting hard. Libraries everywhere are taking an axe to their collections: cutting book budgets, canceling serials subscriptions, allowing institutional memberships to lapse, and letting go of databases. Libraries and their stakeholders are having to [...

  19. Defense Against National Vulnerabilities in Public Data

    Science.gov (United States)

    2017-02-28

    ingestion of subscription-based precision data sources (Business Intelligence Databases, Monster, others)... flexible data architecture that allows for... components of program execution, from technical design to business and program management, to ensure timely and complete execution of contract... [Adapter listing: Satellite Imagery Adapters (Planet and Carto), Juno Services, Vulnerability Assessment Framework (computer model), Facebook Graph API]

  20. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    Science.gov (United States)

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare the relational and non-relational (NoSQL) database system approaches in order to store, recover, query and persist standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Comparable results available in the literature were also considered. Relational and non-relational NoSQL database systems both show almost linear algorithmic complexity in query execution, but with very different slopes, the relational slope being much steeper than the other two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on the particular situation and specific problem.
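    The relational vs. document-based contrast studied above can be illustrated with a minimal sketch. The schema, keys, and payload below are hypothetical, not those of the paper's ISO/EN 13606 test databases: a relational store decomposes each extract into indexed columns and reassembles it with SQL, while a document store keeps the extract whole and retrieves it by key.

    ```python
    import sqlite3

    # Relational approach: extract metadata decomposed into queryable columns.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE extract (id INTEGER PRIMARY KEY, patient TEXT, "
        "archetype TEXT, body TEXT)"
    )
    conn.execute(
        "INSERT INTO extract VALUES (1, 'p-001', 'blood_pressure', "
        "'<extract>...</extract>')"
    )
    rel_hit = conn.execute(
        "SELECT body FROM extract WHERE patient = ? AND archetype = ?",
        ("p-001", "blood_pressure"),
    ).fetchone()

    # Document approach: each extract stored whole and fetched by key,
    # as a document-based NoSQL store would (simulated here with a dict index).
    doc_store = {
        ("p-001", "blood_pressure"): {"id": 1, "body": "<extract>...</extract>"}
    }
    doc_hit = doc_store[("p-001", "blood_pressure")]

    print(rel_hit[0] == doc_hit["body"])  # True: both paths return the same extract
    ```

    The trade-off the study measures follows from this shape: the document store answers whole-extract lookups in one step, while the relational store pays for joins and decomposition but supports ad hoc queries over the decomposed columns.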