WorldWideScience

Sample records for elist database instance

  1. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database instance segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Instance Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to change the password for the SYSTEM account of the database instance after the instance is created, and it discusses the creation of a suitable database instance for ELIST by means other than the installation of the segment. The latter subject is covered in more depth than its introductory discussion in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. By the same token, the need to create a database instance for ELIST by means other than the installation of the segment is expected to be the exception, rather than the rule. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it

  2. Software test plan/description/report (STP/STD/STR) for the enhanced logistics intratheater support tool (ELIST) global data segment. Version 8.1.0.0, Database Instance Segment Version 8.1.0.0, ...[elided] and Reference Data Segment Version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.; Absil-Mills, M.; Jacobs, K.

    2002-01-01

This document is the Software Test Plan/Description/Report (STP/STD/STR) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It combines in one document the information normally presented separately in a Software Test Plan, a Software Test Description, and a Software Test Report; it also presents this information in one place for all the segments of the ELIST mission application. The primary purpose of this document is to show that ELIST has been tested by the developer and found, by that testing, to install, deinstall, and work properly. The information presented here is detailed enough to allow the reader to repeat the testing independently. The remainder of this document is organized as follows. Section 1.1 identifies the ELIST mission application. Section 2 is the list of all documents referenced in this document. Section 3, the Software Test Plan, outlines the testing methodology and scope, the latter by way of a concise summary of the tests performed. Section 4 presents detailed descriptions of the tests, along with the expected and observed results; that section therefore combines the information normally found in a Software Test Description and a Software Test Report. The remaining small sections present supplementary information. Throughout this document, the phrase ELIST IP refers to the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment

  3. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to extend the database storage available to Oracle if a datastore becomes filled during the use of ELIST. The latter subject builds on some of the actions that must be performed when installing this segment, as documented in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. The need to extend database storage likewise typically arises infrequently. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it

  4. Installation procedures (IP) for the enhanced logistics intratheater support tool (ELIST) global data segment version 8.1.0.0, database instance segment version 8.1.0.0, ...[elided] and reference data segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

This document is the Installation Procedures (IP) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It tells how to install and deinstall the seven segments of the mission application

  5. Introduction to the enhanced logistics intratheater support tool (ELIST) mission application and its segments : global data segment version 8.1.0.0, database instance segment version 8.1.0.0, ...[elided] and reference data segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    The ELIST mission application simulates and evaluates the feasibility of intratheater transportation logistics primarily for the theater portion of a course of action. It performs a discrete event simulation of a series of movement requirements over a constrained infrastructure network using specified transportation assets. ELIST addresses the question of whether transportation infrastructures and lift allocations are adequate to support the movements of specific force structures and supplies to their destinations on time
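The discrete event simulation described above can be sketched minimally with a time-ordered event queue: each movement requirement advances leg by leg over a network until it reaches its destination. The network, leg travel times, and route below are hypothetical illustrations, not ELIST data or its actual API.

```python
import heapq

# Minimal discrete-event simulation sketch (hypothetical data, not ELIST):
# movements advance leg by leg; the event queue is ordered by simulated time.
def simulate(movements, leg_time):
    # movements: list of (start_time, [node, node, ...]) routes
    events = []  # min-heap of (time, movement_id, leg_index)
    for mid, (t0, route) in enumerate(movements):
        heapq.heappush(events, (t0, mid, 0))
    arrivals = {}
    while events:
        t, mid, leg = heapq.heappop(events)
        route = movements[mid][1]
        if leg == len(route) - 1:        # reached final destination
            arrivals[mid] = t
        else:                            # schedule arrival at the next node
            travel = leg_time[(route[leg], route[leg + 1])]
            heapq.heappush(events, (t + travel, mid, leg + 1))
    return arrivals

leg_time = {("port", "staging"): 2.0, ("staging", "base"): 3.0}
print(simulate([(0.0, ["port", "staging", "base"])], leg_time))  # {0: 5.0}
```

A real tool like ELIST adds capacity constraints and asset contention on top of such a queue; this sketch shows only the event-scheduling core.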

  6. ELIST8: simulating military deployments in Java

    International Nuclear Information System (INIS)

    Van Groningen, C. N.; Blachowicz, D.; Braun, M. D.; Simunich, K. L.; Widing, M. A.

    2002-01-01

    Planning for the transportation of large amounts of equipment, troops, and supplies presents a complex problem. Many options, including modes of transportation, vehicles, facilities, routes, and timing, must be considered. The amount of data involved in generating and analyzing a course of action (e.g., detailed information about military units, logistical infrastructures, and vehicles) is enormous. Software tools are critical in defining and analyzing these plans. Argonne National Laboratory has developed ELIST (Enhanced Logistics Intra-theater Support Tool), a simulation-based decision support system, to assist military planners in determining the logistical feasibility of an intra-theater course of action. The current version of ELIST (v.8) contains a discrete event simulation developed using the Java programming language. Argonne selected Java because of its object-oriented framework, which has greatly facilitated entity and process development within the simulation, and because it fulfills a primary requirement for multi-platform execution. This paper describes the model, including setup and analysis, a high-level architectural design, and an evaluation of Java

  7. ELIST v.8.1 : User's Manual.; TOPICAL

    International Nuclear Information System (INIS)

    Van Groningen, Blachowicz D.; Duffy Braun, M.; Clemmons, M. A.; Simunich, K. L.; Timmerman, D.; VanderZee, H.; Widing, M. A.

    2002-01-01

This user's manual documents the capabilities and functions of the Enhanced Logistics Intratheater Support Tool (ELIST) software application. Steps for using the Expanded Time Phase Force Deployment Data (ETPFDD) Editor (ETEdit), which is included in ELIST but is also a stand-alone software application, are contained in a separate document. ELIST is a discrete event simulation tool developed for use by military planners in both the continental United States (CONUS) and outside the continental United States (OCONUS). It simulates the reception, staging, onward movement, and integration (RSOI) of military personnel and equipment from all services within, between, or among countries. ELIST not only runs a simulation, but it also provides the capability to edit asset sets, networks, and scenarios. These capabilities show how various changes can affect the outcome of a simulation. Further, ELIST incorporates topographic maps on which the network is displayed. The system also allows planners to simulate scenarios at the vehicle level. Prior to the implementation of ELIST, planners were able to simulate military deployment from the point of departure (origin) to the point of arrival in the theater (the port of debarkation). Since the development and implementation of ELIST, however, planners can simulate military deployment from the point of departure (airport or seaport), through the staging area, through the theater-staging base, to the final destination. A typical scenario might be set up to transport personnel and cargo to a location by aircraft or ship. Upon arrival at the airport or seaport, the cargo would be sent to a staging area where it would be set up and transferred to a vehicle, or in the case of petroleum, oil, and lubricants (POL), a pipeline. The vehicle then would transport the cargo to the theater-staging base where it would "marry up" with the main body of personnel. From this point, the cargo and the main body would be transported to the final

  8. Multiple Instance Fuzzy Inference

    Science.gov (United States)

    2015-12-02

and learn the fuzzy inference system's parameters [24, 25]. In this latter technique, supervised and unsupervised learning algorithms are devised to… algorithm (unsupervised learning) can be used to identify local contexts of the input space, and a linear classifier (supervised learning) can be used… instance-level (patch-level) labels and would require the image to be correctly segmented and labeled prior to learning.

  9. Instance-Based Ontology Matching by Instance Enrichment

    NARCIS (Netherlands)

    Schopman, B.; Wang, S.; Isaac, A.H.J.C.A.; Schlobach, K.S.

    2012-01-01

    The ontology matching (OM) problem is an important barrier to achieve true Semantic Interoperability. Instance-based ontology matching (IBOM) uses the extension of concepts, the instances directly associated with a concept, to determine whether a pair of concepts is related or not. While IBOM has

  10. Instance annotation for multi-instance multi-label learning

    Science.gov (United States)

    F. Briggs; X.Z. Fern; R. Raich; Q. Lou

    2013-01-01

    Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen...

  11. Multiple-instance learning with pairwise instance similarity

    Directory of Open Access Journals (Sweden)

    Yuan Liming

    2014-09-01

Multiple-Instance Learning (MIL) has attracted much attention from the machine learning community in recent years and many real-world applications have been successfully formulated as MIL problems. Over the past few years, several Instance Selection-based MIL (ISMIL) algorithms have been presented by using the concept of the embedding space. Although they delivered very promising performance, they often require long computation times for instance selection, leading to a low efficiency of the whole learning process. In this paper, we propose a simple and efficient ISMIL algorithm based on the similarity of pairwise instances within a bag. The basic idea is selecting from every training bag a pair of the most similar instances as instance prototypes and then mapping training bags into the embedding space that is constructed from all the instance prototypes. Thus, the MIL problem can be solved with standard supervised learning techniques, such as support vector machines. Experiments show that the proposed algorithm is more efficient than its competitors and highly comparable with them in terms of classification accuracy. Moreover, the testing of noise sensitivity demonstrates that our MIL algorithm is very robust to labeling noise
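The prototype-selection and embedding steps described in this abstract can be sketched as follows; the Euclidean distance measure and the toy bags are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def select_prototypes(bag):
    """Pick the pair of most similar (closest) instances in a bag."""
    best, pair = np.inf, (0, 0)
    for i in range(len(bag)):
        for j in range(i + 1, len(bag)):
            d = np.linalg.norm(bag[i] - bag[j])
            if d < best:
                best, pair = d, (i, j)
    return [bag[pair[0]], bag[pair[1]]]

def embed(bag, prototypes):
    """Map a bag to a feature vector: min distance to each prototype."""
    return np.array([min(np.linalg.norm(x - p) for x in bag) for p in prototypes])

# Toy data: two bags of 2-D instances (illustrative only).
bags = [np.array([[0., 0.], [0.1, 0.], [5., 5.]]),
        np.array([[4., 4.], [4.1, 4.], [0., 1.]])]
prototypes = [p for b in bags for p in select_prototypes(b)]
X = np.vstack([embed(b, prototypes) for b in bags])  # one row per bag
print(X.shape)  # (2, 4)
```

Once each bag is a single row of `X`, any standard supervised classifier (e.g. an SVM) can be trained on it, which is the point of the embedding.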

  12. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  13. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014.
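The core ISAC idea, clustering instances by their features and applying a parameter set tuned per cluster, can be sketched minimally; the feature space, cluster centers, and parameter values here are hypothetical illustrations, not ISAC's actual configuration.

```python
import numpy as np

# Sketch of instance-specific configuration: a new instance is assigned to
# the nearest cluster center in feature space and solved with that cluster's
# tuned parameters. Centers and parameter sets are hypothetical.
def configure(features, centers, params):
    i = int(np.argmin(np.linalg.norm(centers - features, axis=1)))
    return params[i]

centers = np.array([[0.1, 0.2],     # e.g. "small, loosely constrained"
                    [0.9, 0.8]])    # e.g. "large, tightly constrained"
params = [{"restarts": 5}, {"restarts": 50}]
print(configure(np.array([0.85, 0.75]), centers, params))  # {'restarts': 50}
```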

  14. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  15. Dissimilarity-Based Multiple Instance Learning

    NARCIS (Netherlands)

    Cheplygina, V.

    2015-01-01

    Multiple instance learning (MIL) is an extension of supervised learning where the objects are represented by sets (bags) of feature vectors (instances) rather than individual feature vectors. For example, an image can be represented by a bag of instances, where each instance is a patch in that

  16. Locality in Generic Instance Search from One Example

    NARCIS (Netherlands)

    Tao, R.; Gavves, E.; Snoek, C.G.M.; Smeulders, A.W.M.

    2014-01-01

    This paper aims for generic instance search from a single example. Where the state-of-the-art relies on global image representation for the search, we proceed by including locality at all steps of the method. As the first novelty, we consider many boxes per database image as candidate targets to

  17. Multi-instance learning based on instance consistency for image retrieval

    Science.gov (United States)

    Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie

    2017-07-01

Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches cannot select positive instances correctly from positive bags, which may result in a low accuracy. In this paper, we propose a new image retrieval approach called multiple instance learning based on instance-consistency (MILIC) to mitigate this issue. First, we select potential positive instances effectively in each positive bag by ranking instance-consistency (IC) values of instances. Then, we design a feature representation scheme, which can represent the relationship among bags and instances, based on potential positive instances to convert a bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.

  18. Fuzzy Clustering of Multiple Instance Data

    Science.gov (United States)

    2015-11-30

In this paper, we focus on unsupervised learning for multiple instance data. Our approach, called Fuzzy Clustering of Multiple Instance…

  19. Instance-Based Generative Biological Shape Modeling.

    Science.gov (United States)

    Peng, Tao; Wang, Wei; Rohde, Gustavo K; Murphy, Robert F

    2009-01-01

    Biological shape modeling is an essential task that is required for systems biology efforts to simulate complex cell behaviors. Statistical learning methods have been used to build generative shape models based on reconstructive shape parameters extracted from microscope image collections. However, such parametric modeling approaches are usually limited to simple shapes and easily-modeled parameter distributions. Moreover, to maximize the reconstruction accuracy, significant effort is required to design models for specific datasets or patterns. We have therefore developed an instance-based approach to model biological shapes within a shape space built upon diffeomorphic measurement. We also designed a recursive interpolation algorithm to probabilistically synthesize new shape instances using the shape space model and the original instances. The method is quite generalizable and therefore can be applied to most nuclear, cell and protein object shapes, in both 2D and 3D.

  20. Learning concept mappings from instance similarity

    NARCIS (Netherlands)

    Wang, S.; Englebienne, G.; Schlobach, S.

    2008-01-01

    Finding mappings between compatible ontologies is an important but difficult open problem. Instance-based methods for solving this problem have the advantage of focusing on the most active parts of the ontologies and reflect concept semantics as they are actually being used. However such methods

  1. Dissimilarity-based multiple instance learning

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Loog, Marco; Tax, David M. J.

    2010-01-01

In this paper, we propose to solve multiple instance learning problems using a dissimilarity representation of the objects. Once the dissimilarity space has been constructed, the problem is turned into a standard supervised learning problem that can be solved with a general purpose supervised cla… between distributions of within- and between-set point distances, thereby taking relations within and between sets into account. Experiments on five publicly available data sets show competitive performance in terms of classification accuracy compared to previously published results.
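A dissimilarity space of this kind can be sketched as follows, using the mean minimum instance distance as the bag dissimilarity; this particular measure and the toy bags are illustrative assumptions, not necessarily the measure used in the paper.

```python
import numpy as np

def bag_dissimilarity(A, B):
    """Mean over instances of A of the distance to the closest instance in B."""
    return float(np.mean([min(np.linalg.norm(a - b) for b in B) for a in A]))

def dissimilarity_space(bags, prototypes):
    """Each bag becomes a vector of dissimilarities to the prototype bags."""
    return np.array([[bag_dissimilarity(b, p) for p in prototypes] for b in bags])

# Toy data: two bags of 2-D instances; each bag is then represented by its
# dissimilarity to every training bag, and a standard classifier can be
# trained on the resulting rows.
bags = [np.array([[0., 0.], [1., 0.]]), np.array([[5., 5.], [6., 5.]])]
D = dissimilarity_space(bags, bags)
print(D.shape)  # (2, 2); diagonal is zero (each bag vs. itself)
```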

  2. Basics of XBRL Instance for Financial Reporting

    Directory of Open Access Journals (Sweden)

    Mihaela Enachi

    2011-10-01

The development of XBRL (eXtensible Business Reporting Language) for financial reporting has significantly changed the way in which financial statements are presented to different users and, implicitly, the quantity and quality of information provided through such a modern format. Following a standard structure, but adaptable to the regulations of different countries or regions of the world, we can communicate and process financial accounting information more efficiently and effectively. This paper tries to clarify the manner of preparation and presentation of the financial statements when using XBRL as a reporting tool. Keywords: XML, XBRL, financial reporting, specification, taxonomy, instance

  3. A critique of medicalisation: three instances.

    Science.gov (United States)

    Ryang, Sonia

    2017-12-01

By briefly exploring three different examples where the existence of mental illness and developmental delay has been presumed, this paper sheds light on what Foucault calls the emergence of a regime of truth, i.e. the way something that does not exist is made to exist through the construction of a system of truth around it. The first example concerns the direct marketing of pharmaceutical products to consumers in the US, the second the use of psychology in semi-post-Cold War Korea, and the third the persisting authority of psychology in the treatment of the developmentally delayed. While these instances are not innately connected, looking at these as part of the process by which the authoritative knowledge is established will help us understand, albeit partially, the mechanism by which mental illness penetrates our lives as truth, and how this regime of truth is supported by the authority of psychology, psychiatry and psychoanalysis, what Foucault calls the 'psy-function,' reinforcing the medicalisation of our lives.

  4. Some insights on hard quadratic assignment problem instances

    Science.gov (United States)

    Hussin, Mohamed Saifullah

    2017-11-01

Since the formal introduction of metaheuristics, a huge number of Quadratic Assignment Problem (QAP) instances have been introduced. Those instances, however, are loosely structured, which makes it difficult to perform any systematic analysis. The QAPLIB, for example, is a library that contains a huge number of QAP benchmark instances of different size and structure, but with very limited availability for every instance type. This prevents researchers from performing organized studies on those instances, such as parameter tuning and testing. In this paper, we will discuss several hard instances that have been introduced over the years, and algorithms that have been used for solving them.
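For context, the objective that such QAP instances encode can be evaluated in a few lines: given a flow matrix F between facilities and a distance matrix D between locations, the cost of assigning facility i to location p[i] is the sum over all pairs of F[i,j] * D[p[i],p[j]]. The matrices below are toy data, not a QAPLIB instance.

```python
import numpy as np

def qap_cost(F, D, p):
    """Cost of a QAP assignment p: sum_ij F[i,j] * D[p[i],p[j]]."""
    p = np.asarray(p)
    # D[np.ix_(p, p)] permutes rows and columns of D according to p.
    return int((F * D[np.ix_(p, p)]).sum())

F = np.array([[0, 3], [3, 0]])  # toy flow between 2 facilities
D = np.array([[0, 2], [2, 0]])  # toy distance between 2 locations
print(qap_cost(F, D, [0, 1]))   # 12
```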

  5. Rule Acquisition Design Strategy Variables: Degree of Instance Divergence, Sequence, and Instance Analysis.

    Science.gov (United States)

    Tennyson, Robert D.; Tennyson, Carol L.

    Three design strategies directly related to the development of instructional materials for rule learning were investigated. In the first of two experiments using both male and female tenth grade students, the degree of divergence between instances showed that contrasting irrelevant features resulted in better performance than matching irrelevant…

  6. Multiple-instance learning as a classifier combining problem

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M. J.; Duin, Robert P. W.

    2013-01-01

    In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the ass......In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL...... with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification...

  7. Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems.

    Science.gov (United States)

    Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique

    2016-01-01

Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contributes to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems.
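The prediction step described above, choosing a heuristic from the features of the instance being solved, can be sketched as a nearest-neighbor lookup; the feature names, heuristic names, and numbers are hypothetical illustrations, not the paper's actual model.

```python
import numpy as np

# Sketch of feature-based heuristic selection: reuse the heuristic that
# performed best on the nearest previously explored instance.
# Features and heuristic names are hypothetical.
def select_heuristic(features, known_features, best_heuristic):
    dists = np.linalg.norm(known_features - features, axis=1)
    return best_heuristic[int(np.argmin(dists))]

known = np.array([[0.2, 0.9],    # e.g. (constraint density, tightness)
                  [0.8, 0.1]])
best = ["min-domain", "max-degree"]
print(select_heuristic(np.array([0.25, 0.85]), known, best))  # "min-domain"
```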

  9. Equivalent instances of the simple plant location problem

    NARCIS (Netherlands)

    AlMohammad, Bader F.; Goldengorin, Boris; Ghosh, Diptesh; Sierksma, Gerard

    2000-01-01

    In this paper we deal with a pseudo-Boolean representation of the simple plant location problem. We define instances of this problem that are equivalent, in the sense that each feasible solution has the same goal function value in all such instances. We further define a collection of polytopes whose

  10. Multi-instance dictionary learning via multivariate performance measure optimization

    KAUST Repository

    Wang, Jim Jing-Yan

    2016-12-29

The multi-instance dictionary plays a critical role in multi-instance data representation. Meanwhile, different multi-instance learning applications are evaluated by specific multivariate performance measures. For example, multi-instance ranking reports the precision and recall. It is not difficult to see that to obtain different optimal performance measures, different dictionaries are needed. This observation motivates us to learn performance-optimal dictionaries for this problem. In this paper, we propose a novel joint framework for learning the multi-instance dictionary and the classifier to optimize a given multivariate performance measure, such as the F1 score and precision at rank k. We propose to represent the bags as bag-level features via the bag-instance similarity, and learn a classifier in the bag-level feature space to optimize the given performance measure. We propose to minimize the upper bound of a multivariate loss corresponding to the performance measure, the complexity of the classifier, and the complexity of the dictionary, simultaneously, with regard to both the dictionary and the classifier parameters. In this way, the dictionary learning is regularized by the performance optimization, and a performance-optimal dictionary is obtained. We develop an iterative algorithm to solve this minimization problem efficiently using a cutting-plane algorithm and a coordinate descent method. Experiments on multi-instance benchmark data sets show its advantage over both traditional multi-instance learning and performance optimization methods.
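The bag-level representation via bag-instance similarity can be sketched as follows; the Gaussian similarity and max pooling used here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bag_feature(bag, dictionary, gamma=1.0):
    """Bag-level feature: max Gaussian similarity of the bag's instances
    to each dictionary element (a simplified sketch of the representation)."""
    feats = []
    for d in dictionary:
        sims = np.exp(-gamma * np.sum((bag - d) ** 2, axis=1))
        feats.append(sims.max())    # best-matching instance per dictionary atom
    return np.array(feats)

# Toy 2-D data: a dictionary of two atoms and one bag with two instances,
# one near each atom; the bag-level feature has one entry per atom.
dictionary = np.array([[0., 0.], [5., 5.]])
bag = np.array([[0.1, 0.], [4.9, 5.2]])
f = bag_feature(bag, dictionary)
print(f.shape)  # (2,)
```

A classifier optimizing the chosen performance measure would then be trained on these bag-level vectors.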

  11. Memory for Instances and Categories in Children and Adults

    Science.gov (United States)

    Tighe, Thomas J.; And Others

    1975-01-01

    Two studies of 7-year-olds and college students tested the hypothesis of a developmental difference in the degree to which subjects' memory performance was controlled by categorical properties vs. specific instance properties of test items. (GO)

  12. Multimodality and children's participation in classrooms: Instances of ...

    African Journals Online (AJOL)

    It outlines the theoretical framework supporting the pedagogical approach, that of multimodal social semiotics, known more widely as “multimodality”, and discusses three instances of children's multimodal practice, in Grades 1 and 2, 7 and 10 respectively. It sums up the role played by multimodality in participation, showing ...

  13. First observed instance of polygyny in Flammulated Owls

    Science.gov (United States)

    Brian D. Linkhart; Erin M. Evers; Julie D. Megler; Eric C. Palm; Catherine M. Salipante; Scott W. Yanco

    2008-01-01

    We document the first observed instance of polygyny in Flammulated Owls (Otus flammeolus) and the first among insectivorous raptors. Chronologies of the male's two nests, which were 510 m apart, were separated by nearly 2 weeks. Each brood initially consisted of three owlets, similar to the mean brood size in monogamous pairs. The male delivered...

  14. Time and activity sequence prediction of business process instances

    DEFF Research Database (Denmark)

    Polato, Mirko; Sperduti, Alessandro; Burattin, Andrea

    2018-01-01

The ability to know in advance the trend of running process instances, with respect to different features, such as the expected completion time, would allow business managers to promptly counteract undesired situations, in order to prevent losses. Therefore, the ability to accurately predict...

  15. Instance-Wise Denoising Autoencoder for High Dimensional Data

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    Denoising Autoencoder (DAE) is one of the most popular approaches in recent neural network research, with significant reported successes. To be specific, a DAE randomly corrupts some features of the data to zero so as to utilize co-occurrence information while avoiding overfitting. However, existing DAE approaches do not fare well on sparse and high dimensional data. In this paper, we present a Denoising Autoencoder labeled here as Instance-Wise Denoising Autoencoder (IDA), which is designed to work with high dimensional and sparse data by utilizing the instance-wise co-occurrence relation instead of the feature-wise one. IDA corrupts the data according to the following rule: if an instance (a vector with nonzero features) is selected, it is forced to become a zero vector. To avoid serious information loss in the event that too many instances are discarded, an ensemble of multiple independent autoencoders built on different corrupted versions of the data is considered. Extensive experimental results on high dimensional and sparse text data show the superiority of IDA in efficiency and effectiveness. IDA is also evaluated in the heterogeneous transfer learning setting and in cross-modal retrieval to study its generality on heterogeneous feature representation.
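
    The instance-wise corruption rule described above can be sketched in a few lines; the function name, drop rate and toy data are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def instance_wise_corrupt(X, drop_rate=0.3, rng=rng):
    """Corrupt data instance-wise: each selected instance (row) is
    forced to become an all-zero vector, rather than zeroing individual
    features as in a standard denoising autoencoder."""
    mask = rng.random(X.shape[0]) < drop_rate
    Xc = X.copy()
    Xc[mask] = 0.0
    return Xc

X = np.ones((10, 5))
Xc = instance_wise_corrupt(X, drop_rate=0.5)
```

    An ensemble of autoencoders would each be trained on an independently corrupted copy of the data.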

  16. Complexity results for restricted instances of a paint shop problem

    NARCIS (Netherlands)

    Bonsma, P.S.

    2003-01-01

    We study the following problem: an instance is a word with every letter occurring twice. A solution is a 2-coloring of the letters in the word such that the two occurrences of every letter are colored with different colors. The goal is to minimize the number of color changes between adjacent letters.
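
    A tiny exhaustive-search sketch may help make the objective above concrete; it enumerates all valid 2-colorings, so it is only feasible for very small words, and the names and approach are illustrative rather than the paper's algorithms:

```python
from itertools import product

def min_color_changes(word):
    """For a word in which every letter occurs exactly twice, try all
    colorings that give the two occurrences of each letter different
    colors, and return the minimum number of color changes between
    adjacent positions (exponential brute force, tiny instances only)."""
    letters = sorted(set(word))
    pos = {c: [i for i, x in enumerate(word) if x == c] for c in letters}
    best = len(word)
    for choice in product((0, 1), repeat=len(letters)):
        coloring = [0] * len(word)
        for c, b in zip(letters, choice):
            i, j = pos[c]
            coloring[i], coloring[j] = b, 1 - b
        changes = sum(coloring[k] != coloring[k + 1]
                      for k in range(len(word) - 1))
        best = min(best, changes)
    return best
```

    For example, "abab" admits the coloring 0-0-1-1 with a single color change, while "aabb" needs at least two.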

  17. Learning Multi-Instance Deep Discriminative Patterns for Image Classification.

    Science.gov (United States)

    Tang, Peng; Wang, Xinggang; Feng, Bin; Liu, Wenyu

    2017-07-01

    Finding an effective and efficient representation is very important for image classification. The most common approach is to extract a set of local descriptors and then aggregate them into a high-dimensional, more semantic feature vector, as in unsupervised bag-of-features and weakly supervised part-based models. The latter is usually more discriminative than the former due to the use of information from image labels. In this paper, we propose a weakly supervised strategy that uses multi-instance learning (MIL) to learn discriminative patterns for image representation. Specifically, we extend traditional multi-instance methods to explicitly learn more than one pattern in the positive class, and to find the "most positive" instance for each pattern. Furthermore, as the positiveness of an instance is treated as a continuous variable, we can use stochastic gradient descent to maximize the margin between different patterns while respecting MIL constraints. To make the learned patterns more discriminative, local descriptors extracted by deep convolutional neural networks are chosen instead of hand-crafted descriptors. Experimental results are reported on several widely used benchmarks (Action 40, Caltech 101, Scene 15, MIT-indoor, SUN 397), showing that our method can achieve remarkable performance.

  18. TNO at TRECVID 2013: Multimedia Event Detection and Instance Search

    NARCIS (Netherlands)

    Bouma, H.; Azzopardi, G.; Spitters, M.M.; Wit, J.J. de; Versloot, C.A.; Zon, R.W.L. van der; Eendebak, P.T.; Baan, J.; Hove, R.J.M. ten; Eekeren, A.W.M. van; Haar, F.B. ter; Hollander, R.J.M. den; Huis, R.J. van; Boer, M.H.T. de; Antwerpen, G. van; Broekhuijsen, B.J.; Daniele, L.M.; Brandt, P.; Schavemaker, J.G.M.; Kraaij, W.; Schutte, K.

    2013-01-01

    We describe the TNO system and the evaluation results for TRECVID 2013 Multimedia Event Detection (MED) and instance search (INS) tasks. The MED system consists of a bag-of-word (BOW) approach with spatial tiling that uses low-level static and dynamic visual features, an audio feature and high-level

  19. First instance competence of the Higher Administrative Court

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    (1) An interlocutory judgement can determine the admissibility of a legal action, also with regard to single procedural prerequisites (following BVerwG decision 14, 273). (2) The first instance competence for disputes about the dismantling of a decommissioned nuclear installation lies with the administrative courts and not with the higher administrative courts. Federal Administrative Court, decision of May 19, 1988 - 7 C 43.88 - (VGH Munich). (orig.) [de

  20. Domain Adaptation for Machine Translation with Instance Selection

    Directory of Open Access Journals (Sweden)

    Biçici Ergun

    2015-04-01

    Domain adaptation for machine translation (MT) can be achieved by selecting training instances close to the test set from a larger set of instances. We consider 7 different domain adaptation strategies and answer 7 research questions, which give us a recipe for domain adaptation in MT. We perform English to German statistical MT (SMT) experiments in a setting where test and training sentences can come from different corpora, and one of our goals is to learn the parameters of the sampling process. Domain adaptation with training instance selection can obtain a 22% increase in target 2-gram recall and can gain up to 3.55 BLEU points compared with random selection. Domain adaptation with the feature decay algorithm (FDA) not only achieves the highest target 2-gram recall and BLEU performance but also perfectly learns the test sample distribution parameter, with correlation 0.99. Moses SMT systems built with 10K training sentences selected by FDA are able to obtain F1 results as good as the baselines that use up to 2M sentences. Moses SMT systems built with 50K training sentences selected by FDA are able to obtain F1 results one point better than the baselines.
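
    A heavily simplified sketch of feature-decay-style instance selection as described above, using unigram features only; the function, its parameters and the toy sentences are illustrative assumptions, not Biçici's FDA implementation:

```python
def fda_select(candidates, test_words, k, decay=0.5):
    """Greedy training-instance selection in the spirit of a feature
    decay algorithm: score each candidate sentence by the current
    weights of the test-set features (here: words) it covers, pick the
    highest-scoring sentence, then decay the weights of the features it
    contained so that later picks favour features not yet covered."""
    weights = {w: 1.0 for w in test_words}
    pool = list(candidates)
    selected = []
    for _ in range(min(k, len(pool))):
        scores = [sum(weights.get(w, 0.0) for w in set(s.split()))
                  for s in pool]
        best = max(range(len(pool)), key=scores.__getitem__)
        sentence = pool.pop(best)
        selected.append(sentence)
        for w in set(sentence.split()):
            if w in weights:
                weights[w] *= decay
    return selected

picked = fda_select(
    ["the cat sat", "the dog ran", "a cat sat"],
    {"the", "cat", "sat"}, k=2)
```

    The decay step is what spreads the selection across the test set's vocabulary instead of repeatedly rewarding the same frequent features.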

  1. Learning Bayesian networks from survival data using weighting censored instances.

    Science.gov (United States)

    Stajduhar, Ivan; Dalbelo-Basić, Bojana

    2010-08-01

    Different survival data pre-processing procedures and adaptations of existing machine-learning techniques have been successfully applied to numerous fields in clinical medicine. Zupan et al. (2000) proposed handling censored survival data by assigning distributions of outcomes to shortly observed censored instances. In this paper, we applied their learning technique to two well-known procedures for learning Bayesian networks: a search-and-score hill-climbing algorithm and a constraint-based conditional independence algorithm. The method was thoroughly tested in a simulation study and on the publicly available clinical dataset GBSG2. We compared it to learning Bayesian networks by treating censored instances as event-free and to Cox regression. The results on model performance suggest that the weighting approach performs best when dealing with intermediate censoring. There is no significant difference between the model structures learnt using either the weighting approach or by treating censored instances as event-free, regardless of censoring. Copyright 2010 Elsevier Inc. All rights reserved.
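
    The Zupan-style weighting of censored instances mentioned above can be sketched with a small Kaplan-Meier estimator; this is an illustrative reconstruction under stated assumptions, not the authors' code:

```python
def km_survival(times, events):
    """Kaplan-Meier estimator: returns S(t), the estimated probability
    of surviving beyond time t, from (time, event-indicator) pairs."""
    event_times = sorted({t for t, e in zip(times, events) if e})
    steps, s = [], 1.0
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ei and ti == t)
        s *= 1.0 - deaths / at_risk
        steps.append((t, s))

    def S(t):
        out = 1.0
        for step_t, step_s in steps:
            if step_t <= t:
                out = step_s
        return out
    return S

def event_weight(t_censored, horizon, S):
    """Weight with which an instance censored at t_censored is counted
    as an event before the horizon: the conditional probability of the
    event by the horizon, given survival up to the censoring time."""
    return 1.0 - S(horizon) / S(t_censored)

S = km_survival([1, 2, 3, 4], [1, 1, 1, 1])
w = event_weight(1, 3, S)
```

    A censored instance would then enter training twice: as an event with weight w and as event-free with weight 1 - w.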

  2. Instance-based concept learning from multiclass DNA microarray data

    Directory of Open Access Journals (Sweden)

    Dubitzky Werner

    2006-02-01

    Background: Various statistical and machine learning methods have been successfully applied to the classification of DNA microarray data. Simple instance-based classifiers such as nearest neighbor (NN) approaches perform remarkably well in comparison to more complex models, and are currently experiencing a renaissance in the analysis of data sets from biology and biotechnology. While binary classification of microarray data has been extensively investigated, studies involving multiclass data are rare. The question remains open whether there exists a significant difference in performance between NN approaches and more complex multiclass methods. Comparative studies in this field commonly assess different models based on their classification accuracy only; however, this approach lacks the rigor needed to draw reliable conclusions and is inadequate for testing the null hypothesis of equal performance. Comparing novel classification models to existing approaches requires focusing on the significance of differences in performance. Results: We investigated the performance of instance-based classifiers, including a NN classifier able to assign a degree of class membership to each sample. This model alleviates a major problem of conventional instance-based learners, namely the lack of confidence values for predictions. The model translates the distances to the nearest neighbors into 'confidence scores'; the higher the confidence score, the closer the considered instance is to a pre-defined class. We applied the models to three real gene expression data sets and compared them with state-of-the-art methods for classifying microarray data of multiple classes, assessing performance using a statistical significance test that took into account the data resampling strategy. Simple NN classifiers performed as well as, or significantly better than, their more intricate competitors. Conclusion: Given its highly intuitive underlying principles – simplicity
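
    A minimal sketch of a nearest-neighbor classifier that assigns degrees of class membership by translating neighbor distances into normalised confidence scores, as described above; the inverse-distance weighting, the names and the toy data are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def nn_membership(x, X_train, y_train, k=3):
    """Distances to the k nearest training instances are turned into
    inverse-distance weights and normalised per class, so a higher
    score means the query is closer to that class."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-12)      # closer neighbours weigh more
    scores = {}
    for i, wi in zip(nearest, w):
        scores[y_train[i]] = scores.get(y_train[i], 0.0) + wi
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

conf = nn_membership(
    np.array([0.0, 0.0]),
    np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.0]]),
    ["healthy", "tumour-free", "tumour"])
```

    The returned dictionary sums to 1 and can be read directly as per-class confidence.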

  3. Retrieving clinically relevant diabetic retinopathy images using a multi-class multiple-instance framework

    Science.gov (United States)

    Chandakkar, Parag S.; Venkatesan, Ragav; Li, Baoxin

    2013-02-01

    Diabetic retinopathy (DR) is a vision-threatening complication of diabetes mellitus, a medical condition that is rising globally. Unfortunately, many patients are unaware of this complication because of the absence of symptoms. Regular screening for DR is necessary to detect the condition for timely treatment. Content-based image retrieval, using archived and diagnosed fundus (retinal) camera DR images, can improve the screening efficiency of DR. This content-based image retrieval study focuses on two DR clinical findings, microaneurysms and neovascularization, which are clinical signs of non-proliferative and proliferative diabetic retinopathy. The authors propose a multi-class multiple-instance image retrieval framework which deploys a modified color correlogram and statistics of steerable Gaussian filter responses for retrieving clinically relevant images from a database of DR fundus images.

  4. Relational Database and Retrieval

    African Journals Online (AJOL)

    Computer Aided Design for Soil Classification. Relational Database and Retrieval. Techniques ... also presents algorithms showing the procedure for generating various soil classifications, retrieval techniques for ... In engineering discipline, for instance, design choices are a compromise,'shaped by many competing factors.

  5. Cameroonian litigation before international sporting bodies

    Directory of Open Access Journals (Sweden)

    Dikoume François Claude

    2016-01-01

    Since then, actors in sport have made use of international avenues of appeal, either before the international sporting federations or, above all, before the TAS, which handles disputes across all sporting disciplines. The aim here is to draw up a descriptive, non-exhaustive, case-by-case inventory of Cameroonian contentious claims brought before the various international sporting bodies; this will make it possible to examine the procedural spirit and the technical quality of their legal claims in sporting matters.

  6. Multiple-Instance Learning for Medical Image and Video Analysis.

    Science.gov (United States)

    Quellec, Gwenole; Cazuguel, Guy; Cochener, Beatrice; Lamard, Mathieu

    2017-01-01

    Multiple-instance learning (MIL) is a recent machine-learning paradigm that is particularly well suited to medical image and video analysis (MIVA) tasks. Based solely on class labels assigned globally to images or videos, MIL algorithms learn to detect relevant patterns locally in images or videos. These patterns are then used for classification at a global level. Because supervision relies on global labels, manual segmentations are not needed to train MIL algorithms, unlike traditional single-instance learning (SIL) algorithms. Consequently, these solutions are attracting increasing interest from the MIVA community: since the term was coined by Dietterich et al. in 1997, 73 research papers about MIL have been published in the MIVA literature. This paper reviews the existing strategies for modeling MIVA tasks as MIL problems, recommends general-purpose MIL algorithms for each type of MIVA tasks, and discusses MIVA-specific MIL algorithms. Various experiments performed in medical image and video datasets are compiled in order to back up these discussions. This meta-analysis shows that, besides being more convenient than SIL solutions, MIL algorithms are also more accurate in many cases. In other words, MIL is the ideal solution for many MIVA tasks. Recent trends are discussed, and future directions are proposed for this emerging paradigm.

  7. Classifying and segmenting microscopy images with deep multiple instance learning.

    Science.gov (United States)

    Kraus, Oren Z; Ba, Jimmy Lei; Frey, Brendan J

    2016-06-15

    High-content screening (HCS) technologies have enabled large scale imaging experiments for studying cell biology and for drug screening. These systems produce hundreds of thousands of microscopy images per day and their utility depends on automated image analysis. Recently, deep learning approaches that learn feature representations directly from pixel intensity values have dominated object recognition challenges. These tasks typically have a single centered object per image and existing models are not directly applicable to microscopy datasets. Here we develop an approach that combines deep convolutional neural networks (CNNs) with multiple instance learning (MIL) in order to classify and segment microscopy images using only whole image level annotations. We introduce a new neural network architecture that uses MIL to simultaneously classify and segment microscopy images with populations of cells. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. To facilitate aggregating across large numbers of instances in CNN feature maps we present the Noisy-AND pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using whole microscopy images with image level labels. We show that training end-to-end MIL CNNs outperforms several previous methods on both mammalian and yeast datasets without requiring any segmentation steps. Torch7 implementation available upon request. oren.kraus@mail.utoronto.ca. © The Author 2016. Published by Oxford University Press.
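
    One common parameterisation of a Noisy-AND pooling function is sketched below; the exact form and parameter names are assumptions based on the description above (a MIL aggregation operator over instance probabilities that is robust to outliers), not necessarily the paper's formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_and(p, a=10.0, b=0.5):
    """Noisy-AND pooling over instance-level probabilities p (shape:
    n_instances x n_classes). The bag-level probability rises steeply
    once the mean instance probability crosses the threshold b (a
    learned parameter in the paper); a controls the slope. The
    rescaling pins the output to 0 when the mean is 0 and to 1 when
    the mean is 1, making the operator tolerant of a few outliers."""
    p_mean = p.mean(axis=0)
    low, high = sigmoid(-a * b), sigmoid(a * (1.0 - b))
    return (sigmoid(a * (p_mean - b)) - low) / (high - low)
```

    In a MIL CNN this would sit between the per-pixel class probability maps and the image-level prediction.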

  8. Breaking Newton’s third law: electromagnetic instances

    Science.gov (United States)

    Kneubil, Fabiana B.

    2016-11-01

    In this work, three instances are discussed within electromagnetism which highlight failures in the validity of Newton’s third law, all of them related to moving charged particles. It is well known that electromagnetic theory paved the way for relativity and that it disclosed new phenomena which were not compatible with the laws of mechanics. However, even if widely known in its generality, this issue is not clearly approached in introductory textbooks and it is difficult for students to perceive by themselves. Three explicit concrete situations involving the breaking of Newton’s third law are presented in this paper, together with a didactical procedure to construct graphically the configurations of electric field lines, which allow pictures produced by interactive radiation simulators available in websites to be better understood.

  9. Image annotation based on positive-negative instances learning

    Science.gov (United States)

    Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    Automatic image annotation is a challenging task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture information as well as mid-level features such as SIFT, and combining the pic2pic, label2pic and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on Positive-Negative Instances Learning. Experiments are performed using the Corel5K dataset, and provide a quite promising result when compared with other existing methods.

  10. Breaking Newton’s third law: electromagnetic instances

    International Nuclear Information System (INIS)

    Kneubil, Fabiana B

    2016-01-01

    In this work, three instances are discussed within electromagnetism which highlight failures in the validity of Newton’s third law, all of them related to moving charged particles. It is well known that electromagnetic theory paved the way for relativity and that it disclosed new phenomena which were not compatible with the laws of mechanics. However, even if widely known in its generality, this issue is not clearly approached in introductory textbooks and it is difficult for students to perceive by themselves. Three explicit concrete situations involving the breaking of Newton’s third law are presented in this paper, together with a didactical procedure to construct graphically the configurations of electric field lines, which allow pictures produced by interactive radiation simulators available in websites to be better understood. (paper)

  11. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, at present the open community version of MySQL and single-instance Oracle database servers. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  12. Automatic analysis of online image data for law enforcement agencies by concept detection and instance search

    Science.gov (United States)

    de Boer, Maaike H. T.; Bouma, Henri; Kruithof, Maarten C.; ter Haar, Frank B.; Fischer, Noëlle M.; Hagendoorn, Laurens K.; Joosten, Bart; Raaijmakers, Stephan

    2017-10-01

    The information available on-line and off-line, from open as well as from private sources, is growing at an exponential rate and places an increasing demand on the limited resources of Law Enforcement Agencies (LEAs). The absence of appropriate tools and techniques to collect, process, and analyze the volumes of complex and heterogeneous data has created a severe information overload. If a solution is not found, the impact on law enforcement will be dramatic, e.g. because important evidence is missed or the investigation time is too long. Furthermore, there is an uneven level of capabilities to deal with the large volumes of complex and heterogeneous data that come from multiple open and private sources at national level across the EU, which hinders cooperation and information sharing. Consequently, there is a pertinent need to develop tools, systems and processes which expedite online investigations. In this paper, we describe a suite of analysis tools to identify and localize generic concepts, instances of objects and logos in images, which constitutes a significant portion of everyday law enforcement data. We describe how incremental learning based on only a few examples and large-scale indexing are addressed in both concept detection and instance search. Our search technology allows querying of the database by visual examples and by keywords. Our tools are packaged in a Docker container to guarantee easy deployment on a system and our tools exploit possibilities provided by open source toolboxes, contributing to the technical autonomy of LEAs.

  13. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., cancer and its treatment. A standard histopathology slice can be easily scanned at a high resolution of, say, 200,000×200,000 pixels. These high resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this new emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly-supervised learning, applied to image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  14. Multiple-Instance Learning for Anomaly Detection in Digital Mammography.

    Science.gov (United States)

    Quellec, Gwenole; Lamard, Mathieu; Cozic, Michel; Coatrieux, Gouenou; Cazuguel, Guy

    2016-07-01

    This paper describes a computer-aided detection and diagnosis system for breast cancer, the most common form of cancer among women, using mammography. The system relies on the Multiple-Instance Learning (MIL) paradigm, which has proven useful for medical decision support in previous works from our team. In the proposed framework, breasts are first partitioned adaptively into regions. Then, features derived from the detection of lesions (masses and microcalcifications) as well as textural features, are extracted from each region and combined in order to classify mammography examinations as "normal" or "abnormal". Whenever an abnormal examination record is detected, the regions that induced that automated diagnosis can be highlighted. Two strategies are evaluated to define this anomaly detector. In a first scenario, manual segmentations of lesions are used to train an SVM that assigns an anomaly index to each region; local anomaly indices are then combined into a global anomaly index. In a second scenario, the local and global anomaly detectors are trained simultaneously, without manual segmentations, using various MIL algorithms (DD, APR, mi-SVM, MI-SVM and MILBoost). Experiments on the DDSM dataset show that the second approach, which is only weakly-supervised, surprisingly outperforms the first approach, even though it is strongly-supervised. This suggests that anomaly detectors can be advantageously trained on large medical image archives, without the need for manual segmentation.

  15. A multiple-instance learning framework for diabetic retinopathy screening.

    Science.gov (United States)

    Quellec, Gwénolé; Lamard, Mathieu; Abràmoff, Michael D; Decencière, Etienne; Lay, Bruno; Erginay, Ali; Cochener, Béatrice; Cazuguel, Guy

    2012-08-01

    A novel multiple-instance learning framework, for automated image classification, is presented in this paper. Given reference images marked by clinicians as relevant or irrelevant, the image classifier is trained to detect patterns, of arbitrary size, that only appear in relevant images. After training, similar patterns are sought in new images in order to classify them as either relevant or irrelevant images. Therefore, no manual segmentations are required. As a consequence, large image datasets are available for training. The proposed framework was applied to diabetic retinopathy screening in 2-D retinal image datasets: Messidor (1200 images) and e-ophtha, a dataset of 25,702 examination records from the Ophdiat screening network (107,799 images). In this application, an image (or an examination record) is relevant if the patient should be referred to an ophthalmologist. Trained on one half of Messidor, the classifier achieved high performance on the other half of Messidor (A(z)=0.881) and on e-ophtha (A(z)=0.761). We observed, in a subset of 273 manually segmented images from e-ophtha, that all eight types of diabetic retinopathy lesions are detected. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Multiple-instance learning for breast cancer detection in mammograms.

    Science.gov (United States)

    Sánchez de la Rosa, Rubén; Lamard, Mathieu; Cazuguel, Guy; Coatrieux, Gouenou; Cozic, Michel; Quellec, Gwenolé

    2015-01-01

    This paper describes an experimental computer-aided detection and diagnosis system for breast cancer, the most common form of cancer among women, using mammography. The system relies on the Multiple-Instance Learning (MIL) paradigm, which has proven useful for medical decision support in previous works from our team. In the proposed framework, the breasts are first partitioned adaptively into regions. Then, either textural features, or features derived from the detection of masses and microcalcifications, are extracted from each region. Finally, feature vectors extracted from each region are combined using an MIL algorithm (Citation k-NN or mi-Graph), in order to recognize "normal" mammography examinations or to categorize examinations as "normal", "benign" or "cancer". An accuracy of 91.1% (respectively 62.1%) was achieved for normality recognition (respectively three-class categorization) in a subset of 720 mammograms from the DDSM dataset. The paper also discusses future improvements, that will make the most of the MIL paradigm, in order to improve "benign" versus "cancer" discrimination in particular.

  17. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study, the authors propose an online algorithm combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift caused by the accumulated errors made when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamical MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.

  18. Multiple-instance learning for computer-aided detection of tuberculosis

    Science.gov (United States)

    Melendez, J.; Sánchez, C. I.; Philipsen, R. H. H. M.; Maduskar, P.; van Ginneken, B.

    2014-03-01

    Detection of tuberculosis (TB) on chest radiographs (CXRs) is a hard problem. Therefore, to help radiologists or even take their place when they are not available, computer-aided detection (CAD) systems are being developed. In order to reach a performance comparable to that of human experts, the pattern recognition algorithms of these systems are typically trained on large CXR databases that have been manually annotated to indicate the abnormal lung regions. However, manually outlining those regions constitutes a time-consuming process that, moreover, is prone to inconsistencies and errors introduced by interobserver variability and the absence of an external reference standard. In this paper, we investigate an alternative pattern classification method, namely multiple-instance learning (MIL), that does not require such detailed information for a CAD system to be trained. We have applied this alternative approach to a CAD system aimed at detecting textural lesions associated with TB. Only the case (or image) condition (normal or abnormal) was provided in the training stage. We compared the resulting performance with those achieved by several variations of a conventional system trained with detailed annotations. A database of 917 CXRs was constructed for experimentation. It was divided into two roughly equal parts that were used as training and test sets. The area under the receiver operating characteristic curve was utilized as a performance measure. Our experiments show that, by applying the investigated MIL approach, results comparable to those of the aforementioned conventional systems are obtained in most cases, without requiring condition information at the lesion level.

  19. Instances of Use of United States Armed Forces Abroad, 1798-2007

    National Research Council Canada - National Science Library

    Grimmett, Richard F

    2007-01-01

    .... Most of these post-1980 instances are summaries based on Presidential reports to Congress related to the War Powers Resolution. A comprehensive commentary regarding any of the instances listed is not undertaken here.

  20. Random Shortest Paths: Non-Euclidean Instances for Metric Optimization Problems

    NARCIS (Netherlands)

    Bringmann, Karl; Engels, Christian; Manthey, Bodo; Rao, B.V. Raghavendra

    Probabilistic analysis for metric optimization problems has mostly been conducted on random Euclidean instances, but little is known about metric instances drawn from distributions other than the Euclidean. This motivates our study of random metric instances for optimization problems obtained as

  1. Random shortest paths: non-euclidean instances for metric optimization problems

    NARCIS (Netherlands)

    Bringmann, Karl; Engels, Christian; Manthey, Bodo; Rao, B.V. Raghavendra; Chatterjee, K.; Sgall, J.

    2013-01-01

    Probabilistic analysis for metric optimization problems has mostly been conducted on random Euclidean instances, but little is known about metric instances drawn from distributions other than the Euclidean. This motivates our study of random metric instances for optimization problems obtained as

  2. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  3. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  4. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  5. Vehicle and crew scheduling: Solving large real-world instances with an integrated approach

    NARCIS (Netherlands)

    S.W. de Groot (Sebastiaan); D. Huisman (Dennis)

    2008-01-01

    In this paper we discuss several methods to solve large real-world instances of the vehicle and crew scheduling problem. Although there has been increased attention to integrated approaches for solving such problems in the literature, currently only small or medium-sized instances can

  6. Vehicle and crew scheduling: solving large real-world instances with an integrated approach

    NARCIS (Netherlands)

    S.W. de Groot (Sebastiaan); D. Huisman (Dennis)

    2004-01-01

    In this paper we discuss several methods to solve large real-world instances of the vehicle and crew scheduling problem. Although there has been increased attention to integrated approaches for solving such problems in the literature, currently only small or medium-sized instances

  7. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  8. THE EXECUTION INSTANCE OF THE JUDICIAL JUDGEMENTS SENTENCED IN THE LITIGATIONS OF ADMINISTRATIVE CONTENTIOUS

    Directory of Open Access Journals (Sweden)

    ADRIANA ELENA BELU

    2012-05-01

    Full Text Available The instance which solved the fund of the litigation arising from an administrative contract differs depending on the material competence sanctioned by law, in contrast to the subject of commercial law, where the execution instance is the court. In this matter the High Court stated in a decision that, in a first case, the competence of solving the legal contest against the proper forced execution, and of the legal contest that has in view the explanation of the meaning of spreading and applying the enforceable title which does not proceed from a jurisdiction organ, lies with the court. The Law of the Administrative Contentious no. 554/2004 defines in Article 2 paragraph 1 letter t the notion of execution instance, providing that this is the instance which solved the fund of the litigation of administrative contentious; so even in the case of administrative contracts the execution instance is the one which solved the litigation arising from the contract. Corroborating this provision with those existing in Articles 22 and 25 of the Law, it can be shown that no matter which instance's decision constitutes an enforceable title, the execution will be done by the instance which solved the fund of the litigation regarding the administrative contentious.

  9. Database Replication

    Directory of Open Access Journals (Sweden)

    Marius Cristian MAZILU

    2010-12-01

    Full Text Available For someone who has worked in an environment in which the same database is used for data entry and reporting, or perhaps managed a single database server that was utilized by too many users, the advantages brought by data replication are clear. The main purpose of this paper is to emphasize those advantages as well as presenting the different types of Database Replication and the cases in which their use is recommended.

  10. Weakly Supervised Object Localization with Multi-Fold Multiple Instance Learning.

    Science.gov (United States)

    Cinbis, Ramazan Gokberk; Verbeek, Jakob; Schmid, Cordelia

    2017-01-01

    Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence/presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach.

  11. Increasing the detection of minority class instances in financial statement fraud

    CSIR Research Space (South Africa)

    Moepya, Stephen O

    2017-04-01

    Full Text Available Financial statement fraud has proven to be difficult to detect without the assistance of data analytical procedures. In the fraud detection domain, minority class instances cannot be readily found using standard machine learning algorithms. Moreover...

  12. Variety Preserved Instance Weighting and Prototype Selection for Probabilistic Multiple Scope Simulations

    Science.gov (United States)

    2017-05-30

    AFRL-AFOSR-JP-TR-2017-0042: Variety Preserved Instance Weighting and Prototype Selection for Probabilistic Multiple Scope Simulations. Takashi Washio. Final report, 22 Apr 2015 to 21 Apr 2017. The report shows that the selected prototype set Y almost uniformly includes a variety of samples over the entire range of x~, including the rare prototypes.

  13. Coarse-Grain QoS-Aware Dynamic Instance Provisioning for Interactive Workload in the Cloud

    Directory of Open Access Journals (Sweden)

    Jianxiong Wan

    2014-01-01

    Full Text Available The cloud computing paradigm provides Internet service providers (ISPs) with a new approach to deliver their services at lower cost: ISPs can rent virtual machines from the Infrastructure-as-a-Service (IaaS) offered by the cloud rather than purchasing them. In addition, commercial cloud providers (CPs) offer diverse VM instance rental services at various time granularities, which provides another opportunity for ISPs to reduce cost. We investigate a Coarse-grain QoS-aware Dynamic Instance Provisioning (CDIP) problem for interactive workload in the cloud from the perspective of ISPs. We formulate the CDIP problem as an optimization problem where the objective is to minimize the VM instance rental cost and the constraint is the percentile delay bound. Since Internet traffic shows a strong self-similar property, it is hard to get an analytical form of the percentile delay constraint. To address this issue, we propose a lookup table structure together with a learning algorithm to estimate the performance of the instance provisioning policy. This approach is further extended with two function approximations to enhance the scalability of the learning algorithm. We also present an efficient dynamic instance provisioning algorithm, which takes full advantage of the rental service diversity, to determine the instance rental policy. Extensive simulations are conducted to validate the effectiveness of the proposed algorithms.

  14. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  15. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  16. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  17. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  18. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  19. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  20. The RHIC transfer line cable database

    International Nuclear Information System (INIS)

    Scholl, E.H.; Satogata, T.

    1995-01-01

    A cable database was created to facilitate and document installation of cables and wiring in the RHIC project, as well as to provide a data source to track possible wiring and signal problems. The eight tables of this relational database, currently implemented in Sybase, contain information ranging from cable routing to attenuation of individual wires. This database was created in a hierarchical scheme under the assumption that cables contain wires -- each instance of a cable has one to many wires associated with it. This scheme allows entry of information pertinent to individual wires while only requiring single entries for each cable. Relationships to other RHIC databases are also discussed
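The one-to-many cables/wires scheme this record describes can be sketched as a relational schema. The table and column names below are hypothetical illustrations in SQLite, not the actual RHIC Sybase tables:

```python
import sqlite3

# Illustrative one-to-many cable/wire schema in the spirit of the
# abstract: each cable row covers one-to-many wire rows, so wire-level
# data (e.g. attenuation) is entered per wire while routing is entered
# once per cable. Names are invented, not the real RHIC schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cable (
    cable_id INTEGER PRIMARY KEY,
    route    TEXT
);
CREATE TABLE wire (
    wire_id     INTEGER PRIMARY KEY,
    cable_id    INTEGER NOT NULL REFERENCES cable(cable_id),
    attenuation REAL
);
""")
con.execute("INSERT INTO cable VALUES (1, 'ring to service building')")
con.executemany("INSERT INTO wire VALUES (?, ?, ?)",
                [(10, 1, 0.3), (11, 1, 0.4)])

# One cable entry, many associated wires.
n_wires, = con.execute(
    "SELECT COUNT(*) FROM wire WHERE cable_id = 1").fetchone()
```

The foreign key captures the hierarchical assumption stated in the abstract: a cable exists independently, and every wire belongs to exactly one cable.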

  1. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about a national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  2. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  3. Learning from non-representative instances: Children's sample and population predictions.

    Science.gov (United States)

    Thevenow-Harrison, Jordan T; Kalish, Charles W

    2016-12-01

    What do children learn from biased samples? Most samples people encounter are biased in some way, and responses to bias can distinguish among different theories of inductive inference. A sample of 67 4- to 8-year-old children learned to make conditional predictions about a set of sample items. They then made predictions about the properties of new instances or old instances from the training set. The experiment compared unbiased and biased sampling. Given unbiased samples, participants used what they learned to make predictions about population and sample instances. With biased samples, children were less accurate/confident about inferences about the population than about the sample. Children used information in a biased sample to make predictions about items in that sample, but they were less likely to generalize to new items than when samples were unbiased. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Visual Tracking Based on an Improved Online Multiple Instance Learning Algorithm.

    Science.gov (United States)

    Wang, Li Jia; Zhang, Hua

    2016-01-01

    An improved online multiple instance learning (IMIL) algorithm for visual tracking is proposed. In the IMIL algorithm, each instance's contribution to the bag probability is weighted according to its own probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers. In the feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes.

  5. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  6. Solubility Database

    Science.gov (United States)

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes (Click here for List) of the International Union for Pure and Applied Chemistry(IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  7. Optimization instances for deterministic and stochastic problems on energy efficient investments planning at the building level.

    Science.gov (United States)

    Cano, Emilio L; Moguerza, Javier M; Alonso-Ayuso, Antonio

    2015-12-01

    Optimization instances relate to the input and output data stemming from optimization problems in general. Typically, an optimization problem consists of an objective function to be optimized (either minimized or maximized) and a set of constraints. Thus, objective and constraints are jointly a set of equations in the optimization model. Such equations are a combination of decision variables and known parameters, which are usually related to a set domain. When this combination is a linear combination, we are facing a classical Linear Programming (LP) problem. An optimization instance is related to an optimization model. We refer to that model as the Symbolic Model Specification (SMS) containing all the sets, variables, and parameters symbols and relations. Thus, a whole instance is composed by the SMS, the elements in each set, the data values for all the parameters, and, eventually, the optimal decisions resulting from the optimization solution. This data article contains several optimization instances from a real-world optimization problem relating to investment planning on energy efficient technologies at the building level.
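The separation between a Symbolic Model Specification (SMS) and its instance data can be made concrete with a toy LP. The numbers and names below are invented for illustration (a hypothetical two-technology investment plan), not taken from the article's dataset:

```python
# Toy illustration of SMS vs. instance: the SMS fixes the structure
# (minimize c.x subject to A x >= b, x >= 0); the instance supplies the
# concrete numbers. All values here are invented.
sms = {
    "objective": "minimize cost = sum(c[j] * x[j])",
    "constraints": ["sum(a[i][j] * x[j]) >= b[i] for each i",
                    "x[j] >= 0 for each j"],
}

instance = {
    "c": [4.0, 3.0],    # cost per unit of each technology
    "a": [[2.0, 1.0]],  # energy savings per unit of each technology
    "b": [8.0],         # required total savings
}

def is_feasible(x, inst):
    """Check A x >= b and x >= 0 for the given instance data."""
    ok_nonneg = all(xj >= 0 for xj in x)
    ok_rows = all(sum(aij * xj for aij, xj in zip(row, x)) >= bi
                  for row, bi in zip(inst["a"], inst["b"]))
    return ok_nonneg and ok_rows

def cost(x, inst):
    """Objective value c.x for a candidate decision x."""
    return sum(cj * xj for cj, xj in zip(inst["c"], x))
```

A full instance in the article's sense additionally records the optimal decisions returned by the solver; here only the model/data split is shown.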

  8. Complexity results on restricted instances of a paint shop problem for words

    NARCIS (Netherlands)

    Bonsma, P.S.; Epping, Th.; Hochstättler, W.

    2006-01-01

    We study the following problem: an instance is a word with every letter occurring twice. A solution is a 2-coloring of its letters such that the two occurrences of every letter are colored with different colors. The goal is to minimize the number of color changes between adjacent letters. This is a
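A brute-force sketch of the problem as stated: each letter occurs exactly twice, its two occurrences must receive different colors, and we minimize color changes between adjacent letters. Enumerating all colorings is only viable for tiny words, but it pins down the objective:

```python
from itertools import product

# Exhaustive solver for the paint shop problem for words: try every
# per-letter choice of which occurrence is colored 0, and count color
# changes between adjacent positions.
def min_color_changes(word):
    letters = sorted(set(word))
    best = None
    for choice in product([0, 1], repeat=len(letters)):
        seen = {c: 0 for c in letters}
        coloring = []
        for ch in word:
            first_color = choice[letters.index(ch)]
            # First occurrence gets first_color, second the opposite.
            coloring.append(first_color if seen[ch] == 0
                            else 1 - first_color)
            seen[ch] += 1
        changes = sum(1 for a, b in zip(coloring, coloring[1:])
                      if a != b)
        if best is None or changes < best:
            best = changes
    return best
```

For example, "abab" admits the coloring 0,0,1,1 with a single change, while "aabb" forces at least two changes.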

  9. Optimization of the K-means algorithm for the solution of high dimensional instances

    Science.gov (United States)

    Pérez, Joaquín; Pazos, Rodolfo; Olivares, Víctor; Hidalgo, Miguel; Ruiz, Jorge; Martínez, Alicia; Almanza, Nelva; González, Moisés

    2016-06-01

    This paper addresses the problem of clustering instances with a high number of dimensions. In particular, a new heuristic for reducing the complexity of the K-means algorithm is proposed. Traditionally, there are two approaches that deal with the clustering of instances with high dimensionality. The first executes a preprocessing step to remove those attributes of limited importance. The second, called divide and conquer, creates subsets that are clustered separately and later their results are integrated through post-processing. In contrast, this paper proposes a new solution which consists of reducing the distance calculations from the objects to the centroids at the classification step. This heuristic is derived from visual observation of the clustering process of K-means, in which it was found that objects can only migrate to adjacent clusters without crossing distant clusters. Therefore, this heuristic can significantly reduce the number of distance calculations from an object to the centroids of the potential clusters to which it may be assigned. To validate the proposed heuristic, a set of experiments with synthetic and high-dimensional instances was designed. One of the most notable results was obtained for an instance of 25,000 objects and 200 dimensions, where execution time was reduced by up to 96.5% and the quality of the solution decreased by only 0.24% when compared to the K-means algorithm.
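For reference, a plain K-means sketch showing the full distance scan at the classification step, which is the computation the proposed heuristic cuts down (the heuristic itself, restricting candidates to adjacent clusters, is not reproduced here). Deterministic initialization and 1-D points keep the sketch minimal:

```python
# Plain K-means on scalar points with deterministic initialization.
# At each classification step every point is compared against ALL k
# centroids; the paper's heuristic would prune this scan.
def kmeans(points, k, iters=20):
    centroids = points[:k]  # simple deterministic init for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Full distance scan -- the step the heuristic reduces.
            j = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

points = [1.0, 1.1, 0.9, 10.0, 10.1, 9.9]
centers = kmeans(points, 2)  # converges to roughly [1.0, 10.0]
```

Each iteration costs O(n * k) distance evaluations here; pruning the candidate centroids per object is what yields the reported speedups on large, high-dimensional instances.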

  10. Complexity evaluation of benchmark instances for the p-median problem

    NARCIS (Netherlands)

    Goldengorin, B.; Krushinsky, D.

    The paper is aimed at experimental evaluation of the complexity of the p-Median problem instances, defined by m x n costs matrices, from several of the most widely used libraries. The complexity is considered in terms of possible problem size reduction and preprocessing, irrespective of the solution
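The p-median setting described here (an m x n cost matrix, choose p medians minimizing the total cost of serving each client from its cheapest chosen median) can be made concrete with an exhaustive toy solver; real benchmark instances need the size reductions and preprocessing the paper studies:

```python
from itertools import combinations

# Brute-force reference for the p-median problem: rows are clients,
# columns are candidate medians; pick p columns minimizing the sum of
# each row's minimum over the chosen columns. Only for toy instances.
def p_median(costs, p):
    n = len(costs[0])
    best_cost, best_set = float("inf"), None
    for medians in combinations(range(n), p):
        total = sum(min(row[j] for j in medians) for row in costs)
        if total < best_cost:
            best_cost, best_set = total, medians
    return best_cost, best_set

# Invented 3x3 symmetric cost matrix: median 1 is cheapest alone.
costs = [[0, 5, 9],
         [5, 0, 4],
         [9, 4, 0]]
best_cost, best_medians = p_median(costs, 1)
```

The search space grows as C(n, p), which is why preprocessing that shrinks the effective matrix matters for the benchmark libraries discussed above.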

  11. Optimal Phonemic Environmental Range to Select VCV Instances for VCV Speech Synthesis by Rule

    Science.gov (United States)

    Shimizu, Tadaaki; Kimoto, Masaya; Yoshimura, Hiroki; Namiki, Toshie; Isu, Naoki; Sugata, Kazuhiro

    We proposed two selection methods, 1) selection method by using phonemic environmental resemblance score (PER method), and 2) selection method by searching minimal connective distortion path (MLD method), for small scale speech synthesis system. PER method requires phonemic environmental information for each VCV instance in a VCV unit dictionary. This paper investigated experimentally to what extent we can reduce the phonemic environmental information with keeping high quality of synthesized speech. We verified that two phonemes frontward and one phoneme rearward range to a current VCV instance is enough to synthesize similar quality of speech as five phonemes frontward and five phonemes rearward. This result gives an experimental basis on minimizing a size of VCV unit dictionary.

  12. FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

    OpenAIRE

    Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu

    2017-01-01

    Instance selection (IS) technique is used to reduce the data size to improve the performance of data mining methods. Recently, to process very large data set, several proposed methods divide the training set into some disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitation of these methods and give our viewpoint about how to divide and conquer in IS procedure. Then, based on fast condensed nearest neighbor (FCNN) rul...
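The truncated abstract refers to the fast condensed nearest neighbor (FCNN) rule. A minimal sketch of the classic condensed nearest neighbor idea it builds on (keep only instances that the current subset misclassifies under 1-NN), for 1-D points; this is plain CNN, not the authors' parallel FCNN-MR:

```python
# Condensed nearest neighbor sketch: grow a kept subset until every
# remaining instance is classified correctly by its nearest kept
# neighbor. Points are scalars here to keep the distance trivial.
def condense(points, labels):
    keep = [0]  # seed with an arbitrary instance
    changed = True
    while changed:
        changed = False
        for i, (p, y) in enumerate(zip(points, labels)):
            if i in keep:
                continue
            nearest = min(keep, key=lambda j: abs(p - points[j]))
            if labels[nearest] != y:  # misclassified -> must keep it
                keep.append(i)
                changed = True
    return keep

# Two well-separated classes: one representative per class suffices.
points = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
labels = [0, 0, 0, 1, 1, 1]
kept = condense(points, labels)
```

The kept subset preserves the 1-NN decision boundary while discarding redundant interior instances, which is the data-size reduction IS methods aim for.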

  13. A practical approximation algorithm for solving massive instances of hybridization number for binary and nonbinary trees

    Science.gov (United States)

    2014-01-01

    Background Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Results Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Conclusions Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work (SIDMA 26(4):1635-1656, TCBB 10(1):18-25, SIDMA 28(1):49-66) and are publicly available. We also apply our methods to real data. PMID:24884964

  14. Instance selection in digital soil mapping: a study case in Rio Grande do Sul, Brazil

    Directory of Open Access Journals (Sweden)

    Elvio Giasson

    2015-09-01

    Full Text Available A critical issue in digital soil mapping (DSM) is the selection of the data sampling method for model training. One emerging approach applies instance selection to reduce the size of the dataset by drawing only relevant samples in order to obtain a representative subset that is still large enough to preserve relevant information, but small enough to be easily handled by learning algorithms. Although there are suggestions to distribute data sampling as a function of the location of soil map unit (MU) boundaries, research recommendations still contradict one another on whether to locate samples closer to or more distant from soil MU boundaries. A study was conducted to evaluate instance selection methods based on spatially explicit data collection using location in relation to soil MU boundaries as the main criterion. Decision tree analysis was performed for modeling digital soil class mapping using two different sampling schemes: (a) selecting sampling points located outside buffers near soil MU boundaries, and (b) selecting sampling points located within buffers near soil MU boundaries. Data were prepared for generating classification trees to include only data points located within or outside buffers with widths of 60, 120, 240, 360, 480, and 600 m near MU boundaries. Both spatial instance selection methods were effective in reducing the size of the dataset used for calibrating the classification tree models, but failed to provide advantages to digital soil mapping because of a potential reduction in the accuracy of the classification tree models.

  15. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL. After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM. Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  16. A novel online Variance Based Instance Selection (VBIS) method for efficient atypicality detection in chest radiographs

    Science.gov (United States)

    Alzubaidi, Mohammad; Balasubramanian, Vineeth; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.

    2012-02-01

    Chest radiographs are complex, heterogeneous medical images that depict many different types of tissues, and many different types of abnormalities. A radiologist develops a sense of what visual textures are typical for each anatomic region within chest radiographs by viewing a large set of "normal" radiographs over a period of years. As a result, an expert radiologist is able to readily detect atypical features. In our previous research, we modeled this type of learning by (1) collecting a large set of "normal" chest radiographs, (2) extracting local textural and contour features from anatomical regions within these radiographs, in the form of high-dimensional feature vectors, (3) using a distance-based transductive machine learning method to learn what it typical for each anatomical region, and (4) computing atypicality scores for the anatomical regions in test radiographs. That research demonstrated that the transductive One-Nearest-Neighbor (1NN) method was effective for identifying atypical regions in chest radiographs. However, the large set of training instances (and the need to compute a distance to each of these instances in a high dimensional space) made the transductive method computationally expensive. This paper discusses a novel online Variance Based Instance Selection (VBIS) method for use with the Nearest Neighbor classifier, that (1) substantially reduced the computational cost of the transductive 1NN method, while maintaining a high level of effectiveness in identifying regions of chest radiographs with atypical content, and (2) allowed the incremental incorporation of training data from new informative chest radiographs as they are encountered in day-to-day clinical work.
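The distance-based transductive scoring described above can be sketched as a nearest-neighbor distance to the set of "normal" training instances; VBIS's variance-based pruning of that set is not reproduced here, and the feature vectors below are invented:

```python
# Atypicality of a test feature vector = Euclidean distance to its
# nearest neighbor among the "normal" training instances. VBIS reduces
# the cost of this scan by shrinking the training set (not shown).
def atypicality(x, normal_set):
    return min(sum((a - b) ** 2 for a, b in zip(x, v)) ** 0.5
               for v in normal_set)

# Invented 2-D feature vectors standing in for textural features of an
# anatomical region in "normal" radiographs.
normal_set = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

typical_score = atypicality((0.1, 0.0), normal_set)   # near a normal
atypical_score = atypicality((5.0, 5.0), normal_set)  # far from all
```

Because the score requires a distance to every retained training instance in a high-dimensional space, pruning that set is what makes the transductive 1NN method practical in clinical use.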

  17. On the instance of misuse of unprofitable energy prices under cartel law

    International Nuclear Information System (INIS)

    Schoening, M.

    1993-01-01

The practice of fixing prices which do not cover costs cannot in principle be considered an instance of misuse pursuant to Articles 22 Section 4 Clause 2 No. 2, 103 Section 5 Clause 2 No. 2 of the GWB (cartel laws). If the authority for the supervision of cartels takes action against companies operating with unprofitable prices, this constitutes a violation not only of cartel law, but also of the constitution. The cartel authorities have no right to dismiss a dominant company's referral to poor business prospects on the ground that its business report is theoretically manipulable. Rather, the burden of proof of concealment is on the authorities. (orig.)

  18. Spatiotemporal Variation of Particulate Fallout Instances in Sfax City, Southern Tunisia: Influence of Sources and Meteorology

    OpenAIRE

    Bahloul, Moez; Chabbi, Iness; Sdiri, Ali; Amdouni, Ridha; Medhioub, Khaled; Azri, Chafai

    2015-01-01

Particle deposition in the main industrial zone of Sfax City (southern Tunisia) was studied by weekly monitoring of particulate fallout at twenty sites from November 11, 2012, to April 15, 2013. Very high fluctuation in particle fluxes, ranging from 0.376 to 9.915 g/m2, was clearly observed. Spatiotemporal distribution of the deposited particulate fluxes and the exposure of each site to the main industrial plumes (i.e., phosphate treatment plant “SIAPE,” soap industry “SIOS-ZITE...

  19. Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances

    Science.gov (United States)

    Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.

    2017-12-01

THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Because THREDDS lacks horizontal scalability and automatic configuration management and deployment, the service often suffers downtimes and time-consuming configuration tasks, especially under the intensive use that is common in scientific communities (e.g. climate). Instead of the typical manual installation and configuration of one or more independent THREDDS servers, this work presents an automatically provisioned, deployed and orchestrated cluster of THREDDS servers. The solution is based on Ansible playbooks, used to control the deployment and configuration of the infrastructure and to manage the datasets available in THREDDS instances. The playbooks are built from modules (or roles) for different backend and frontend load-balancing setups. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, a variable number of instances of the THREDDS server. This design supports different infrastructure and deployment scenarios, since more workers can be added to the cluster simply by declaring them as Ansible variables and re-running the playbooks; it also provides fault tolerance and better reliability, since if any worker fails another instance in the cluster can take over. To test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience.
On the other hand, the UDG has improved its

  20. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with greater flexibility and better documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  1. Flexible Execution of Distributed Business Processes based on Process Instance Migration

    Directory of Open Access Journals (Sweden)

    Sonja Zaplata

    2010-07-01

Full Text Available Many advanced business applications, collaborations, and virtual organizations are based on distributed business process management. As, in such scenarios, competition, fluctuation and dynamism increase continuously, the distribution and execution of individual process instances should become as flexible as possible in order to allow for an ad-hoc adaptation to changing conditions at runtime. However, most current approaches address process distribution by a fragmentation of processes already at design time. Such a static configuration can be assigned to different process engines near runtime, but can hardly be changed dynamically because distribution logic is woven into the business process itself. A more dynamic distribution can be achieved by process runtime migration even without modifying the business logic of the original process model. Therefore, this contribution presents a migration data meta-model for enhancing such existing processes with the ability for runtime migration. The approach permits the inclusion of intentions and privacy requirements of both process modelers and initiators and supports execution strategies for sequential and parallel execution of processes. The contribution concludes by presenting a conceptual evaluation in which runtime migration has been applied to XPDL and WS-BPEL process instances and, based on these results, a qualitative comparison of migration and fragmentation.

  2. Emancipation through the Artistic Experience and the Meaning of Handicap as Instance of Otherness

    Directory of Open Access Journals (Sweden)

    Robi Kroflič

    2014-03-01

Full Text Available The key hypothesis of the article is that successful inter-mediation of art to vulnerable groups of people (including children) depends on the correct identification of the nature of an artistic act and on the meaning that handicap—as an instance of otherness—has in the life of artists and spectators. Just access to the artistic experience is basically not a question of the distribution of artistic production (since even if the artistic object is in principle accessible to all people, it will not necessarily reach vulnerable groups of spectators), but of ensuring artistic creativity and presentation. This presupposes a spectator as a competent being who is able to interact with the artistic object without our interpretative explanation and who is sensitive to the instance of otherness (handicap is merely a specific form of otherness). The theory of emancipation from J. Ranciere, the theory of recognition from A. Honneth, and the theory of narration from P. Ricoeur and R. Kearney, as well as our experiences with a comprehensive inductive approach and artistic experience as one of its basic educational methods, offer us a theoretical framework for such a model of art inter-mediation.

  3. Entropy-Weighted Instance Matching Between Different Sourcing Points of Interest

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-01-01

Full Text Available The crucial problem for integrating geospatial data is finding the corresponding objects (the counterparts) from different sources. Most current studies focus on object matching with individual attributes such as spatial, name, or other attributes, which avoids the difficulty of integrating those attributes, but at the cost of ineffective matching. In this study, we propose an approach for matching instances by integrating heterogeneous attributes with the allocation of suitable attribute weights via information entropy. First, a normalized similarity formula is developed, which simplifies the calculation of spatial attribute similarity. Second, sound-based and word segmentation-based methods are adopted to eliminate the semantic ambiguity that arises when geospatial data lack a normative coding standard for expressing the name attribute. Third, category mapping is established to address the heterogeneity among different classifications. Finally, to address the non-linear characteristic of attribute similarity, the weights of the attributes are calculated from the entropy of the attributes. Experiments demonstrate that the Entropy-Weighted Approach (EWA) performs well in terms of both precision and recall for instance matching across different data sets.
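The entropy-based weighting step can be illustrated with the standard entropy weight method (a common formulation; the paper's exact normalisation may differ, and the function names here are ours):

```python
import numpy as np

def entropy_weights(sim):
    """Entropy weight method: attributes whose similarity values vary more
    across candidate pairs (lower entropy) receive higher weight.
    sim: (n_pairs, n_attributes) matrix of similarities in [0, 1]."""
    n = sim.shape[0]
    p = sim / sim.sum(axis=0, keepdims=True)            # column-normalise
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    entropy = -plogp.sum(axis=0) / np.log(n)            # entropy per attribute
    divergence = 1.0 - entropy                          # informativeness
    return divergence / divergence.sum()

def weighted_similarity(sim, weights):
    """Overall similarity of each candidate pair as a weighted sum."""
    return sim @ weights
```

An attribute whose similarity is constant across all candidate pairs carries no discriminating information (maximum entropy) and so receives weight close to zero.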

  4. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  5. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    International Nuclear Information System (INIS)

    Yoo, T. S.; Garcia, H. E.

    2006-01-01

This paper presents an online anomaly detection and isolation technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (the null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the systems. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. The applicability of this methodology and its performance are illustrated with a redundant sensor data set. (authors)
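The two ingredients described above, a kernel density model of normal behaviour and a sequential likelihood-ratio test, can be sketched as follows (a minimal illustration with a hand-rolled Gaussian KDE and a CUSUM-style statistic; the bandwidth and threshold are arbitrary choices for the sketch, not values from the paper):

```python
import numpy as np

def gauss_kde(data, bw=0.3):
    """Kernel density estimate (Gaussian kernel) built from empirical data."""
    def f(x):
        z = (x - data) / bw
        return np.exp(-0.5 * z * z).sum() / (len(data) * bw * np.sqrt(2 * np.pi))
    return f

def cusum_alarm_time(stream, f0, f1, h=5.0):
    """CUSUM on the log-likelihood ratio log f1(x)/f0(x): the statistic
    accumulates evidence for the alternative hypothesis and alarms once
    it exceeds the threshold h. Returns the alarm index, or None."""
    s = 0.0
    for t, x in enumerate(stream):
        llr = np.log(f1(x) + 1e-300) - np.log(f0(x) + 1e-300)
        s = max(0.0, s + llr)
        if s > h:
            return t
    return None
```

With one alternative density per fault class, running one CUSUM statistic per class and alarming on the first to cross the threshold gives the isolation part of the scheme.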

  6. Multi-Instance Learning Models for Automated Support of Analysts in Simulated Surveillance Environments

    Science.gov (United States)

    Birisan, Mihnea; Beling, Peter

    2011-01-01

    New generations of surveillance drones are being outfitted with numerous high definition cameras. The rapid proliferation of fielded sensors and supporting capacity for processing and displaying data will translate into ever more capable platforms, but with increased capability comes increased complexity and scale that may diminish the usefulness of such platforms to human operators. We investigate methods for alleviating strain on analysts by automatically retrieving content specific to their current task using a machine learning technique known as Multi-Instance Learning (MIL). We use MIL to create a real time model of the analysts' task and subsequently use the model to dynamically retrieve relevant content. This paper presents results from a pilot experiment in which a computer agent is assigned analyst tasks such as identifying caravanning vehicles in a simulated vehicle traffic environment. We compare agent performance between MIL aided trials and unaided trials.

  7. The Naueti relationship terminology: A new instance of asymmetric prescription from East Timor

    Directory of Open Access Journals (Sweden)

    David Hicks

    2008-12-01

Full Text Available Relationship terminologies of an asymmetric prescriptive character are widespread throughout mainland and insular Southeast Asia. Their western limit is marked by the Kachin of northern Burma (Leach 1961:28-53) while their eastern limit is marked by the Mambai people of central East Timor (Figures 1 and 2). Between these limits are such other instances as occur, for example, among the Lamet of Cambodia (Needham 1960), the various Batak groups of Sumatra (Rodgers 1984), and the Rindi of eastern Sumba (Forth 1981). The principal intention of the present paper is to establish the existence of a new asymmetric prescriptive terminology in East Timor, and by doing so provide empirical justification for adjusting the easternmost limits of nomenclatures of this kind. A subsidiary intent is to offer a contribution to current speculations regarding the transformation of relationship terminologies in eastern Indonesia (Guermonprez 1998; Smedal 2002).

  8. An efficient abnormal cervical cell detection system based on multi-instance extreme learning machine

    Science.gov (United States)

    Zhao, Lili; Yin, Jianping; Yuan, Lihuan; Liu, Qiang; Li, Kuan; Qiu, Minghui

    2017-07-01

Automatic detection of abnormal cells from cervical smear images is in great demand for the annual diagnosis of women's cervical cancer. For this medical cell recognition problem, there are three different feature sections, namely cytology morphology, nuclear chromatin pathology and region intensity. The challenges of this problem lie in combining these features and in classifying accurately and efficiently. Thus, we propose an efficient abnormal cervical cell detection system based on multi-instance extreme learning machine (MI-ELM) to deal with the above two questions in one unified framework. MI-ELM is one of the most promising supervised learning classifiers, as it can deal with several feature sections and realistic classification problems analytically. Experiment results over the Herlev dataset demonstrate that the proposed method outperforms three traditional methods for two-class classification in terms of both accuracy and runtime.

  9. Iatrogenic alterations in the biodistribution of radiotracers as a result of drug therapy: Reported instances

    International Nuclear Information System (INIS)

    Hladik, W.B. III; Ponto, J.A.; Lentle, B.C.; Laven, D.L.

    1987-01-01

    This chapter is a compilation of reported instances in which the biodistribution of a radiopharmaceutical has been (or could be) modified by the administration of a therapeutic nonradioactive drug or contrast agent in such a way as to potentially interfere with the interpretation of the nuclear medicine study in question. This type of phenomenon is commonly referred to as a drug-radiopharmaceutical interaction. In this chapter, interactions are arranged according to the radiopharmaceutical involved; each interaction is characterized by use of the following descriptors: 1. Interfering drug: the interfering nonradioactive drug that alters the kinetics of the radiopharmaceutical and thus changes the resulting diagnostic data obtained from the study. 2. Nuclear medicine study affected: the nuclear medicine study in which the interaction is likely to occur. 3. Effect on image: the appearance of the image (or the effect on diagnostic data) which results from the interaction. 4. Significance: the potential clinical significance of the interaction

  10. The Impact of Culture On Smart Community Technology: The Case of 13 Wikipedia Instances

    Directory of Open Access Journals (Sweden)

    Zinayida Petrushyna

    2014-11-01

Full Text Available Smart communities provide technologies for monitoring social behaviors inside communities. The technologies that support knowledge building should consider the cultural background of community members. Studies of the influence of culture on knowledge building are limited; just a few works consider digital traces of individuals and explain them using cultural values and beliefs. In this work, we analyze 13 Wikipedia instances where users with different cultural backgrounds build knowledge in different ways. We compare the edits of users. Using social network analysis, we build and analyze co-authorship networks and track their evolution. We explain the differences we have found using Hofstede dimensions and Schwartz cultural values and discuss implications for the design of smart community technologies. Our findings provide insights into requirements for technologies used for smart communities in different cultures.

  11. Automated detection of age-related macular degeneration in OCT images using multiple instance learning

    Science.gov (United States)

    Sun, Weiwei; Liu, Xiaoming; Yang, Zhou

    2017-07-01

Age-related Macular Degeneration (AMD) is a kind of macular disease which mostly occurs in elderly people, and it may cause decreased vision or even lead to permanent blindness. Drusen is an important clinical indicator for AMD which can help doctors diagnose the disease and decide the strategy of treatment. Optical Coherence Tomography (OCT) is widely used in the diagnosis of ophthalmic diseases, including AMD. In this paper, we propose a classification method based on Multiple Instance Learning (MIL) to detect AMD. Drusen may appear in only a few slices of the OCT images, which is why MIL is utilized in our method. We divide the method into two phases: a training phase and a testing phase. We cluster the training features to create a codebook, and employ the trained classifier on the test set. Experiment results show that our method achieved high accuracy and effectiveness.

  12. Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations

    Energy Technology Data Exchange (ETDEWEB)

    Tamagnini, Paolo; Krause, Josua W.; Dasgupta, Aritra; Bertini, Enrico

    2017-05-14

To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytic interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.

  13. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  14. Multi-instance multi-label learning for whole slide breast histopathology

    Science.gov (United States)

    Mercan, Caner; Mercan, Ezgi; Aksoy, Selim; Shapiro, Linda G.; Weaver, Donald L.; Elmore, Joann G.

    2016-03-01

    Digitization of full biopsy slides using the whole slide imaging technology has provided new opportunities for understanding the diagnostic process of pathologists and developing more accurate computer aided diagnosis systems. However, the whole slide images also provide two new challenges to image analysis algorithms. The first one is the need for simultaneous localization and classification of malignant areas in these large images, as different parts of the image may have different levels of diagnostic relevance. The second challenge is the uncertainty regarding the correspondence between the particular image areas and the diagnostic labels typically provided by the pathologists at the slide level. In this paper, we exploit a data set that consists of recorded actions of pathologists while they were interpreting whole slide images of breast biopsies to find solutions to these challenges. First, we extract candidate regions of interest (ROI) from the logs of pathologists' image screenings based on different actions corresponding to zoom events, panning motions, and fixations. Then, we model these ROIs using color and texture features. Next, we represent each slide as a bag of instances corresponding to the collection of candidate ROIs and a set of slide-level labels extracted from the forms that the pathologists filled out according to what they saw during their screenings. Finally, we build classifiers using five different multi-instance multi-label learning algorithms, and evaluate their performances under different learning and validation scenarios involving various combinations of data from three expert pathologists. Experiments that compared the slide-level predictions of the classifiers with the reference data showed average precision values up to 62% when the training and validation data came from the same individual pathologist's viewing logs, and an average precision of 64% was obtained when the candidate ROIs and the labels from all pathologists were

  15. Stochastic learning of multi-instance dictionary for earth mover’s distance-based histogram comparison

    KAUST Repository

    Fan, Jihong

    2016-09-17

A dictionary plays an important role in multi-instance data representation: it maps bags of instances to histograms. Earth mover’s distance (EMD) is the most effective histogram distance metric for the application of multi-instance retrieval. However, up to now, no existing multi-instance dictionary learning method has been designed for EMD-based histogram comparison. To fill this gap, we develop the first EMD-optimal dictionary learning method using a stochastic optimization method. In the stochastic learning framework, we have one triplet of bags, including one basic bag, one positive bag, and one negative bag. These bags are mapped to histograms using a multi-instance dictionary. We argue that the EMD between the basic histogram and the positive histogram should be smaller than that between the basic histogram and the negative histogram. Based on this condition, we design a hinge loss. By minimizing this hinge loss and some regularization terms of the dictionary, we update the dictionary instances. Experiments on multi-instance retrieval applications, namely medical image retrieval and natural language relation classification, show its effectiveness compared to other dictionary learning methods. © 2016 The Natural Computing Applications Forum
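The triplet hinge loss over EMD-compared histograms can be made concrete for the one-dimensional case, where EMD reduces to the L1 distance between cumulative histograms (an illustrative sketch; the stochastic dictionary update step of the paper is omitted, and all names are ours):

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth mover's distance between two 1-D normalised histograms,
    which equals the L1 distance of their cumulative sums."""
    return np.abs(np.cumsum(h1) - np.cumsum(h2)).sum()

def bag_to_histogram(bag, dictionary):
    """Map a bag of instances (n_instances, dim) to a normalised histogram
    of nearest dictionary atoms (n_atoms, dim)."""
    d = np.linalg.norm(bag[:, None, :] - dictionary[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(dictionary))
    return counts / counts.sum()

def triplet_hinge_loss(basic, positive, negative, dictionary, margin=0.1):
    """Hinge loss encouraging EMD(basic, positive) + margin <= EMD(basic, negative)."""
    hb = bag_to_histogram(basic, dictionary)
    hp = bag_to_histogram(positive, dictionary)
    hn = bag_to_histogram(negative, dictionary)
    return max(0.0, margin + emd_1d(hb, hp) - emd_1d(hb, hn))
```

Minimising this loss over many sampled triplets, with the dictionary atoms as the free variables, is the stochastic learning loop the abstract describes.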

  16. Development of a New Web Portal for the Database on Demand Service

    CERN Document Server

    Altinigne, Can Yilmaz

    2017-01-01

The Database on Demand service allows members of CERN communities to provision and manage database instances of different flavours (MySQL, Oracle, PostgreSQL and InfluxDB). Users can create and edit these instances using the web interface of DB On Demand. This web front end is currently based on Java technologies and the ZK web framework, for which it is generally difficult to find experienced developers and which has fallen behind more modern web stacks in capabilities and usability.

  17. Event recognition in personal photo collections via multiple instance learning-based classification of multiple images

    Science.gov (United States)

    Ahmad, Kashif; Conci, Nicola; Boato, Giulia; De Natale, Francesco G. B.

    2017-11-01

Over the last few years, a rapid growth has been witnessed in the number of digital photos produced per year. This rapid growth poses challenges in the organization and management of multimedia collections, and one viable solution consists of arranging the media on the basis of the underlying events. However, album-level annotation and the presence of irrelevant pictures in photo collections make event-based organization of personal photo albums a challenging task. To tackle these challenges, in contrast to conventional approaches relying on supervised learning, we propose a pipeline for event recognition in personal photo collections relying on a multiple instance learning (MIL) strategy. MIL is a modified form of supervised learning and fits well for such applications with weakly labeled data. The experimental evaluation of the proposed approach is carried out on two large-scale datasets including a self-collected and a benchmark dataset. On both, our approach significantly outperforms the existing state-of-the-art.

  18. How to Mine Information from Each Instance to Extract an Abbreviated and Credible Logical Rule

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2014-10-01

Full Text Available Decision trees are particularly promising in symbolic representation and reasoning due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, their drawbacks, caused by the single-tree structure, cannot be ignored. A rigid decision path may cause the majority class to overwhelm other classes when dealing with imbalanced data sets, and pruning removes not only superfluous nodes, but also subtrees. The proposed learning algorithm, flexible hybrid decision forest (FHDF), mines information implicated in each instance to form logical rules on the basis of a chain rule of local mutual information, then forms different decision tree structures and decision forests later. The most credible decision path from the decision forest can be selected to make a prediction. Furthermore, functional dependencies (FDs), which are extracted from the whole data set based on association rule analysis, perform embedded attribute selection to remove nodes rather than subtrees, thus helping to achieve different levels of knowledge representation and improve model comprehension in the framework of semi-supervised learning. Naive Bayes replaces the leaf nodes at the bottom of the tree hierarchy, where the conditional independence assumption may hold. This technique reduces the potential for overfitting and overtraining and improves the prediction quality and generalization. Experimental results on UCI data sets demonstrate the efficacy of the proposed approach.

  19. Landmark-based deep multi-instance learning for brain disease diagnosis.

    Science.gov (United States)

    Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang

    2018-01-01

    In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Residual uncertainty estimation using instance-based learning with applications to hydrologic forecasting

    Science.gov (United States)

    Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.

    2017-08-01

    A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
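The kNN resampling post-processor is simple enough to state in a few lines (a sketch assuming a Euclidean similarity measure over the hydrometeorological predictors; variable names are ours):

```python
import numpy as np

def knn_uncertainty(hist_conditions, hist_errors, current, k=5,
                    quantiles=(0.05, 0.95)):
    """Residual-uncertainty interval for the current deterministic forecast:
    find the k historical hydrometeorological conditions most similar to
    the current one and return quantiles of their past forecast errors."""
    dist = np.linalg.norm(hist_conditions - current, axis=1)
    nearest = np.argsort(dist)[:k]
    return np.quantile(hist_errors[nearest], quantiles)
```

The returned error quantiles are then added to the deterministic forecast to form the predictive interval; the choice of predictors defining the search space drives the method's performance, as the abstract notes.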

  1. Figures du temps historique et instances de la méthode

    Directory of Open Access Journals (Sweden)

    Silvia Caianiello

    2017-12-01

Full Text Available Figures of historical time and instances of the method. The paper highlights the interaction between the representations of historical time and the epistemological approaches to history, with particular emphasis on the process of discretization of time. This outcome, a precondition for the building of the modern historical consciousness, is pursued through different pathways, such as the history of the philological method, the Querelle des Anciens et des Modernes, and the acknowledgement of the secular transformation of the earth. Notwithstanding the paradigmatic relevance of the great philosophies of history by Herder and Hegel, it is argued that the shift to an open-ended representation of historical time was established only in the scientific historicism of Ranke and Droysen. Droysen’s pathbreaking epistemological innovations are compared to the major tenets of the Annales School in France, and it is further argued that the epistemological endeavour to restore the intelligibility of history in a global, albeit pluralized, perspective brings Braudel close to the systemic approaches of the biology of his time.

  2. Decision Trees for Continuous Data and Conditional Mutual Information as a Criterion for Splitting Instances.

    Science.gov (United States)

    Drakakis, Georgios; Moledina, Saadiq; Chomenidis, Charalampos; Doganis, Philip; Sarimveis, Haralambos

    2016-01-01

    Decision trees are renowned in the computational chemistry and machine learning communities for their interpretability. Their capacity and usage are somewhat limited by the fact that they normally work on categorical data. Improvements to known decision tree algorithms are usually carried out by increasing and tweaking parameters, as well as the post-processing of the class assignment. In this work we attempted to tackle both these issues. Firstly, conditional mutual information was used as the criterion for selecting the attribute on which to split instances. The algorithm performance was compared with the results of C4.5 (WEKA's J48) using default parameters and no restrictions. Two datasets were used for this purpose, DrugBank compounds for HRH1 binding prediction and Traditional Chinese Medicine formulation predicted bioactivities for therapeutic class annotation. Secondly, an automated binning method for continuous data was evaluated, namely Scott's normal reference rule, in order to allow any decision tree to easily handle continuous data. This was applied to all approved drugs in DrugBank for predicting the RDKit SLogP property, using the remaining RDKit physicochemical attributes as input.
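
The two building blocks of this record (a binning rule for continuous attributes and an information-theoretic splitting criterion) can be sketched as follows. This is a simplified illustration, not the paper's implementation: the mutual information shown is unconditional, whereas the paper conditions on attributes already used along the branch.

```python
import math
from collections import Counter

def scott_bins(values):
    """Discretise a continuous attribute with Scott's normal reference rule:
    bin width h = 3.49 * sigma * n**(-1/3). Returns the bin edges."""
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    h = 3.49 * sigma * n ** (-1 / 3)
    lo, hi = min(values), max(values)
    if h == 0:                      # constant attribute: a single bin
        return [lo, hi]
    edges, e = [], lo
    while e < hi:
        edges.append(e)
        e += h
    edges.append(hi)
    return edges

def mutual_information(xs, ys):
    """I(X;Y) in bits between two discrete sequences -- the core quantity
    behind the splitting criterion (here without conditioning)."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / (px[x] * py[y] / n ** 2))
               for (x, y), c in pxy.items())
```

A tree builder would bin each continuous attribute with `scott_bins`, then split on the attribute whose binned values carry the most information about the class.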

  3. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  4. When palliative treatment achieves more than palliation: Instances of long-term survival after palliative radiotherapy

    Directory of Open Access Journals (Sweden)

    Madhup Rastogi

    2012-01-01

    Full Text Available Context: Palliative radiotherapy aims at symptom alleviation and improvement of quality of life. It may be effective in conferring a reasonable quantum of local control, as well as possibly prolonging survival in the short term. However, there can be rare instances where long-term survival, or even cure, results from palliative radiotherapy, which mostly uses sub-therapeutic doses. Aim: To categorize and characterize the patients with long-term survival and/or cure after palliative radiotherapy. Materials and Methods: This study is a retrospective analysis of hospital records of patients treated with palliative radiotherapy from 2001 to 2006 at the Regional Cancer Centre, Shimla. Results: Of the analyzed 963 patients who received palliative radiotherapy, 2.4% (n = 23 survived at least 5 years, with a large majority of these surviving patients (73.9%, n = 17 being free of disease. Conclusions: In addition to providing valuable symptom relief, palliative radiotherapy utilizing sub-therapeutic doses may, in a small proportion of patients, bestow long-term survival, and possibly cure. Rationally, such a favorable, but rare outcome cannot be expected with supportive care alone.

  5. The boundaries of instance-based learning theory for explaining decisions from experience.

    Science.gov (United States)

    Gonzalez, Cleotilde

    2013-01-01

    Most demonstrations of how people make decisions in risky situations rely on decisions from description, where outcomes and their probabilities are explicitly stated. But recently, more attention has been given to decisions from experience where people discover these outcomes and probabilities through exploration. More importantly, risky behavior depends on how decisions are made (from description or experience), and although prospect theory explains decisions from description, a comprehensive model of decisions from experience is yet to be found. Instance-based learning theory (IBLT) explains how decisions are made from experience through interactions with dynamic environments (Gonzalez et al., 2003). The theory has shown robust explanations of behavior across multiple tasks and contexts, but it is becoming unclear what the theory is able to explain and what it does not. The goal of this chapter is to start addressing this problem. I will introduce IBLT and a recent cognitive model based on this theory: the IBL model of repeated binary choice; then I will discuss the phenomena that the IBL model explains and those that the model does not. The argument is for the theory's robustness but also for clarity in terms of concrete effects that the theory can or cannot account for. Copyright © 2013 Elsevier B.V. All rights reserved.
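
The IBL model of repeated binary choice mentioned here rests on blending: each option's value is a retrieval-probability-weighted average of past outcomes, with retrieval driven by ACT-R base-level activation. The sketch below is a stripped-down illustration under stated assumptions (activation noise is omitted, and the temperature constant is a typical choice, not a value from the chapter).

```python
import math

def blended_value(instances, t, d=0.5):
    """Blended value of one option in a simplified IBL sketch.

    instances: list of (outcome, [timestamps of past observations]);
    activation follows the ACT-R base-level equation
    A_i = ln(sum_j (t - t_j)**(-d)); retrieval probabilities are a
    softmax over activations (noise omitted for clarity).
    """
    acts = [math.log(sum((t - tj) ** (-d) for tj in ts))
            for _, ts in instances]
    tau = math.sqrt(2) * 0.25          # assumed temperature parameter
    ws = [math.exp(a / tau) for a in acts]
    z = sum(ws)
    return sum(w / z * outcome for (outcome, _), w in zip(instances, ws))
```

Choosing between two options then amounts to comparing their blended values; recently and frequently observed outcomes dominate because of the power-law decay.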

  6. A Fisher Kernel Approach for Multiple Instance Based Object Retrieval in Video Surveillance

    Directory of Open Access Journals (Sweden)

    MIRONICA, I.

    2015-11-01

    Full Text Available This paper presents an automated surveillance system that exploits the Fisher Kernel representation in the context of multiple-instance object retrieval task. The proposed algorithm has the main purpose of tracking a list of persons in several video sources, using only few training examples. In the first step, the Fisher Kernel representation describes a set of features as the derivative with respect to the log-likelihood of the generative probability distribution that models the feature distribution. Then, we learn the generative probability distribution over all features extracted from a reduced set of relevant frames. The proposed approach shows significant improvements and we demonstrate that Fisher kernels are well suited for this task. We demonstrate the generality of our approach in terms of features by conducting an extensive evaluation with a broad range of keypoints features. Also, we evaluate our method on two standard video surveillance datasets attaining superior results comparing to state-of-the-art object recognition algorithms.
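
The core of a Fisher Kernel representation is the gradient of the log-likelihood of a generative model with respect to its parameters. As a heavily simplified illustration (a single 1-D Gaussian rather than the GMMs over multi-dimensional keypoint descriptors used in practice), the gradient components look like this:

```python
import math

def fisher_vector_1d(features, mu, var):
    """Gradient of the average log-likelihood of a 1-D Gaussian N(mu, var)
    with respect to mu and var, over a set of scalar features.

    For x ~ N(mu, var):  d/dmu log p(x)  = (x - mu) / var
                         d/dvar log p(x) = (x - mu)**2 / (2 var**2) - 1 / (2 var)
    """
    n = len(features)
    d_mu = sum((x - mu) / var for x in features) / n
    d_var = sum((x - mu) ** 2 / (2 * var ** 2) - 1 / (2 * var)
                for x in features) / n
    return d_mu, d_var
```

When the features match the model exactly, the gradient vanishes; deviations of a feature set from the learned distribution are what the resulting vector encodes and what a discriminative classifier is trained on.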

  7. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST. Program Files doi:http://dx.doi.org/10.17632/w7rgdrhb85.1 Licensing provisions: BSD 3-clause Programming language: C, C++ External routines/libraries: For compiling: SCons, MPI (optional) Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl. Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016 Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version. Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton. Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and

  8. Instance-based Policy Learning by Real-coded Genetic Algorithms and Its Application to Control of Nonholonomic Systems

    Science.gov (United States)

    Miyamae, Atsushi; Sakuma, Jun; Ono, Isao; Kobayashi, Shigenobu

    The stabilization control of nonholonomic systems has been extensively studied because it is essential for nonholonomic robot control problems. The difficulty in this problem is that a theoretical derivation of the control policy is not guaranteed to be achievable. In this paper, we present a reinforcement learning (RL) method with instance-based policy (IBP) representation, in which control policies for this class are optimized with respect to user-defined cost functions. Direct policy search (DPS) is an approach to RL in which the policy is represented by parametric models and the model parameters are directly searched by optimization techniques, including genetic algorithms (GAs). In IBP representation, an instance consists of a state-action pair, and a policy consists of a set of instances. Several DPSs with IBP have been proposed previously; these methods sometimes fail to obtain optimal control policies when state-action variables are continuous. In this paper, we present a real-coded GA for DPSs with IBP, specifically designed for continuous domains. Optimization of an IBP faces three difficulties: high dimensionality, epistasis, and multi-modality. Our solution is designed to overcome these difficulties. Policy search with IBP representation appears to be a high-dimensional optimization problem; however, the instances that can improve the fitness are often limited to active instances (instances used in the evaluation), and the number of active instances is small. We therefore treat the search problem as a low-dimensional one by restricting the search variables to active instances. It is commonly known that functions with epistasis can be efficiently optimized with crossovers that satisfy the inheritance of statistics. For efficient search of an IBP, we propose extended crossover-like mutation (extended XLM), which generates a new instance around an existing instance while satisfying the inheritance of statistics.
    For overcoming multi-modality, we

  9. Salience Assignment for Multiple-Instance Data and Its Application to Crop Yield Prediction

    Science.gov (United States)

    Wagstaff, Kiri L.; Lane, Terran

    2010-01-01

    An algorithm was developed to generate crop yield predictions from orbital remote sensing observations, by analyzing thousands of pixels per county and the associated historical crop yield data for those counties. The algorithm determines which pixels contain which crop. Since each known yield value is associated with thousands of individual pixels, this is a multiple instance learning problem. Because individual crop growth is related to the resulting yield, this relationship has been leveraged to identify pixels that are individually related to corn, wheat, cotton, and soybean yield. Those that have the strongest relationship to a given crop's yield values are most likely to contain fields with that crop. Remote sensing time series data (a new observation every 8 days) was examined for each pixel, which contains information for that pixel's growth curve, peak greenness, and other relevant features. An alternating-projection (AP) technique was used to first estimate the "salience" of each pixel, with respect to the given target (crop yield), and then those estimates were used to build a regression model that relates input data (remote sensing observations) to the target. This is achieved by constructing an exemplar for each crop in each county that is a weighted average of all the pixels within the county; the pixels are weighted according to the salience values. The new regression model estimate then informs the next estimate of the salience values. By iterating between these two steps, the algorithm converges to a stable estimate of both the salience of each pixel and the regression model. The salience values indicate which pixels are most relevant to each crop under consideration.
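
The alternating loop described above can be caricatured in a toy form. This is purely illustrative and not the published algorithm: pixels are scalar, the regression is a 1-D least-squares line, and the salience update (inverse prediction error) is an assumption standing in for the paper's estimator.

```python
def alternating_salience(bags, targets, iters=20):
    """Toy alternating-projection sketch for multiple-instance regression.

    bags: per county, a list of (scalar) pixel values;
    targets: the county-level yields. Alternates between
    (1) building each county's exemplar as the salience-weighted mean of
        its pixels, and
    (2) refitting a least-squares line from exemplars to yields, then
        raising the salience of pixels the fitted model predicts well.
    """
    saliences = [[1.0] * len(b) for b in bags]
    a, c = 1.0, 0.0                       # regression slope / intercept
    for _ in range(iters):
        # (1) salience-weighted exemplar per bag
        ex = [sum(w * x for w, x in zip(ws, b)) / sum(ws)
              for ws, b in zip(saliences, bags)]
        # (2) least-squares fit of yield on exemplar
        n = len(ex)
        mx, my = sum(ex) / n, sum(targets) / n
        var = sum((x - mx) ** 2 for x in ex) or 1e-12
        a = sum((x - mx) * (y - my) for x, y in zip(ex, targets)) / var
        c = my - a * mx
        # salience update: pixels whose prediction is near the target gain weight
        for ws, b, y in zip(saliences, bags, targets):
            for i, x in enumerate(b):
                ws[i] = 1.0 / (1e-6 + abs(a * x + c - y))
    return saliences, (a, c)
```

The real method operates on multi-dimensional time-series features and converges to a joint estimate of per-pixel salience and the regression model.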

  10. Cyber situation awareness: modeling detection of cyber attacks with instance-based learning theory.

    Science.gov (United States)

    Dutt, Varun; Ahn, Young-Suk; Gonzalez, Cleotilde

    2013-06-01

    To determine the effects of an adversary's behavior on the defender's accurate and timely detection of network threats. Cyber attacks cause major work disruption. It is important to understand how a defender's behavior (experience and tolerance to threats), as well as adversarial behavior (attack strategy), might impact the detection of threats. In this article, we use cognitive modeling to make predictions regarding these factors. Different model types representing a defender, based on Instance-Based Learning Theory (IBLT), faced different adversarial behaviors. A defender's model was defined by experience of threats: threat-prone (90% threats and 10% nonthreats) and nonthreat-prone (10% threats and 90% nonthreats); and different tolerance levels to threats: risk-averse (model declares a cyber attack after perceiving one threat out of eight total) and risk-seeking (model declares a cyber attack after perceiving seven threats out of eight total). Adversarial behavior is simulated by considering different attack strategies: patient (threats occur late) and impatient (threats occur early). For an impatient strategy, risk-averse models with threat-prone experiences show improved detection compared with risk-seeking models with nonthreat-prone experiences; however, the same is not true for a patient strategy. Based upon model predictions, a defender's prior threat experiences and his or her tolerance to threats are likely to predict detection accuracy; but considering the nature of adversarial behavior is also important. Decision-support tools that consider the role of a defender's experience and tolerance to threats along with the nature of adversarial behavior are likely to improve a defender's overall threat detection.

  11. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  12. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. These libraries do not scale with dataset size, do not have a declarative query language, and require deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes.
    We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
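
The incidence-graph idea underlying this record can be illustrated with a minimal data structure. This is a sketch only; real mesh libraries and the ImG-Complexes model are far richer. Cells are keyed by (dimension, id), and incidence links a d-cell to the (d-1)-cells on its boundary.

```python
class Mesh:
    """Minimal incidence-graph sketch of an unstructured mesh."""

    def __init__(self):
        self.cells = {}        # (dim, id) -> attached data fields
        self.boundary = {}     # (dim, id) -> set of (dim-1, id) boundary cells

    def add_cell(self, dim, cid, data=None, boundary=()):
        self.cells[(dim, cid)] = data or {}
        self.boundary[(dim, cid)] = set(boundary)

    def incident_cells(self, dim, cid):
        """Cells of dimension dim+1 whose boundary contains this cell --
        the basic query a mesh database must answer efficiently."""
        return [c for c, b in self.boundary.items()
                if c[0] == dim + 1 and (dim, cid) in b]
```

For two triangles sharing an edge, `incident_cells` on that edge returns both triangles; a declarative mesh query language would express such traversals without exposing this storage layout.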

  13. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database ... values can be saved to build a small database or add to an existing database for national, ...

  14. Multiple priming instances increase the impact of practice-based but not verbal code-based stimulus-response associations.

    Science.gov (United States)

    Pfeuffer, Christina U; Moutsopoulou, Karolina; Waszak, Florian; Kiesel, Andrea

    2017-05-13

    Stimulus-response (S-R) associations, the basis of learning and behavioral automaticity, are formed by the (repeated) co-occurrence of stimuli and responses and render stimuli able to automatically trigger associated responses. The strength and behavioral impact of these S-R associations increases with the number of priming instances (i.e., practice). Here we investigated whether multiple priming instances of a special form of instruction, verbal coding, also lead to the formation of stronger S-R associations in comparison to a single instance of priming. Participants either actively classified stimuli or passively attended to verbal codes denoting responses once or four times before S-R associations were probed. We found that whereas S-R associations formed on the basis of active task execution (i.e., practice) were strengthened by multiple priming instances, S-R associations formed on the basis of verbal codes (i.e., instruction) did not benefit from additional priming instances. These findings indicate a difference in the mechanisms underlying the encoding and/or retrieval of previously executed and verbally coded S-R associations. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. NoSQL Databases

    OpenAIRE

    PANYKO, Tomáš

    2013-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  16. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  17. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  18. Gaussian Multiple Instance Learning Approach for Mapping the Slums of the World Using Very High Resolution Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Vatsavai, Raju [ORNL

    2013-01-01

    In this paper, we present a computationally efficient algorithm based on multiple instance learning for mapping informal settlements (slums) using very high-resolution remote sensing imagery. From a remote sensing perspective, informal settlements share unique spatial characteristics that distinguish them from other urban structures like industrial, commercial, and formal residential settlements. However, regular pattern recognition and machine learning methods, which are predominantly single-instance or per-pixel classifiers, often fail to accurately map the informal settlements as they do not capture the complex spatial patterns. To overcome these limitations we employed a multiple instance based machine learning approach, where groups of contiguous pixels (image patches) are modeled as generated by a Gaussian distribution. We have conducted several experiments on very high-resolution satellite imagery, representing four unique geographic regions across the world. Our method showed consistent improvement in accurately identifying informal settlements.
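
The patch-as-Gaussian idea can be sketched in one dimension. This is an illustration under stated assumptions, not the paper's algorithm: real patches are multi-dimensional and the class models multivariate, but the decision rule (label a patch by the class Gaussian under which its pixels are jointly most likely) is the same in spirit.

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, variance) to a set of pixel features --
    a toy stand-in for the multivariate Gaussians fitted to image patches."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n or 1e-9
    return mu, var

def log_likelihood(x, params):
    mu, var = params
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def classify_patch(patch, class_models):
    """Label a patch (bag of pixel features) by the class Gaussian under
    which the patch's summed log-likelihood is highest."""
    return max(class_models,
               key=lambda c: sum(log_likelihood(x, class_models[c])
                                 for x in patch))
```

Because the whole patch is scored at once, the classifier responds to the distribution of a neighbourhood rather than to single pixels, which is what lets it separate informal settlements from spectrally similar formal structures.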

  19. ECG and XML: an instance of a possible XML schema for the ECG telemonitoring.

    Science.gov (United States)

    Di Giacomo, Paola; Ricci, Fabrizio L

    2005-03-01

    Management of many types of chronic diseases relies heavily on patients' self-monitoring of their disease conditions. In recent years, Internet-based home telemonitoring systems allowing transmission of patient data to a central database and offering immediate access to the data by the care providers have become available. The adoption of Extensible Mark-up Language (XML) as a W3C standard has generated considerable interest in the potential value of this language in health informatics. However, the telemonitoring systems often work with only one or a few types of medical devices. This is because different medical devices produce different types of data, and the existing telemonitoring systems are generally built around a proprietary data schema. In this paper, we describe a generic data schema for a telemonitoring system that is applicable to different types of medical devices and different diseases, and then we present an architecture, based on the XML schema, for exchanging clinical information (telemonitoring data, signals, and clinical reports) in the XML standard, keeping each electronic patient record up to date and integrating it in real time with the information collected during telemonitoring activities, across all the structures involved in the patient's healthcare process.
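
A device-agnostic schema of the kind described here boils down to naming signals generically instead of per device. The element and attribute names below are illustrative assumptions, not the schema from the paper:

```python
import xml.etree.ElementTree as ET

def reading_to_xml(device, patient_id, signals):
    """Serialise one device reading into a generic XML layout.

    signals: mapping of signal name -> (value, unit). Because the layout
    does not hard-code any particular device's fields, an ECG monitor and
    a glucometer can share the same schema.
    """
    root = ET.Element("telemonitoringRecord",
                      patient=patient_id, device=device)
    for name, (value, unit) in signals.items():
        sig = ET.SubElement(root, "signal", name=name, unit=unit)
        sig.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

The central database then only needs to understand the generic `signal` element, which is what makes the approach extensible to new device types.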

  20. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development, using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D programme. IOC is a linkage control system between sub-projects, used to share and integrate the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. And the reserved documents database is developed to manage documents and reports produced since project accomplishment

  2. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  3. Oracle database systems administration

    OpenAIRE

    Šilhavý, Dominik

    2017-01-01

    The Master's thesis Oracle database systems administration describes problems that arise in database systems and how to solve them, which is important for database administrators. It helps them deliver solutions faster, without the need to search for or work out solutions on their own. The thesis describes database backup and recovery methods, which are closely related to these solutions. The main goal is to provide guidance and recommendations regarding database troubles and how to solve them. It ...

  4. A Pareto-based Ensemble with Feature and Instance Selection for Learning from Multi-Class Imbalanced Datasets.

    Science.gov (United States)

    Fernández, Alberto; Carmona, Cristobal José; José Del Jesus, María; Herrera, Francisco

    2017-09-01

    Imbalanced classification refers to problems with an uneven distribution of instances among classes. When, in addition, instances are located in the overlapping areas between classes, correct modeling of the problem becomes harder. Current solutions for both issues are often focused on the binary case study, as multi-class datasets require an additional effort to be addressed. In this research, we overcome these problems by carrying out a combination of feature and instance selection. Feature selection simplifies the overlapping areas, easing the generation of rules to distinguish among the classes. Selection of instances from all classes addresses the imbalance itself by finding the most appropriate class distribution for the learning task, as well as possibly removing noise and difficult borderline examples. For the sake of obtaining an optimal joint set of features and instances, we embedded the search for both parameters in a Multi-Objective Evolutionary Algorithm, using the C4.5 decision tree as the baseline classifier in this wrapper approach. The multi-objective scheme offers a double advantage: the search space becomes broader, and we may provide a set of different solutions in order to build an ensemble of classifiers. This proposal has been contrasted with several state-of-the-art solutions on imbalanced classification, showing excellent results in both binary and multi-class problems.
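
In a wrapper approach like the one described, each chromosome encodes a feature mask and an instance mask, and its fitness comes from training and testing a classifier on the selected subset. The sketch below illustrates one such evaluation; it uses a nearest-centroid classifier as a stand-in for C4.5 purely for brevity, and the second objective (fraction of data retained) is an assumed simplification of the paper's objectives.

```python
def evaluate_masks(X, y, feat_mask, inst_mask, X_test, y_test):
    """Evaluate one chromosome (feature mask + instance mask).

    Returns the two objectives a Pareto-based EA would trade off here:
    test accuracy of the wrapped classifier, and the fraction of
    features/instances retained.
    """
    feats = [i for i, m in enumerate(feat_mask) if m]
    rows = [(r, c) for (r, c), m in zip(zip(X, y), inst_mask) if m]

    # class centroids over the selected features and instances
    cents = {}
    for r, c in rows:
        cents.setdefault(c, []).append([r[i] for i in feats])
    cents = {c: [sum(col) / len(vs) for col in zip(*vs)]
             for c, vs in cents.items()}

    def predict(r):
        v = [r[i] for i in feats]
        return min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(v, cents[c])))

    acc = sum(predict(r) == t for r, t in zip(X_test, y_test)) / len(y_test)
    kept = (sum(feat_mask) / len(feat_mask)
            + sum(inst_mask) / len(inst_mask)) / 2
    return acc, kept
```

A multi-objective EA such as NSGA-II would then keep the Pareto front of (accuracy, kept) pairs, and the front's classifiers can be combined into an ensemble as the abstract describes.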

  5. Multimedia database design and architecture in radiology communications

    Science.gov (United States)

    Karmouch, Ahmed; Georganas, Nicolas D.; Goldberg, Morris

    1990-08-01

    A multimedia database is a fundamental component of the Multimedia Radiology Information System. It provides the means by which different classes of documents (diagnostic reports, images, patient information, etc.), involving several media (text, image, graphics, voice), are structured, represented, stored, manipulated and retrieved. In this paper, we first present the requirements for a multimedia database along with the data model used to describe the radiology information and procedures. Then, we describe our approach to the multimedia database design and architecture, and the corresponding prototype instance in terms of implementation and storage strategies. Finally, we conclude with an overview of the research undertaken towards enhanced multimedia data management

  6. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RED Database Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti...on The Rice Expression Database (RED) is a database that aggregates the gene expr...icroarray Project and other research groups. Features and manner of utilization of database

  7. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RMOS Database Description General information of database Database name RMOS Alternative nam...arch Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Microarray Data and other Gene Expression Database...s Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The Ric...e Microarray Opening Site is a database of comprehensive information for Rice Mic...es and manner of utilization of database You can refer to the information of the

  8. Kentucky geotechnical database.

    Science.gov (United States)

    2005-03-01

    Development of a comprehensive dynamic, geotechnical database is described. Computer software selected to program the client/server application in windows environment, components and structure of the geotechnical database, and primary factors cons...

  9. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1991-11-01

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability

  10. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  11. Physiological Information Database (PID)

    Science.gov (United States)

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  12. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is Mysql. It is manually updated by E3 staff as...

  13. Database Urban Europe

    NARCIS (Netherlands)

    Sleutjes, B.; de Valk, H.A.G.

    2016-01-01

    Database Urban Europe: ResSegr database on segregation in The Netherlands. Collaborative research on residential segregation in Europe 2014–2016 funded by JPI Urban Europe (Joint Programming Initiative Urban Europe).

  14. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can afford only one, the choice must be based on institutional needs.

  15. Scopus database: a review

    OpenAIRE

    Burnham, Judy F

    2006-01-01

The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can afford only one, the choice must be based on institutional needs.

  16. Shapes of reaction-time distributions and shapes of learning curves: a test of the instance theory of automaticity.

    Science.gov (United States)

    Logan, G D

    1992-09-01

    The instance theory assumes that automatic performance is based on single-step direct-access retrieval from memory of prior solutions to present problems. The theory predicts that the shape of the learning curve depends on the shape of the distribution of retrieval times. One can deduce from the fundamental assumptions of the theory that (1) the entire distribution of reaction times, not just the mean, will decrease as a power function of practice; (2) asymptotically, the retrieval-time distribution must be a Weibull distribution; and (3) the exponent of the Weibull, which is the parameter that determines its shape, must be the reciprocal of the exponent of the power function. These predictions were tested and mostly confirmed in 12 data sets from 2 experiments. The ability of the instance theory to predict the power law is contrasted with the ability of other theories to account for it.
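The power-function/Weibull relationship described in this record can be illustrated numerically: if single-instance retrieval times are Weibull-distributed with shape c, the minimum over n stored instances is again Weibull with its scale shrunk by n**(-1/c), so mean reaction time falls as a power function of practice with exponent 1/c. A minimal Monte Carlo sketch (the shape value c = 2 and the practice levels are arbitrary choices for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 2.0           # hypothetical Weibull shape parameter
trials = 200_000  # Monte Carlo samples per practice level

# Response time = minimum of n independent instance retrieval times.
# The minimum of n iid Weibull(c) variables is Weibull(c) with scale
# n**(-1/c), so mean RT should decay as n**(-1/c): a power function.
means = []
for n in [1, 2, 4, 8, 16]:
    rts = rng.weibull(c, size=(trials, n)).min(axis=1)
    means.append(rts.mean())

# Estimate the power-law exponent from successive doublings of practice;
# each estimate should be close to 1/c = 0.5.
exponents = [np.log2(means[i] / means[i + 1]) for i in range(4)]
print(means)
print(exponents)
```

The estimated exponents cluster around 1/c, mirroring the paper's claim that the Weibull shape parameter is the reciprocal of the power-function exponent.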

  17. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure (OS parameters and packages, kernel versions, database parameters and patches, or even schema changes) can potentially harm production services. This presentation shows how automatic, regular testing of Oracle databases can be achieved in such an agile environment.
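The basic idea of automatic regression testing of a database can be sketched in a few lines. The snippet below uses SQLite from the Python standard library purely as a stand-in (the talk concerns Oracle); the table, checks, and baseline values are all invented for illustration:

```python
import sqlite3

# A fixed set of health-check queries is run after every infrastructure
# change and compared against a recorded baseline, so parameter, patch,
# or schema changes that alter behavior are flagged automatically.
checks = {
    "row_count": "SELECT COUNT(*) FROM measurements",
    "max_value": "SELECT MAX(value) FROM measurements",
}
baseline = {"row_count": 3, "max_value": 42}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO measurements (value) VALUES (?)",
                 [(7,), (42,), (13,)])

report = {}
for name, sql in checks.items():
    (got,) = conn.execute(sql).fetchone()
    report[name] = (got, got == baseline[name])  # (observed, passed?)
print(report)
```

A real harness would of course also cover performance metrics and execution plans, but the compare-against-baseline loop is the core pattern.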

  18. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

It has become highly desirable to provide users with flexible ways to query/search information over databases, as simple as a Google-style keyword search. This book surveys recent developments in keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  19. Nuclear power economic database

    International Nuclear Information System (INIS)

    Ding Xiaoming; Li Lin; Zhao Shiping

    1996-01-01

Nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts: an economic database of nuclear power stations, an economic database of the nuclear fuel cycle, and an economic database of nuclear power planning and the nuclear environment. The economic database of nuclear power stations includes data on general economics, technique, capital cost and benefit, etc. The economic database of the nuclear fuel cycle includes data on technique and nuclear fuel prices. The economic database of nuclear power planning and the nuclear environment includes data on energy history, forecasts, energy balance, electric power and energy facilities

  20. Protein sequence databases.

    Science.gov (United States)

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H

    2004-02-01

A variety of protein sequence databases exist, ranging from simple sequence repositories, which store data with little or no manual intervention in the creation of the records, to expertly curated universal databases that cover all species and in which the original sequence data are enhanced by the manual addition of further information in each sequence record. As the focus of researchers moves from the genome to the proteins encoded by it, these databases will play an even more important role as central comprehensive resources of protein information. Several of the leading protein sequence databases are discussed here, with special emphasis on the databases now provided by the Universal Protein Knowledgebase (UniProt) consortium.

  1. Visualization of multidimensional database

    Science.gov (United States)

    Lee, Chung

    2008-01-01

The concept of multidimensional databases has been extensively researched and widely used in actual database applications. It plays an important role in contemporary information technology, but due to the complexity of its inner structure, database design is a complicated process and users have a hard time fully understanding and using the database. An effective visualization tool for higher-dimensional information systems helps database designers and users alike. Most visualization techniques focus on displaying dimensional data using spreadsheets and charts. This may be sufficient for databases having three or fewer dimensions, but for higher dimensions, various combinations of projection operations are needed and a full grasp of the total database architecture is very difficult. This study reviews existing visualization techniques for multidimensional databases and then proposes an alternate approach to visualize a database of any dimension by adopting the tool proposed by Kiviat for software engineering processes. In this diagramming method, each dimension is represented by one branch of concentric spikes. This paper documents a C++ based visualization tool with extensive use of the OpenGL graphics library and GUI functions. Detailed examples of actual databases demonstrate the feasibility and effectiveness of visualizing multidimensional databases.
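The Kiviat-style mapping this record describes is easy to sketch: each of the d dimensions becomes a spoke at angle 2*pi*i/d, and a record's normalized value along that dimension becomes a point at that radius on the spoke. The sketch below is a minimal geometric illustration only (the paper's actual tool is C++/OpenGL based, and the sample values are invented):

```python
import math

def kiviat_points(values):
    """Map one record's normalized dimension values to spoke coordinates."""
    d = len(values)
    return [(v * math.cos(2 * math.pi * i / d),
             v * math.sin(2 * math.pi * i / d))
            for i, v in enumerate(values)]

# A hypothetical 4-dimensional record; connecting the points in order
# yields the polygon drawn on the Kiviat diagram.
pts = kiviat_points([1.0, 0.5, 0.25, 0.75])
print(pts)
```

A plotting layer would then draw the spokes and join consecutive points into a polygon, one polygon per database record.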

  2. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RPD Database Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...AGE) reference maps. Features and manner of utilization of database Proteins extracted from organs and subce

  3. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us ASTRA Database Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...mes. Features and manner of utilization of database This database enables to sear...ch and represent alternative splicing/transcriptional initiation genes and their patterns (ex: cassette) base

  4. Highlights of the HITRAN2016 database

    Science.gov (United States)

    Gordon, I.; Rothman, L. S.; Hill, C.; Kochanov, R. V.; Tan, Y.

    2016-12-01

The HITRAN2016 database will be released just before the AGU meeting. It is a titanic effort of world-wide collaboration between experimentalists, theoreticians and atmospheric scientists, who measure, calculate and validate the HITRAN data. The line-by-line lists for almost all of the HITRAN molecules were updated in comparison with the previous compilation HITRAN2012 [1] that has been in use, along with some intermediate updates, since 2012. The extent of the updates ranges from updating a few lines of certain molecules to complete replacements of the lists and introduction of additional isotopologues. Many more vibrational bands were added to the database, extending the spectral coverage and completeness of the datasets. For several molecules, including H2O, CO2 and CH4, the extent of the updates is so complex that separate task groups were assembled to make strategic decisions about the choices of sources for various parameters in different spectral regions. The number of parameters has also been significantly increased, now incorporating, for instance, non-Voigt line profiles [2]; broadening by gases other than air and "self" [3]; and other phenomena, including line mixing. In addition, the number of cross-sectional sets in the database has increased dramatically and includes many recent experiments as well as adaptation of existing databases that were not in HITRAN previously (for instance the PNNL database [4]). The HITRAN2016 edition takes full advantage of the new structure and interface available at www.hitran.org [5] and the HITRAN Application Programming Interface [6]. This poster will provide a summary of the updates, emphasizing details of some of the most important or dramatic improvements. The users of the database will have an opportunity to discuss the updates relevant to their research and request a demonstration on how to work with the database. This work is supported by the NASA PATM (NNX13AI59G), PDART (NNX16AG51G) and AURA (NNX14AI55G

  5. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Database Description General information of database Database name Trypanosomes Database...stitute of Genetics Research Organization of Information and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database... classification Protein sequence databases Organism Taxonom...y Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description The Trypanosomes database... is a database providing the comprehensive information of proteins that is effective t

  6. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n...ame Arabidopsis Phenome Database Alternative name - DOI 10.18908/lsdba.nbdc01509-000 Creator Creator Name: H... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database

  7. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  8. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit....... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past......, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include...

  9. AMDD: Antimicrobial Drug Database

    OpenAIRE

    Danishuddin, Mohd; Kaushal, Lalima; Hassan Baig, Mohd; Khan, Asad U.

    2012-01-01

Drug resistance is one of the major concerns for antimicrobial chemotherapy against any particular target. Knowledge of the primary structure of antimicrobial agents and their activities is essential for rational drug design. Thus, we developed a comprehensive database, the antimicrobial drug database (AMDD), of known synthetic antibacterial and antifungal compounds that were extracted from the available literature and other chemical databases, such as PubChem, PubChem BioAssay and ZINC. The ...

  10. Molecular Biology Database List.

    Science.gov (United States)

    Burks, C

    1999-01-01

    Molecular Biology Database List (MBDL) includes brief descriptions and pointers to Web sites for the various databases described in this issue as well as other Web sites presenting data sets relevant to molecular biology. This information is compiled into a list (http://www.oup.co.uk/nar/Volume_27/Issue_01/summary/gkc105_gml.html) which includes links both to source Web sites and to on-line versions of articles describing the databases. PMID:9847130

  11. Theory, For Instance

    DEFF Research Database (Denmark)

    Balle, Søren Hattesen

This paper takes its starting point in a short poem by Wallace Stevens from 1917, which incidentally bears the title “Theory”. The poem can be read as a parable of theory, i.e., as something literally ’thrown beside’ theory (cf. OED: “...“). In the philosophical tradition this is also how the style of theory has been figured, that is to say: as something that is incidental to it or just happens to be around as so much paraphernalia. In my reading of Stevens’ poem I shall argue that this is exactly the position from which Stevens takes off when he assumes the task of writing a personified portrait of theory. Theory emerges as always beside(s) itself in what constitutes its style, but the poem also suggests that theory’s style is what gives theory both its power and its contingency. Figured as a duchess, Theoria is only capable of retaining her power...

  12. Eurochemic: failure or instance

    International Nuclear Information System (INIS)

    Busekist, O. von

    1982-01-01

In this article, the author draws up a balance sheet of twenty-three years of good and bad luck of the European venture Eurochemic. It turns out that a significant number of errors of judgement lie at the source of the present difficult situation of the European market for nuclear reprocessing. (AF)

  13. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  14. LOWELL OBSERVATORY COMETARY DATABASE

    Data.gov (United States)

    National Aeronautics and Space Administration — The database presented here is comprised entirely of observations made utilizing conventional photoelectric photometers and narrowband filters isolating 5 emission...

  15. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  16. The Relational Database Dictionary

    CERN Document Server

    Date, C J

    2006-01-01

    Avoid misunderstandings that can affect the design, programming, and use of database systems. Whether you're using Oracle, DB2, SQL Server, MySQL, or PostgreSQL, The Relational Database Dictionary will prevent confusion about the precise meaning of database-related terms (e.g., attribute, 3NF, one-to-many correspondence, predicate, repeating group, join dependency), helping to ensure the success of your database projects. Carefully reviewed for clarity, accuracy, and completeness, this authoritative and comprehensive quick-reference contains more than 600 terms, many with examples, covering i

  17. Key health indicators database.

    Science.gov (United States)

    Menic, J L

    1990-01-01

    A new database developed by the Canadian Centre for Health Information (CCHI) contains 40 key health indicators and lets users select a range of disaggregations, categories and variables. The database can be accessed through CANSIM, Statistics Canada's electronic database and retrieval system, or through a package for personal computers. This package includes the database on diskettes, as well as software for retrieving and manipulating data and for producing graphics. A data dictionary, a user's guide and tables and graphs that highlight aspects of each indicator are also included.

  18. Intermodal Passenger Connectivity Database -

    Data.gov (United States)

    Department of Transportation — The Intermodal Passenger Connectivity Database (IPCD) is a nationwide data table of passenger transportation terminals, with data on the availability of connections...

  19. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  20. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  1. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  2. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  3. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    . These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted...... from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We...... also describe the design and implementation of the COUGAR sensor database system....
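The record's notion of a long-running query maintained as a persistent view over a time interval can be sketched as follows. This is a toy illustration of the windowed-view idea only, not the COUGAR implementation; the window length and sensor readings are invented:

```python
from collections import deque

class PersistentView:
    """Maintains a windowed average over a stream of sensor readings."""
    def __init__(self, window):
        self.window = window   # time interval the view covers
        self.buf = deque()     # (timestamp, value) pairs inside the window
        self.total = 0.0

    def push(self, t, value):
        # The query dictates which data is kept: only readings inside
        # the window contribute to the view.
        self.buf.append((t, value))
        self.total += value
        while self.buf and self.buf[0][0] <= t - self.window:
            _, old = self.buf.popleft()
            self.total -= old

    def value(self):
        return self.total / len(self.buf)

view = PersistentView(window=3)
for t, reading in enumerate([10.0, 12.0, 11.0, 50.0, 49.0]):
    view.push(t, reading)
print(view.value())  # average over readings from the last 3 ticks
```

The view is updated incrementally as readings arrive, rather than by shipping all raw data to a central store and querying it afterwards, which is the scalability point the abstract makes.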

  4. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  5. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  6. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RMG Database Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database... description This database contains information on the rice mitochondrial genome. You ca...sis results. Features and manner of utilization of database The mitochondrial genome information can be used

  7. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us JSNP Database Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database... classification Human Genes and Diseases - General polymorphism databases Organism Taxonomy Name: Homo ...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat... and manner of utilization of database Allele frequencies in Japanese populatoin are also available. License

  8. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Update History of This Database Date Update contents 2017/02/27... Arabidopsis Phenome Database English archive site is opened. - Arabidopsis Phenome Database (http://jphenom...e.info/?page_id=95) is opened. About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Update History of This Database - Arabidopsis Phenome Database | LSDB Archive ...

  9. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Update History of This Database Date Update contents 2017/03/13 SKIP Stemcell Database... English archive site is opened. 2013/03/29 SKIP Stemcell Database ( https://www.skip.med.k...eio.ac.jp/SKIPSearch/top?lang=en ) is opened. About This Database Database Description Download License Upda...te History of This Database Site Policy | Contact Us Update History of This Database - SKIP Stemcell Database | LSDB Archive ...

  10. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of a purpose-built access application.

  11. Enhancing navigation in biomedical databases by community voting and database-driven text classification.

    Science.gov (United States)

    Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph

    2009-10-03

    The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. 
Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well
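The pipeline this abstract describes (a bagged ensemble whose vote fraction serves as a class-probability estimate) can be illustrated with a toy analogue. Everything below is invented for illustration and is not the PepBank code: the documents, labels, and the word-overlap base classifier standing in for a decision tree.

```python
import random

# Toy labelled "abstracts": 1 = cancer-related, 0 = not related.
docs = [
    ("tumor growth and cancer angiogenesis", 1),
    ("cancer cell imaging with peptides", 1),
    ("tumor suppressor peptide screening", 1),
    ("database indexing and query optimization", 0),
    ("keyword search over relational databases", 0),
    ("database schema design patterns", 0),
]

def predict_one(bag, text):
    # Base classifier: return the label of the bagged document that
    # shares the most words with the query text.
    words = set(text.split())
    return max(bag, key=lambda d: len(words & set(d[0].split())))[1]

random.seed(0)
# Bagging: each model is trained on a bootstrap sample (with replacement).
bags = [[random.choice(docs) for _ in docs] for _ in range(25)]

def predict_proba(text):
    # Class probability = fraction of bootstrap models voting "related",
    # usable to visualize classification confidence during retrieval.
    return sum(predict_one(bag, text) for bag in bags) / len(bags)

p_pos = predict_proba("a peptide probe for cancer detection")
p_neg = predict_proba("tuning database query plans")
print(p_pos, p_neg)
```

The vote fraction behaves like the class-probability estimate the authors feed into their heat-map interface; a real system would use proper decision trees over text features rather than word overlap.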

  12. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning.

    Science.gov (United States)

    Fang, Leyuan; Yang, Liumao; Li, Shutao; Rabbani, Hossein; Liu, Zhimin; Peng, Qinghua; Chen, Xiangdong

    2017-06-01

    Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for retinal diseases diagnosis and treatment. As one kind of retinal disease (e.g., diabetic retinopathy) may contain multiple lesions (e.g., edema, exudates, and microaneurysms) and eye patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, one single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesions recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images with normal macular and macular with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving the average accuracy of 88.72% for the cases with multiple lesions, which better assists macular disease diagnosis and treatment.

  13. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  14. Structural Ceramics Database

    Science.gov (United States)

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  15. The international spinach database

    NARCIS (Netherlands)

    Treuren, van R.; Menting, F.B.J.

    2007-01-01

    The database concentrates on passport data of spinach of germplasm collections worldwide. All available passport data of accessions included in the International Spinach Database are downloadable as zipped Excel file. This zip file also contains the decoding tables, except for the FAO institutes

  16. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical etc.), the category of database (e.g. Administrative, Nuclear Data etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are listed only when information has been made available

  17. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  18. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    A Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news; paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996...

  19. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active...

  20. Consumer Product Category Database

    Science.gov (United States)

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  1. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...
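The record above compares database scaling types without giving code. As a purely illustrative sketch (not taken from the thesis), horizontal scaling usually means partitioning records across several nodes by a stable hash of the key; the class and field names below are hypothetical:

```python
import hashlib

class ShardedStore:
    """Toy horizontal scaling: records are partitioned across N shards by key hash."""

    def __init__(self, num_shards):
        self.shards = [{} for _ in range(num_shards)]

    def _shard_for(self, key):
        # A stable hash guarantees the same key always maps to the same shard.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % len(self.shards)

    def put(self, key, value):
        self.shards[self._shard_for(key)][key] = value

    def get(self, key):
        return self.shards[self._shard_for(key)].get(key)

store = ShardedStore(num_shards=4)
store.put("user:42", {"name": "Ada"})
print(store.get("user:42"))  # {'name': 'Ada'}
```

Adding or removing shards with this naive modulo scheme remaps most keys, which is exactly the kind of resource-versus-performance trade-off such analyses measure; production systems mitigate it with consistent hashing.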

  2. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...
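The abstract describes devices with properties, connectivity and hierarchy mapped to relational tables. A minimal sketch of such a schema, using SQLite in place of Oracle and with entirely hypothetical table and column names (the actual LHCb schema is not published in this record):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL UNIQUE,
    parent_id INTEGER REFERENCES device(id)   -- hierarchy: parent device
);
CREATE TABLE device_property (
    device_id INTEGER REFERENCES device(id),
    key       TEXT NOT NULL,
    value     TEXT
);
CREATE TABLE connection (                      -- connectivity between devices
    from_id   INTEGER REFERENCES device(id),
    to_id     INTEGER REFERENCES device(id)
);
""")
conn.execute("INSERT INTO device (id, name) VALUES (1, 'tfc_switch_0')")
conn.execute("INSERT INTO device (id, name, parent_id) VALUES (2, 'port_0', 1)")
conn.execute("INSERT INTO device_property VALUES (2, 'speed', '1Gb')")
conn.execute("INSERT INTO connection VALUES (1, 2)")

# Navigability: walk from a device down the hierarchy with a simple join-free query.
children = conn.execute(
    "SELECT name FROM device WHERE parent_id = 1").fetchall()
print(children)  # [('port_0',)]
```

The self-referencing `parent_id` column is one common way to encode a device hierarchy in a relational schema; key/value property rows keep the design open to device types with differing attributes.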

  3. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us DGBY Database Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...sion and function of Baker's yeast). Features and manner of utilization of database This database

  4. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--data from the results of the hazard evaluations; and (2) Hazard Topography Database--data from the system familiarization and hazard identification.

  5. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  6. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from nomenclatures has been developed. Chemical substances are input with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from the laws and regulations of Japan, RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  7. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, the result of the LandIT project, an industrial collaboration that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....

  8. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SAHG Database Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...Protein structure Human and other Vertebrate Genomes - Human ORFs Protein sequence database...s - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description...42,577 domain-structure models in ~24900 unique human protein sequences from the RefSeq database. Features a

  9. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PSCDB Database Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database... description The purpose of this database is to represent the relationship between p... Features and manner of utilization of database - License CC BY-SA Detail Background and funding - Reference...(s) Article title: PSCDB: a database for protein structural change upon ligand binding. Author name(s): T. A

  10. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us PLACE Database Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database... description PLACE is a database of motifs found in plant cis-acting regulatory DNA elements base...that have been identified in these motifs in other genes or in other plant species in later publications. The database

  11. Functionally Graded Materials Database

    Science.gov (United States)

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

    Functionally Graded Materials Database (hereinafter referred to as FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database includes 1,703 research information entries, along with data on 2,429 researchers, 509 institutions and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. This paper explains the FGMs Database, describing the research information data, the sitemap and how to use it. From the access analysis, user access results and users' interests are discussed.

  12. Marine Jurisdictions Database

    National Research Council Canada - National Science Library

    Goldsmith, Roger

    1998-01-01

    The purpose of this project was to take the data gathered for the Maritime Claims chart and create a Maritime Jurisdictions digital database suitable for use with oceanographic mission planning objectives...

  13. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  14. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    A Dialogue-inspired database with documentation, network (individual and institutional profiles) and current news; paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996...

  15. The Danish Melanoma Database

    DEFF Research Database (Denmark)

    Hölmich, Lisbet Rosenkrantz; Klausen, Siri; Spaun, Eva

    2016-01-01

    AIM OF DATABASE: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. STUDY POPULATION: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive......-node-metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a), results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence......, nature, and treatment hereof is registered. In case of death, the cause and date are included. Currently, all data are entered manually; however, data catchment from the existing registries is planned to be included shortly. DESCRIPTIVE DATA: The DMD is an old research database, but new as a clinical...

  16. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery amounting to ~5,200 procedures...... per year. The variables are collected along the course of treatment of the patient from the referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynecological findings, type of operation......, complications if relevant, implants used if relevant, 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database...

  17. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... for gynecological cancer. STUDY POPULATION: DGCD was initiated January 1, 2005, and includes all patients treated at Danish hospitals for cancer of the ovaries, peritoneum, fallopian tubes, cervix, vulva, vagina, and uterus, including rare histological types. MAIN VARIABLES: DGCD data are organized within separate...... is the registration of oncological treatment data, which is incomplete for a large number of patients. CONCLUSION: The very complete collection of available data from more registries form one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation...

  18. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  19. Atomicity for XML Databases

    Science.gov (United States)

    Biswas, Debmalya; Jiwane, Ashwin; Genest, Blaise

    With more and more data stored in XML databases, there is a need to provide the same level of failure resilience and robustness that users have come to expect from relational database systems. In this work, we discuss strategies to provide the transactional aspect of atomicity to XML databases. The main contribution of this paper is to propose a novel approach for performing updates-in-place on XML databases, with the undo statements stored in the same high-level language as the update statements. Finally, we give experimental results to study the performance/storage trade-off of the updates-in-place strategy (based on our undo proposal) against the deferred-updates strategy for providing atomicity.
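The abstract's core idea is that each in-place update records a compensating undo statement, replayed in reverse on abort. A minimal sketch of that pattern, using a plain dict to stand in for an XML document (the class and method names are hypothetical, not the paper's actual API):

```python
class UndoLogTransaction:
    """Updates-in-place with an undo log: each change records a compensating
    action, and abort replays the undo actions in reverse order (atomicity)."""

    def __init__(self, doc):
        self.doc = doc          # a dict standing in for an XML document
        self.undo_log = []

    def set(self, key, value):
        old = self.doc.get(key)
        # The undo "statement" is expressed in the same form as the update.
        if old is None:
            self.undo_log.append(("delete", key, None))
        else:
            self.undo_log.append(("set", key, old))
        self.doc[key] = value

    def commit(self):
        self.undo_log.clear()   # changes are already in place; just drop the log

    def abort(self):
        for op, key, old in reversed(self.undo_log):
            if op == "delete":
                del self.doc[key]
            else:
                self.doc[key] = old
        self.undo_log.clear()

doc = {"title": "draft"}
tx = UndoLogTransaction(doc)
tx.set("title", "final")
tx.set("author", "anon")
tx.abort()                      # rolls back both changes, last first
print(doc)  # {'title': 'draft'}
```

Storing the undo actions in the same language as the updates, as the paper proposes for XML update statements, keeps commit cheap (nothing to replay) at the cost of logging one compensating statement per change.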

  20. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  1. Ganymede Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 150 major impact craters on Ganymede and is updated semi-regularly based on continuing analysis...

  2. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  3. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  4. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  5. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern.

  6. Database for West Africa

    African Journals Online (AJOL)

    The compilation of the database cannot be carried out without adequate funding. It also needs strong and firm management. It is important that all participants ...

  7. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 900 or so impact craters on the surface of Venus by diameter, latitude, and name.

  8. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  9. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  10. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  11. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  12. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  13. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...

  14. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  15. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  16. Livestock Anaerobic Digester Database

    Science.gov (United States)

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  17. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  18. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  19. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  20. Callisto Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 150 major impact craters on Callisto and is updated semi-regularly based on continuing analysis...

  1. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  2. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  3. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  4. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables....... This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women...... in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public...

  5. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  6. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik

    1999-01-01

    This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III programme under contract JOR3-CT95-0061...

  7. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing oneself with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  8. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Update History of This Database Date Update contents 2014/05/07 The contact information is corrected. The features and manner of utilization of the database are corrected. 2014/02/04 Trypanosomes Database English archive site is opened. 2011/04/04 Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened. About This Database Database Description Download License Update History of This Database Site Policy | Contact Us

  9. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available sidue (or mutant) in a protein. The experimental data are collected from the literature both by searching th...the sequence database, UniProt, structural database, PDB, and literature database

  10. Experience and Lessons learnt from running High Availability Databases on Network Attached Storage

    OpenAIRE

    Guijarro, Manuel; Gaspar, Ruben

    2007-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two p...

  11. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KOME Database Description General information of database Database name KOME Alternative name Knowledge-base... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...ngth cDNA project is shown in the database. The full-length cDNA clones were collected from various tissues ...treated under various stress conditions. The database contains not only information about complete nucleotid

  12. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    Jervis, P.; Rutter, P.

    1988-09-01

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects, such as AGR data processing system refurbishments and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980s to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  13. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  14. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.

  15. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database - Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0… Contact: …Biomedical Innovation, 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan; TEL: 81-72-641-9826; Email: … Database classification: Toxicogenomics Database. Organism: Taxonomy Name: Rattus norvegicus, Taxonomy ID: 10116. Database description: On the pathological image database, over 53,000 high-resolution…

  16. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    Currently there is an enormous amount of various geoscience databases. Unfortunately the only users of the majority of the databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they could hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and now it is accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted to a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognised at login. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data…
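
    The upload format quoted above is a plain semicolon-delimited text file, so a small script can validate records before submission. The following sketch assumes the field order given in the abstract; the field names used here are illustrative, not the site's actual schema.

```python
import csv
import io
from datetime import datetime

# One upload line per record, semicolon-delimited, in the order given above:
# station name; latitude; longitude; station type; parameter type; value; date
SAMPLE = "Lake shore station;60.123456;031.654321;lake;pH;7.4;2011-08-15\n"

def parse_records(text):
    """Parse upload lines into dicts; numeric and date fields are validated."""
    fields = ["station", "latitude", "longitude",
              "station_type", "parameter", "value", "date"]
    records = []
    for row in csv.reader(io.StringIO(text), delimiter=";"):
        rec = dict(zip(fields, row))
        rec["latitude"] = float(rec["latitude"])    # dd.dddddd
        rec["longitude"] = float(rec["longitude"])  # ddd.dddddd
        rec["value"] = float(rec["value"])
        rec["date"] = datetime.strptime(rec["date"], "%Y-%m-%d").date()
        records.append(rec)
    return records

recs = parse_records(SAMPLE)
print(recs[0]["station"], recs[0]["value"])
```

    Because every field is validated before it reaches the database, malformed coordinates or dates fail loudly at upload time rather than polluting the map.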

  17. An Early Instance of Upper Palaeolithic Personal Ornamentation from China: The Freshwater Shell Bead from Shuidonggou 2.

    Directory of Open Access Journals (Sweden)

    Yi Wei

    Full Text Available We report the discovery and present a detailed analysis of a freshwater bivalve from Shuidonggou Locality 2, layer CL3. This layer is located c. 40 cm below layer CL2, which has yielded numerous ostrich eggshell beads. The shell is identified as the valve of a Corbicula fluminea. Data on the occurrence of this species in the Shuidonggou region during Marine Isotope Stage 3 and taphonomic analysis, conducted in the framework of this study, of a modern biocoenosis and thanatocoenosis suggest that the archeological specimen was collected at one of the numerous fossil or sub-fossil outcrops where valves of this species were available at the time of occupation of level CL3. Experimental grinding and microscopic analysis of modern shells of the same species indicate that the Shuidonggou shell was most probably ground on coarse sandstone to open a hole on its umbo, attach a thread, and use the valve as a personal ornament. Experimental engraving of freshwater shells and microscopic analysis identify an incision crossing the archaeological valve outer surface as possible deliberate engraving. Reappraisal of the site chronology in the light of available radiocarbon evidence suggests an age of at least 34-33 cal kyr BP for layer CL3. Such estimate makes the C. fluminea recovered from CL3 one of the earliest instances of personal ornamentation and the earliest example of a shell bead from China.

  18. An Early Instance of Upper Palaeolithic Personal Ornamentation from China: The Freshwater Shell Bead from Shuidonggou 2

    Science.gov (United States)

    Wei, Yi; d’Errico, Francesco; Vanhaeren, Marian; Li, Feng; Gao, Xing

    2016-01-01

    We report the discovery and present a detailed analysis of a freshwater bivalve from Shuidonggou Locality 2, layer CL3. This layer is located c. 40 cm below layer CL2, which has yielded numerous ostrich eggshell beads. The shell is identified as the valve of a Corbicula fluminea. Data on the occurrence of this species in the Shuidonggou region during Marine Isotope Stage 3 and taphonomic analysis, conducted in the framework of this study, of a modern biocoenosis and thanatocoenosis suggest that the archeological specimen was collected at one of the numerous fossil or sub-fossil outcrops where valves of this species were available at the time of occupation of level CL3. Experimental grinding and microscopic analysis of modern shells of the same species indicate that the Shuidonggou shell was most probably ground on coarse sandstone to open a hole on its umbo, attach a thread, and use the valve as a personal ornament. Experimental engraving of freshwater shells and microscopic analysis identify an incision crossing the archaeological valve outer surface as possible deliberate engraving. Reappraisal of the site chronology in the light of available radiocarbon evidence suggests an age of at least 34–33 cal kyr BP for layer CL3. Such estimate makes the C. fluminea recovered from CL3 one of the earliest instances of personal ornamentation and the earliest example of a shell bead from China. PMID:27227330

  19. Impact of extracorporeal shock waves on the human skin with cellulite: A case study of an unique instance

    Directory of Open Access Journals (Sweden)

    Christoph Kuhn

    2008-03-01

    Full Text Available Christoph Kuhn (Klinik Piano, Biel, Switzerland), Fiorenzo Angehrn (Klinik Piano, Biel, Switzerland), Ortrud Sonnabend (Pathodiagnostics, Herisau, Switzerland), Axel Voss (SwiTech Medical AG, Kreuzlingen, Switzerland). Abstract: In this case study of a unique instance, the effects of medium-energy, highly focused, extracorporeally generated shock waves (ESW) on the skin and the underlying fat tissue of a cellulite-afflicted, 50-year-old woman were investigated. The treatment consisted of four ESW applications within 21 days. Diagnostic high-resolution ultrasound (Collagenoson) was performed before and after treatment. Directly after the last ESW application, skin samples were taken for histopathological analysis from the treated and from the contra-lateral untreated area of skin with cellulite. No damage to the treated skin tissue, in particular no mechanical destruction of the subcutaneous fat, could be demonstrated by histopathological analysis. However, an astounding induction of neocollageno- and neoelastino-genesis within the scaffolding fabric of the dermis and subcutis was observed. The dermis increased in thickness, as did the scaffolding within the subcutaneous fat tissue. Optimization of critical application parameters may turn ESW into a noninvasive cellulite therapy. Keywords: cellulite, extracellular matrix, fat tissue, high-resolution ultrasound of skin, extracorporeal shock wave, histopathology, scaffolding of subcutaneous connective tissue

  20. Impact of extracorporeal shock waves on the human skin with cellulite: A case study of an unique instance

    Science.gov (United States)

    Kuhn, Christoph; Angehrn, Fiorenzo; Sonnabend, Ortrud; Voss, Axel

    2008-01-01

    In this case study of an unique instance, effects of medium-energy, high-focused extracorporeal generated shock waves (ESW) onto the skin and the underlying fat tissue of a cellulite afflicted, 50-year-old woman were investigated. The treatment consisted of four ESW applications within 21 days. Diagnostic high-resolution ultrasound (Collagenoson) was performed before and after treatment. Directly after the last ESW application, skin samples were taken for histopathological analysis from the treated and from the contra-lateral untreated area of skin with cellulite. No damage to the treated skin tissue, in particular no mechanical destruction to the subcutaneous fat, could be demonstrated by histopathological analysis. However an astounding induction of neocollageno- and neoelastino-genesis within the scaffolding fabric of the dermis and subcutis was observed. The dermis increased in thickness as well as the scaffolding within the subcutaneous fat-tissue. Optimization of critical application parameters may turn ESW into a noninvasive cellulite therapy. PMID:18488890

  1. Danish Pancreatic Cancer Database

    DEFF Research Database (Denmark)

    Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten

    2016-01-01

    AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients with microscopically verified ductal adenocarcinoma of the pancreas have been registered in the database. As of June 30, 2014, the total number of patients registered was 2,217. All data are cross-referenced with the Danish Pathology Registry and the Danish Patient Registry to ensure the completeness of registrations. MAIN VARIABLES: The main registered variables are patient demographics, performance status, diagnostic workup, histological and/or cytological diagnosis, and clinical tumor stage. The following data on treatment are registered: type of operation, date of first adjuvant, neoadjuvant, and first…

  2. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automated information systems, such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and the identification of significant differences between SQL implementations. Finally, the software developed and the administrator's and installation guides are described. (orig.) [de]

  3. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data coming from different sources, having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demand for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that application developers can use in order to choose between a database solution and a data warehouse one.

  4. The Danish Sarcoma Database

    DEFF Research Database (Denmark)

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

    AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral; diagnosis and treatment; tumor… of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring…

  5. The PROSITE database.

    Science.gov (United States)

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S; Pagni, Marco; Sigrist, Christian J A

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at http://www.expasy.org/prosite/.
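
    PROSITE patterns use a compact, documented syntax: x for any residue, [..] for a set of allowed residues, {..} for excluded residues, (n) or (n,m) for repetition, and < and > as sequence anchors. As an illustration of how such signatures can be matched in practice, the following simplified sketch converts a basic pattern into a Python regular expression; it covers only these common constructs and is not a full PROSITE parser.

```python
import re

def prosite_to_regex(pattern):
    """Convert a basic PROSITE pattern (e.g. '[AC]-x-V-x(4)-{ED}.')
    into a Python regular expression. Simplified sketch: handles x,
    [..], {..}, (n)/(n,m), and the <, > anchors only."""
    p = pattern.rstrip(".")          # patterns end with a period
    out = []
    if p.startswith("<"):            # '<' anchors at sequence start
        out.append("^")
        p = p[1:]
    anchored_end = p.endswith(">")   # '>' anchors at sequence end
    if anchored_end:
        p = p[:-1]
    for elem in p.split("-"):
        # split off a repetition suffix like (4) or (2,3)
        m = re.match(r"([^(]+)(?:\((\d+)(?:,(\d+))?\))?$", elem)
        core, lo, hi = m.group(1), m.group(2), m.group(3)
        if core == "x":
            out.append(".")                       # any residue
        elif core.startswith("["):
            out.append(core)                      # [AC] is already a class
        elif core.startswith("{"):
            out.append("[^" + core[1:-1] + "]")   # {ED} -> [^ED]
        else:
            out.append(core)                      # literal residue
        if lo:
            out.append("{%s}" % lo if not hi else "{%s,%s}" % (lo, hi))
    if anchored_end:
        out.append("$")
    return "".join(out)

print(prosite_to_regex("[AC]-x-V-x(4)-{ED}."))  # [AC].V.{4}[^ED]
```

    The example pattern is the canonical one used in the PROSITE documentation; a ScanProsite-style search would then apply the resulting expression to each sequence in turn.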

  6. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data. These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval.
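
    The idea of a long-running query maintained as a persistent view can be sketched in a few lines: a stored relation holds static sensor metadata, while each incoming sensor tuple updates the view incrementally instead of all raw data being collected centrally first. The names and the aggregation below are illustrative assumptions, not the paper's actual system.

```python
from collections import defaultdict

# Stored relation: static metadata about each sensor (hypothetical schema).
sensors = {
    "s1": {"room": "lab-A"},
    "s2": {"room": "lab-B"},
}

# Persistent view: maximum reading per room, maintained incrementally as
# sensor tuples arrive rather than recomputed from raw data each time.
view_max_per_room = defaultdict(lambda: float("-inf"))

def on_reading(sensor_id, value):
    """Update the persistent view for one incoming (sensor, value) tuple."""
    room = sensors[sensor_id]["room"]
    if value > view_max_per_room[room]:
        view_max_per_room[room] = value

for sid, v in [("s1", 19.5), ("s2", 22.1), ("s1", 21.0)]:
    on_reading(sid, v)

print(dict(view_max_per_room))  # {'lab-A': 21.0, 'lab-B': 22.1}
```

    Only the per-room maxima are kept, which is the scaling point of the abstract: the view, not the raw stream, is what persists.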

  7. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database, or operators of the database can be altered to deliver incorrect results when used in queries. This paper discusses categories of possibilities that exist to alter the application schema, with some practical examples. Two forensic environments in which a forensic investigation can take place are introduced. Arguments are provided for why these environments are important. Methods are presented for how these environments can be achieved for the application schema layer of a DBMS. A process is proposed for how forensic evidence should be extracted from the application schema layer of a DBMS. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.

  8. DistiLD Database

    DEFF Research Database (Denmark)

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina

    2012-01-01

    Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated… The chromosomes are partitioned into linkage disequilibrium (LD) blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease-associated SNPs and genes in their chromosomal context. The database is available at http://distild.jensenlab.org/.
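
    The projection of SNPs onto LD blocks described above amounts to an interval lookup: each SNP position is assigned to the block whose genomic interval contains it. A toy sketch with hypothetical block coordinates (not DistiLD's actual data):

```python
import bisect

# Hypothetical LD blocks on one chromosome as (start, end, block_id),
# sorted by start and non-overlapping.
blocks = [(0, 10_000, "LD1"), (10_001, 25_000, "LD2"), (25_001, 40_000, "LD3")]
starts = [b[0] for b in blocks]

def block_of(position):
    """Return the LD block containing a SNP position, or None."""
    i = bisect.bisect_right(starts, position) - 1  # last block starting <= pos
    if i >= 0 and blocks[i][0] <= position <= blocks[i][1]:
        return blocks[i][2]
    return None

# Projecting SNPs onto blocks makes co-location queries trivial:
snps = {"rs1": 5_200, "rs2": 12_400, "rs3": 39_000}
print({name: block_of(pos) for name, pos in snps.items()})
# {'rs1': 'LD1', 'rs2': 'LD2', 'rs3': 'LD3'}
```

    Once SNPs and genes carry a block identifier, "which disease-associated SNPs are in LD with gene X" reduces to an equality join on that identifier.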

  9. PEP725 Pan European Phenological Database

    Science.gov (United States)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". More recently, in most European countries phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations, following different observation guidelines, with the data stored in different places and in different formats. This has really hampered pan-European studies, as one has to address many network operators to get access to the data before one can start to bring them into a uniform style. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. But the deliverables of this COST action were not only the common phenological database and common observation guidelines - COST725 also helped to trigger a revival of some old networks and to establish new ones, as for instance in Sweden. At the end of the COST action in 2009, the database comprised about 8 million data in total from 15 European countries plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but also bring in phenological data from the time before 1951, develop better quality-checking procedures, and ensure open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling monitoring of vegetation development.

  10. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    Cao Ruiding; Chen Guorong; Chen Xianfeng; Zhang Yishu

    1996-01-01

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts, i.e. a nuclear power plant database, a nuclear power position database, and a nuclear power equipment database. The database holds a great deal of technical data and pictures of nuclear power, provided by engineering design units and individuals. The database can give help to the designers of nuclear power.

  11. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  12. The Danish Sarcoma Database

    Directory of Open Access Journals (Sweden)

    Jorgensen PH

    2016-10-01

    Full Text Available Peter Holmberg Jørgensen (Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus), Gunnar Schwarz Lausten (Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen), Alma B Pedersen (Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark). Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral; diagnosis and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative…

  13. C# Database Basics

    CERN Document Server

    Schmalz, Michael

    2012-01-01

    Working with data and databases in C# certainly can be daunting if you're coming from VB6, VBA, or Access. With this hands-on guide, you'll shorten the learning curve considerably as you master accessing, adding, updating, and deleting data with C# - basic skills you need if you intend to program with this language. No previous knowledge of C# is necessary. By following the examples in this book, you'll learn how to tackle several database tasks in C#, such as working with SQL Server, building data entry forms, and using data in a web service. The book's code samples will help you get started.

  14. The Danish Anaesthesia Database

    DEFF Research Database (Denmark)

    Antonsen, Kristian; Rosenstock, Charlotte Vallentin; Lundstrøm, Lars Hyldborg

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. STUDY POPULATION: The DAD was founded in 2004… direct patient-related lifestyle factors enabling a quantification of patients' comorbidity, as well as variables that are strictly related to the type, duration, and safety of the anesthesia. Data and specific data combinations can be extracted within each department in order to monitor patient treatment…

  15. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  16. The Danish Depression Database

    DEFF Research Database (Denmark)

    Videbech, Poul Bror Hemming; Deleuran, Anette

    2016-01-01

    AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients with depression, aged above 18 years, and treated in the public psychiatric hospital system were enrolled. MAIN VARIABLES: Variables include whether the patient has been thoroughly somatically examined and has been interviewed about the psychopathology by a specialist in psychiatry. The Hamilton score as well…

  17. Yucca Mountain digital database

    International Nuclear Information System (INIS)

    Daudt, C.R.; Hinze, W.J.

    1992-01-01

    This paper discusses the Yucca Mountain Digital Database (DDB) which is a digital, PC-based geographical database of geoscience-related characteristics of the proposed high-level waste (HLW) repository site of Yucca Mountain, Nevada. It was created to provide the US Nuclear Regulatory Commission's (NRC) Advisory Committee on Nuclear Waste (ACNW) and its staff with a visual perspective of geological, geophysical, and hydrological features at the Yucca Mountain site as discussed in the Department of Energy's (DOE) pre-licensing reports

  18. Database Management System

    Science.gov (United States)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  19. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus

    2012-01-01

    …underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293 clinical items and 16 genetic items was generated; 62 clinical and 7 genetic items constitute the core dataset; 23 clinical items contain longitudinal information. The database contains information on 1838 patients from 11 countries (December 2011), with or without mutations in known genes. These numbers…

  20. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.

  1. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  2. MARKS ON ART database

    DEFF Research Database (Denmark)

    van Vlierden, Marieke; Wadum, Jørgen; Wolters, Margreet

    2016-01-01

Master's marks, monograms, and quality marks are often found embossed or stamped on works of art from 1300-1700. An illustrated database of these types of marks is being established at the Netherlands Institute for Art History (RKD) in The Hague....

  3. Relational database telemanagement.

    Science.gov (United States)

    Swinney, A R

    1988-05-01

    Dallas-based Baylor Health Care System recognized the need for a way to control and track responses to their marketing programs. To meet the demands of data management and analysis, and build a useful database of current customers and future prospects, the marketing department developed a system to capture, store and manage these responses.

  4. The CEBAF Element Database

    Energy Technology Data Exchange (ETDEWEB)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
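A schema in which new element types and properties are ordinary rows rather than new columns is essentially an entity-attribute-value (EAV) design. The sketch below illustrates that idea in Python with sqlite3; the table names, the element name 'MQA1S01', and the 'Length' property are illustrative assumptions, not the actual CED schema:

```python
import sqlite3

# Minimal entity-attribute-value (EAV) layout: defining a new element
# type or property is just an INSERT, so no ALTER TABLE is ever needed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE element      (id INTEGER PRIMARY KEY, name TEXT UNIQUE,
                           type_id INTEGER REFERENCES element_type(id));
CREATE TABLE property     (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE element_value(element_id INTEGER REFERENCES element(id),
                           property_id INTEGER REFERENCES property(id),
                           value TEXT);
""")

# Add a type, an element, and a property on the fly -- data, not DDL.
conn.execute("INSERT INTO element_type(name) VALUES ('Quadrupole')")
conn.execute("INSERT INTO element(name, type_id) VALUES ('MQA1S01', 1)")
conn.execute("INSERT INTO property(name) VALUES ('Length')")
conn.execute("INSERT INTO element_value VALUES (1, 1, '0.3')")

# Query the current configuration by joining the generic tables.
row = conn.execute("""
    SELECT e.name, p.name, v.value
    FROM element_value v
    JOIN element  e ON e.id = v.element_id
    JOIN property p ON p.id = v.property_id
""").fetchone()
print(row)  # ('MQA1S01', 'Length', '0.3')
```

The versioned history that the CED gets from Oracle Workspace Manager is not modeled here; this only shows why an introspective layout needs no table changes as the machine description grows.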

  5. From database to normbase

    NARCIS (Netherlands)

    Stamper, R.K.; Liu, Kecheng; Liu, K.; Kolkman, M.; Kolkman, M.; Klarenberg, P.; Ades, Y.; van Slooten, C.; van Slooten, F.; Ades, Y.

    1991-01-01

    After the database concept, we are ready for the normbase concept. The object is to decouple organizational and technical knowledge that are now mixed inextricably together in the application programs we write today. The underlying principle is to find a way of specifying a social system as a system

  6. The International Lactuca database

    NARCIS (Netherlands)

    Treuren, van R.; Menting, F.B.J.

    2014-01-01

    The International Lactuca Database includes accessions of species belonging to the genus Lactuca, but also a few accessions belonging to related genera. Passport data can be searched on-line or downloaded. Characterization and evaluation data can be accessed via the downloading section. Requests for

  7. Oversigt over databaser

    DEFF Research Database (Denmark)

    Krogh Graversen, Brian

This is an overview of registers that can be used to illuminate the situation and developments in the social services area. The overview is the second phase of a data project whose aim is to establish a database that can form the basis for ongoing monitoring, assessment, evaluation, and research in the...

  8. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

The organisations that participated in the project consist of five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark) and one wind turbine manufacturer: Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and create tools for efficiently searching through the data to select interesting data. The project resulted in a database located at DTU, Denmark with online access through the Internet. The database contains more than 50,000 hours of wind speed measurements. A wide range of wind climates and terrain types are represented with significant amounts of time series. Data have been chosen selectively with a deliberate over-representation of high wind and complex terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim and this is realised with data from a large range of terrain types; everything from offshore to mountain, from Norway to Greece. (EHS)

  9. Harmonization of Databases

    DEFF Research Database (Denmark)

    Charlifue, Susan; Tate, Denise; Biering-Sorensen, Fin

    2016-01-01

    The objectives of this article are to (1) provide an overview of existing spinal cord injury (SCI) clinical research databases-their purposes, characteristics, and accessibility to users; and (2) present a vision for future collaborations required for cross-cutting research in SCI. This vision hi...

  10. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  11. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available List Contact us RPSD Database Description General information of database Database name RPSD Alternative name Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail: ... Database classification Structure Databases - Protein structure Organism Taxonomy Name: Or...max Taxonomy ID: 3847 Database description We have determined the three-dimensional structures of the protei

  12. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available List Contact us GETDB Database Description General information of database Database name GETDB Alternative name Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database description About 4,600 insertion lines of enhancer trap lines based on the Gal4-UAS

  13. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available List Contact us Yeast Interacting Proteins Database Database Description General information of database Database name Yeast Interacting Proteins Database Alternative name - DOI 10.18908/lsdba.nbdc00742-000 Creator C...-ken 277-8561 Tel: +81-4-7136-3989 FAX: +81-4-7136-3979 E-mail : Database classif...s cerevisiae Taxonomy ID: 4932 Database description Information on interactions and related information obta...atures and manner of utilization of database Protein-protein interaction data obtained by the comprehensive

  14. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  15. Firebird Database Backup by Serialized Database Table Dump

    OpenAIRE

    Ling, Maurice HT

    2007-01-01

This paper presents a simple data dump and load utility for Firebird databases which mimics mysqldump in MySQL. This utility, fb_dump and fb_load, for dumping and loading respectively, retrieves each database table using kinterbasdb and serializes the data using the marshal module. This utility has two advantages over the standard Firebird database backup utility, gbak. Firstly, it is able to back up and restore single database tables, which might help to recover corrupted databases. Secondly, the ...
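The per-table dump-and-load idea can be sketched as follows. Since kinterbasdb targets Firebird specifically, this illustration substitutes Python's built-in sqlite3 for the database driver, and the table and file names are made up for the example; it is a sketch of the technique, not the fb_dump implementation:

```python
import marshal
import os
import sqlite3
import tempfile

def dump_table(cursor, table, path):
    """Serialize all rows of one table to a file with marshal,
    mimicking a per-table dump in the style of fb_dump."""
    cursor.execute(f"SELECT * FROM {table}")
    rows = [list(r) for r in cursor.fetchall()]
    with open(path, "wb") as f:
        marshal.dump(rows, f)

def load_table(cursor, table, ncols, path):
    """Reinsert previously dumped rows, mimicking fb_load."""
    with open(path, "rb") as f:
        rows = marshal.load(f)
    placeholders = ",".join("?" * ncols)
    cursor.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)

# Demonstration with an in-memory database and a throwaway dump file.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])

path = os.path.join(tempfile.mkdtemp(), "t.dump")
dump_table(cur, "t", path)
cur.execute("DELETE FROM t")   # simulate losing the table's contents
load_table(cur, "t", 2, path)
print(cur.execute("SELECT * FROM t ORDER BY id").fetchall())  # [(1, 'a'), (2, 'b')]
```

Because each table is dumped to its own file, a single corrupted table can be restored independently, which is the first advantage the abstract claims for this approach over a monolithic gbak backup.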

  16. The Danish Melanoma Database

    Directory of Open Access Journals (Sweden)

Hölmich LR

    2016-10-01

Full Text Available Lisbet Rosenkrantz Hölmich,1 Siri Klausen,2 Eva Spaun,3 Grethe Schmidt,4 Dorte Gad,5 Inge Marie Svane,6,7 Henrik Schmidt,8 Henrik Frank Lorentzen,9 Else Helene Ibfelt10 1Department of Plastic Surgery, 2Department of Pathology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 3Institute of Pathology, Aarhus University Hospital, Aarhus, 4Department of Plastic and Reconstructive Surgery, Breast Surgery and Burns, Rigshospitalet – Glostrup, University of Copenhagen, Copenhagen, 5Department of Plastic Surgery, Odense University Hospital, Odense, 6Center for Cancer Immune Therapy, Department of Hematology, 7Department of Oncology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 8Department of Oncology, 9Department of Dermatology, Aarhus University Hospital, Aarhus, 10Registry Support Centre (East) – Epidemiology and Biostatistics, Research Centre for Prevention and Health, Glostrup – Rigshospitalet, University of Copenhagen, Glostrup, Denmark Aim of database: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. Study population: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive melanoma and 780 with in situ tumors were registered. The coverage is currently 93% compared with the Danish Pathology Register. Main variables: The main variables include demographic, clinical, and pathological characteristics, including Breslow’s tumor thickness, ± ulceration, mitoses, and tumor–node–metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a), results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence, nature, and treatment hereof is registered. In case of death, the cause and date

  17. REPLIKASI UNIDIRECTIONAL PADA HETEROGEN DATABASE

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technology in enterprise today can not be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we use Windows-based MS SQL Server database to Linux-based Oracle database as the goal. The research method used is prototyping where development can be done quickly and testing of working models of the...

  18. Database on aircraft accidents

    International Nuclear Information System (INIS)

    Nishio, Masahide; Koriyama, Tamio

    2013-11-01

The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating probability of aircraft crash into nuclear reactor facilities in July 2002. In response to this issue, Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting them out and developing the database of aircraft accidents for the latest 20 years to evaluate probability of aircraft crash into nuclear reactor facilities. In this report the database was revised by adding aircraft accidents in 2011 to the existing database and deleting aircraft accidents in 1991 from it, resulting in development of the revised 2012 database for the latest 20 years from 1992 to 2011. Furthermore, the flight information on commercial aircraft was also collected to develop the flight database for the latest 20 years from 1992 to 2011 to evaluate probability of aircraft crash into reactor facilities. The method for developing the database of aircraft accidents to evaluate probability of aircraft crash into reactor facilities is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2012 revised database for the latest 20 years from 1992 to 2011 shows the following. The trend of the 2012 database changes little as compared to the last year's report. (1) The data of commercial aircraft accidents is based on 'Aircraft accident investigation reports of Japan transport safety board' of Ministry of Land, Infrastructure, Transport and Tourism. The number of commercial aircraft accidents is 4 for large fixed-wing aircraft, 58 for small fixed-wing aircraft, 5 for large bladed aircraft and 99 for small bladed aircraft. The relevant accidents

  19. Danish Palliative Care Database

    DEFF Research Database (Denmark)

    Grønvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all...... patients in Denmark referred to and/or in contact with SPC after January 1, 2010. Main variables: The main variables in DPD are data about referral for patients admitted and not admitted to SPC, type of the first SPC contact, clinical and sociodemographic factors, multidisciplinary conference...... patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion: DPD is a national clinical quality database for SPC having clinically relevant variables and high data and patient completeness....

  20. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  1. Geologic Field Database

    Directory of Open Access Journals (Sweden)

    Katarina Hribernik

    2002-12-01

Full Text Available The purpose of the paper is to present the field data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia in scale 1:100,000. The database was created using MS Access software. The MS Access environment ensures its stability and effective operation despite changing, searching, and updating the data. It also enables faster and easier user-friendly access to the field data. Last but not least, in the long term, with the data transferred into the GIS environment, it will provide the basis for a sound geologic information system that will satisfy a broad spectrum of geologists’ needs.

  2. Database on aircraft accidents

    International Nuclear Information System (INIS)

    Nishio, Masahide; Koriyama, Tamio

    2012-09-01

The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating probability of aircraft crash into nuclear reactor facilities in July 2002. In response to the report, Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting them out and developing the database of aircraft accidents for the latest 20 years to evaluate probability of aircraft crash into nuclear reactor facilities. This year, the database was revised by adding aircraft accidents in 2010 to the existing database and deleting aircraft accidents in 1991 from it, resulting in development of the revised 2011 database for the latest 20 years from 1991 to 2010. Furthermore, the flight information on commercial aircraft was also collected to develop the flight database for the latest 20 years from 1991 to 2010 to evaluate probability of aircraft crash into reactor facilities. The method for developing the database of aircraft accidents to evaluate probability of aircraft crash into reactor facilities is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2011 revised database for the latest 20 years from 1991 to 2010 shows the following. The trend of the 2011 database changes little as compared to the last year's version. (1) The data of commercial aircraft accidents is based on 'Aircraft accident investigation reports of Japan transport safety board' of Ministry of Land, Infrastructure, Transport and Tourism. 4 large fixed-wing aircraft accidents, 58 small fixed-wing aircraft accidents, 5 large bladed aircraft accidents and 114 small bladed aircraft accidents occurred. The relevant accidents for evaluating

  3. THE EXTRAGALACTIC DISTANCE DATABASE

    International Nuclear Information System (INIS)

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.

  4. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

KALIMER Database is an advanced database for integrated management of Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER design database consists of the Results Database, Inter-Office Communication (IOC), the 3D CAD database, the Team Cooperation System, and Reserved Documents. The Results Database is a database of research results from phase II of Liquid Metal Reactor Design Technology Development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents is developed to manage collected data and several documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER

  5. Developing customer databases.

    Science.gov (United States)

    Rao, S K; Shenbaga, S

    2000-01-01

    There is a growing consensus among pharmaceutical companies that more product and customer-specific approaches to marketing and selling a new drug can result in substantial increases in sales. Marketers and researchers taking a proactive micro-marketing approach to identifying, profiling, and communicating with target customers are likely to facilitate such approaches and outcomes. This article provides a working framework for creating customer databases that can be effectively mined to achieve a variety of such marketing and sales force objectives.

  6. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  7. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  8. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  9. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended to be user study material; it shows the main principles of Teradata system operation and describes factors significantly affecting system performance. The following sections are based on previously acquired information which is used for analysis and ...

  10. The CYATAXO Database

    Czech Academy of Sciences Publication Activity Database

    Komárková, Jaroslava; Nedoma, Jiří

    2006-01-01

Roč. 6, - (2006), s. 49-54 ISSN 1213-3434 R&D Projects: GA AV ČR(CZ) IAA6005308; GA AV ČR(CZ) IBS6017004 Grant - others:EC(XE) EVK2-CT-1999-00026 Institutional research plan: CEZ:AV0Z60170517 Keywords : Database CYATAXO * cyanobacteria * taxonomy * water-blooms Subject RIV: DJ - Water Pollution ; Quality

  11. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

The author describes the system outline, features and operations of "Nikkan Sports Realtime Baseball Database" which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data of professional baseball games to be input as the games proceed and data updates to be executed in real time, just-in-time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and general users through NTT Dial Q2 and others.

  12. A student database

    OpenAIRE

    Kemaloğlu, Turgut

    1990-01-01

Ankara : Department of Management and Graduate School of Business Administration, Bilkent Univ., 1990. Thesis (Master's) -- Bilkent University, 1990. Includes bibliographical references. This thesis is a design of a student database system which will manage the data of university students. The aim of the program is to obtain sorted lists of students according to several parameters, to obtain the frequency of grades for the specified course, to design a suitable sheet w...

  13. LHCb Distributed Conditions Database

    CERN Document Server

    Clemencic, Marco

    2007-01-01

The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  14. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  15. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  16. MEROPS: the peptidase database.

    Science.gov (United States)

    Rawlings, Neil D; Morton, Fraser R; Kok, Chai Yin; Kong, Jun; Barrett, Alan J

    2008-01-01

    Peptidases (proteolytic enzymes or proteases), their substrates and inhibitors are of great relevance to biology, medicine and biotechnology. The MEROPS database (http://merops.sanger.ac.uk) aims to fulfil the need for an integrated source of information about these. The organizational principle of the database is a hierarchical classification in which homologous sets of peptidases and protein inhibitors are grouped into protein species, which are grouped into families and in turn grouped into clans. Important additions to the database include newly written, concise text annotations for peptidase clans and the small molecule inhibitors that are outside the scope of the standard classification; displays to show peptidase specificity compiled from our collection of known substrate cleavages; tables of peptidase-inhibitor interactions; and dynamically generated alignments of representatives of each protein species at the family level. New ways to compare peptidase and inhibitor complements between any two organisms whose genomes have been completely sequenced, or between different strains or subspecies of the same organism, have been devised.
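The hierarchical classification described above (protein species grouped into families, families grouped into clans) maps naturally onto nested records. A small illustrative sketch in Python follows; the identifiers imitate MEROPS naming style (clan PA, family S1), but the specific assignments shown are examples for illustration, not an extract from the database:

```python
# Hierarchical peptidase classification: clan -> family -> protein species,
# mirroring the MEROPS organizational principle described above.
classification = {
    "PA": {                      # clan (shared tertiary structure)
        "S1": [                  # family (sequence homology)
            "chymotrypsin A",    # protein species
            "trypsin-1",
        ],
    },
}

def clan_of(species):
    """Walk the hierarchy to find the clan and family of a protein species."""
    for clan, families in classification.items():
        for family, members in families.items():
            if species in members:
                return clan, family
    return None

print(clan_of("trypsin-1"))  # ('PA', 'S1')
```

Grouping by homology at two levels like this is what allows the database to generate family-level alignments and to compare peptidase complements between organisms clan by clan.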

  17. Curcumin Resource Database

    Science.gov (United States)

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, Curcuma longa being its principal producer. In addition, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in fields such as medicine and food technology. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository for all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents and 176 varieties of C. longa, obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked to external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. Database URL: http://www.crdb.in PMID:26220923

  18. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. [Calm (James M.), Great Falls, VA (United States)

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  19. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers and those using alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  20. Interactive database management (IDM).

    Science.gov (United States)

    Othman, R

    1995-08-01

    Interactive database management (IDM) is data editing software that provides complete data editing at the time of initial data entry, when information is 'fresh at hand'. Under the new interactive system, initial data recording is subjected to instant data editing by the interactive computer software logic. Data are immediately entered in final form to the database and are available for analysis. IDM continuously checks all variables for acceptability, completeness, and consistency. IDM does not allow form duplication. Many functions, including backups, have been automated. The interactive system can export the database to other systems. The software has been implemented for two Department of Veterans Affairs Cooperative Studies (CCSHS #5 and CSP #385), which collect data for 1400 and 1000 variables, respectively, at 28 VA medical centers. IDM is extremely user friendly and simple to operate. Researchers with no computer background can be trained quickly and easily to use the system. IDM is deployed on notebook microcomputers, making it portable for use anywhere in the hospital setting.
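    The entry-time checks described above (acceptability, completeness, consistency) can be sketched as follows. The field names, ranges, and the cross-field rule here are hypothetical illustrations of the general idea, not IDM's actual specification.

    ```python
    # Sketch of interactive, entry-time validation in the spirit of IDM.
    # Field names, ranges, and the consistency rule are hypothetical.
    def validate_record(record):
        errors = []
        # Acceptability: each value must fall within its allowed range.
        if not (0 <= record.get("age", -1) <= 120):
            errors.append("age out of range")
        # Completeness: required fields must be present and non-empty.
        for field in ("patient_id", "visit_date"):
            if not record.get(field):
                errors.append(f"missing {field}")
        # Consistency: a cross-field rule checked at entry time.
        if record.get("pregnant") and record.get("sex") == "M":
            errors.append("pregnancy inconsistent with sex")
        return errors

    clean = {"patient_id": "A1", "visit_date": "1995-08-01", "age": 52, "sex": "F"}
    print(validate_record(clean))  # []
    ```

    Because the checks run at the moment of entry, a record with problems can be corrected immediately rather than flagged in a later batch edit pass.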

  1. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  2. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  3. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us DMPD Database Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects... pathway models of transcriptional regulation and signal transduction in CSML format for dynamic simulation base

  4. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the smallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.

  5. Completeness and validity in a national clinical thyroid cancer database

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Mathiesen, Jes Sloth; Krogdahl, Annelise

    2014-01-01

    BACKGROUND: Although a prospective national clinical thyroid cancer database (DATHYRCA) has been active in Denmark since January 1, 1996, no assessment of data quality has been performed. The purpose of the study was to evaluate completeness and data validity in the Danish national clinical thyroid...... was around or above 90% in most instances. Perfect agreement on the diagnosis of thyroid carcinoma was found, both inter- and intra-observer, and κ values of selected variables showed overall good to excellent agreement. CONCLUSION: In a setup with public health insurance, personal identity numbers...

  6. An XCT image database system

    International Nuclear Information System (INIS)

    Komori, Masaru; Minato, Kotaro; Koide, Harutoshi; Hirakawa, Akina; Nakano, Yoshihisa; Itoh, Harumi; Torizuka, Kanji; Yamasaki, Tetsuo; Kuwahara, Michiyoshi.

    1984-01-01

    In this paper, an expansion of the X-ray CT (XCT) examination history database to an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alphanumeric information (locations, diagnoses and so on) for more than 15,000 cases, and for some of them we add tree-structured image data, which has the flexibility to accommodate various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  7. The Danish Fetal Medicine database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte; Kopp, Tine Iskov; Tabor, Ann

    2016-01-01

    trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units’Astraia databases to the central database via...... analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database...

  8. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal...... Cancer Group, and the performance of each hospital surgical department with respect to these indicators is reported annually. In addition, the register contains a large collection of data that provide valuable information on the influence of comorbidity and lifestyle factors on disease outcome...

  9. Usability in Scientific Databases

    Directory of Open Access Journals (Sweden)

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects the users' performance and their job satisfaction when working with a machine. Therefore, usability is a very important aspect which must be considered in the process of a system development. The paper presents several numerical data related to the history of the scientific research of the usability of information systems, as it is viewed in the information provided by three important scientific databases, Science Direct, ACM Digital Library and IEEE Xplore Digital Library, at different queries related to this field.

  10. EMU Lessons Learned Database

    Science.gov (United States)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool which will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which have information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but someone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. FIARs related to the EMU have been completed in the Excel version, and focus has now shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  11. Social Capital Database

    DEFF Research Database (Denmark)

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

      This report has two purposes: The first purpose is to present our 4-page questionnaire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. It is also easy to apply, as discussed. The second purpose ...... is to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys....

  12. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database License License to Use This Database Last updated: 2014/02/04 You may use this database in compliance with the terms and conditions of the license described below.

  13. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  14. Web Database Schema Identification through Simple Query Interface

    Science.gov (United States)

    Lin, Ling; Zhou, Lizhu

    Web databases provide different types of query interfaces to access the data records stored in the backend databases. While most existing works exploit a complex query interface with multiple input fields to perform schema identification of Web databases, little attention has been paid to how to identify the schema of Web databases through a simple query interface (SQI), which has only one single query text input field. This paper proposes a new method of instance-based query probing to identify a WDB's interface and result schema for SQI. The interface schema identification problem is defined as generating the full-condition query of SQI, and a novel query probing strategy is proposed. The result schema is also identified based on the result webpages of SQI's full-condition query, and an extended identification of the non-query attributes is proposed to improve the attribute recall rate. Experimental results on web databases of online shopping for books, movies and mobile phones show that our method is effective and efficient.

  15. Curcumin Resource Database.

    Science.gov (United States)

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, Curcuma longa being its principal producer. In addition, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in fields such as medicine and food technology. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository for all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents and 176 varieties of C. longa, obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked to external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. © The Author(s) 2015. Published by Oxford University Press.

  16. Asbestos Exposure Assessment Database

    Science.gov (United States)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data include Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data have been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  17. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.

  18. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support enormous data volumes beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as the central issues in this area.

  19. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  20. Categorical database generalization in GIS

    NARCIS (Netherlands)

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model,

  1. The USAID Environmental Compliance Database

    Data.gov (United States)

    US Agency for International Development — The Environmental Compliance Database is a record of environmental compliance submissions with their outcomes. Documents in the database can be found by visiting the...

  2. Mobile Source Observation Database (MSOD)

    Science.gov (United States)

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  3. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  4. Household Products Database: Personal Care

    Science.gov (United States)

    Information is extracted from Consumer Product Information Database ©2001-2017 by DeLima Associates. All rights reserved.

  5. Firebird Database Backup by Serialized Database Table Dump

    Directory of Open Access Journals (Sweden)

    2008-06-01

    Full Text Available This paper presents a simple data dump and load utility for Firebird databases which mimics mysqldump in MySQL. The utility consists of fb_dump and fb_load, for dumping and loading respectively; it retrieves each database table using kinterbasdb and serializes the data using the marshal module. This utility has two advantages over the standard Firebird database backup utility, gbak. Firstly, it is able to back up and restore single database tables, which might help to recover corrupted databases. Secondly, the output is in a text-coded format (from the marshal module), making it more resilient than a compressed text backup, as in the case of using gbak.
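    The per-table dump-and-load approach can be sketched as follows. This is a minimal illustration of serializing rows with the standard-library marshal module; it uses sqlite3 as a stand-in connection because it follows the same DB-API shape as kinterbasdb, whereas fb_dump itself targets Firebird. The table and file names are arbitrary examples.

    ```python
    import marshal
    import sqlite3

    def dump_table(conn, table, path):
        """Serialize all rows of one table to a file via marshal."""
        cur = conn.cursor()
        cur.execute(f"SELECT * FROM {table}")  # table name assumed trusted
        rows = [tuple(r) for r in cur.fetchall()]
        with open(path, "wb") as f:
            marshal.dump(rows, f)

    def load_table(conn, table, path, ncols):
        """Re-insert previously dumped rows into an existing table."""
        with open(path, "rb") as f:
            rows = marshal.load(f)
        placeholders = ", ".join("?" * ncols)
        conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
        conn.commit()

    # Round-trip demonstration with an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
    dump_table(conn, "t", "t.dump")
    conn.execute("DELETE FROM t")
    load_table(conn, "t", "t.dump", ncols=2)
    print(conn.execute("SELECT * FROM t").fetchall())  # [(1, 'a'), (2, 'b')]
    ```

    Because each table is dumped to its own file, a single corrupted table can be restored in isolation, which is the first advantage the paper claims over a monolithic gbak backup.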

  6. Hydrogen Leak Detection Sensor Database

    Science.gov (United States)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  7. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that is capable of replicating databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MSSQL database server running on Windows. The data will be replicated to MyS...

  8. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    Akishina, E.P.; Aleksandrov, E.I.; Aleksandrov, I.N.; Filozova, I.A.; Ivanov, V.V.; Zrelov, P.V.; Friese, V.; Mueller, W.

    2014-01-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMSs utilized in physics experiments, including relational and object-oriented DBMSs as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMSs for their development, as well as use cases for the considered databases are suggested.

  9. Database security in the cloud

    OpenAIRE

    Sakhi, Imal

    2012-01-01

    The aim of the thesis is to get an overview of the database services available in cloud computing environment, investigate the security risks associated with it and propose the possible countermeasures to minimize the risks. The thesis also analyzes two cloud database service providers namely; Amazon RDS and Xeround. The reason behind choosing these two providers is because they are currently amongst the leading cloud database providers and both provide relational cloud databases which makes ...

  10. Modeling Punching Shear Capacity of Fiber-Reinforced Polymer Concrete Slabs: A Comparative Study of Instance-Based and Neural Network Learning

    Directory of Open Access Journals (Sweden)

    Nhat-Duc Hoang

    2017-01-01

    Full Text Available This study investigates adaptive-weighted instance-based learning for the prediction of the ultimate punching shear capacity (UPSC) of fiber-reinforced polymer (FRP) reinforced slabs. The concept of the new method is to employ Differential Evolution to construct an adaptive instance-based regression model. The performance of the proposed model is compared to those of an Artificial Neural Network (ANN) and traditional formula-based methods. A dataset which contains the testing results of FRP-reinforced concrete slabs has been collected to establish and verify the new approach. This study shows that the investigated instance-based regression model is capable of delivering prediction results which are far more accurate than traditional formulas and very competitive with the black-box approach of ANN. Furthermore, the proposed adaptive-weighted instance-based learning provides a means for quantifying the relevancy of each factor used for the prediction of UPSC of FRP-reinforced slabs.
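    The core idea, an instance-based (distance-weighted nearest-neighbor) regressor whose per-feature weights are tuned by Differential Evolution, can be sketched roughly as follows. This is a minimal illustration on synthetic data: the DE settings, the weighting scheme, and the dataset are assumptions for the sketch, not the authors' exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def predict(weights, X_train, y_train, X_query, k=4):
        """Distance-weighted k-NN regression with per-feature weights."""
        preds = []
        for q in X_query:
            d = np.sqrt(((X_train - q) ** 2 * weights).sum(axis=1))
            idx = np.argsort(d)[1:k + 1]   # skip the query itself (leave-one-out use)
            w = 1.0 / (d[idx] + 1e-9)      # closer instances count more
            preds.append((w * y_train[idx]).sum() / w.sum())
        return np.array(preds)

    def loss(weights, X, y):
        """Leave-one-out error used as the DE fitness."""
        return ((predict(weights, X, y, X) - y) ** 2).mean()

    # Synthetic data: the target depends on feature 0 only, so DE should
    # learn to up-weight feature 0 relative to the pure-noise feature 1.
    X = rng.uniform(0, 1, size=(60, 2))
    y = 3.0 * X[:, 0]

    # Tiny Differential Evolution (rand/1/bin) over the two feature weights.
    pop = rng.uniform(0, 1, size=(20, 2))
    for _ in range(40):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.clip(a + 0.8 * (b - c), 1e-6, None)        # mutation
            trial = np.where(rng.random(2) < 0.9, trial, pop[i])  # crossover
            if loss(trial, X, y) < loss(pop[i], X, y):            # greedy selection
                pop[i] = trial
    best = min(pop, key=lambda w: loss(w, X, y))
    print("learned feature weights:", best)
    ```

    The learned weight vector doubles as a relevancy measure for the input factors, which is the interpretability benefit the abstract highlights over a black-box ANN.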

  11. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  12. Database Programming Languages

    DEFF Research Database (Denmark)

    This volume contains the proceedings of the 11th International Symposium on Database Programming Languages (DBPL 2007), held in Vienna, Austria, on September 23-24, 2007. DBPL 2007 was one of 15 meetings co-located with VLDB (the International Conference on Very Large Data Bases). DBPL continues...... to present the very best work at the intersection of database and programming language research. The proceedings include a paper based on the invited talk by Wenfei Fan and the 16 contributed papers that were selected by at least three members of the program committee. In addition, the program committee sought...... the opinions of additional referees selected because of their expertise on particular topics. The final selection of papers was made during the last week of July. We would like to thank all of the authors who submitted papers to the conference, and the members of the program committee for their excellent work...

  13. The Danish Testicular Cancer database

    DEFF Research Database (Denmark)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel

    2016-01-01

    AIM: The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC......) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. STUDY POPULATION: All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data...... collection has been performed from 1984 to 2007 and from 2013 onward, respectively. MAIN VARIABLES AND DESCRIPTIVE DATA: The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function...

  14. The Danish Testicular Cancer database.

    Science.gov (United States)

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob

    2016-01-01

    The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Both samples related to DaTeCa and DMCG DaTeCa database are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.

  15. Conditions Database for the Belle II Experiment

    Science.gov (United States)

    Wood, L.; Elsethagen, T.; Schram, M.; Stephan, E.

    2017-10-01

    The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. The Belle II conditions database was designed with a straightforward goal: make it as easily maintainable as possible. To this end, HEP-specific software tools were avoided as much as possible and industry standard tools used instead. HTTP REST services were selected as the application interface, which provide a high-level interface to users through the use of standard libraries such as curl. The application interface itself is written in Java and runs in an embedded Payara-Micro Java EE application server. Scalability at the application interface is provided by use of Hazelcast, an open source In-Memory Data Grid (IMDG) providing distributed in-memory computing and supporting the creation and clustering of new application interface instances as demand increases. The IMDG provides fast and efficient access to conditions data via in-memory caching.
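
    The point of choosing plain REST over HEP-specific tooling is that a client needs nothing beyond standard HTTP and JSON libraries. As a rough illustration (the base URL and resource layout below are hypothetical, not the actual Belle II API):

    ```python
    import json
    from urllib.parse import urljoin

    BASE = "https://conditions.example.org/rest/"   # hypothetical endpoint

    def payload_url(global_tag, run):
        # Build a REST resource path for the payloads valid in a given run;
        # the real service's path layout may differ.
        return urljoin(BASE, f"globalTags/{global_tag}/runs/{run}/payloads")

    def parse_payloads(body):
        # Decode a JSON list of payload descriptors returned by the service.
        return [(p["name"], p["revision"]) for p in json.loads(body)]
    ```

    The same URL could equally be fetched with curl, which is exactly the low-barrier access mode the abstract describes.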

  16. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  17. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us ConfC Database Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Databases - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database description This database extracts dynamic information from the protein structures being obtained now and consists of three kinds of sub-databases, that is, 1) evolutional stru

  18. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  19. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    Full Text Available The Cadastral Data Model has been developed as a part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to relevant standards and specifications in the field of geoinformation (GI) adapted by international organisations for standardisation under the competence of GI (ISO TC211 and OpenGIS) and its implementations. The main guidelines during the project have been object-oriented conceptual modelling of the updated users' requests and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at class level. The next step was the UML technical model, which was developed from the UML conceptual model. The technical model integrates different UML schemas in one united schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and then the XML schema was transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for the transfer and storage of cadastral features into the database.

  20. Foundations of RDF Databases

    Science.gov (United States)

    Arenas, Marcelo; Gutierrez, Claudio; Pérez, Jorge

    The goal of this paper is to give an overview of the basics of the theory of RDF databases. We provide a formal definition of RDF that includes the features that distinguish this model from other graph data models. We then move into the fundamental issue of querying RDF data. We start by considering the RDF query language SPARQL, which is a W3C Recommendation since January 2008. We provide an algebraic syntax and a compositional semantics for this language, study the complexity of the evaluation problem for different fragments of SPARQL, and consider the problem of optimizing the evaluation of SPARQL queries, showing that a natural fragment of this language has some good properties in this respect. We furthermore study the expressive power of SPARQL, by comparing it with some well-known query languages such as relational algebra. We conclude by considering the issue of querying RDF data in the presence of RDFS vocabulary. In particular, we present a recently proposed extension of SPARQL with navigational capabilities.
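
    The compositional semantics the authors give for SPARQL basic graph patterns, namely sets of solution mappings joined pattern by pattern, can be mimicked in a few lines over a graph stored as a set of triples. This is a toy evaluator for illustration, not the full SPARQL algebra:

    ```python
    def match(pattern, graph):
        """Evaluate a basic graph pattern: a list of triple patterns in which
        strings starting with '?' are variables. Returns solution mappings."""
        def unify(triple, pat, binding):
            b = dict(binding)
            for term, p in zip(triple, pat):
                if p.startswith("?"):
                    if p in b and b[p] != term:
                        return None        # conflicting binding
                    b[p] = term
                elif p != term:
                    return None            # constant mismatch
            return b

        solutions = [{}]                   # start from the empty mapping
        for pat in pattern:                # join in one triple pattern at a time
            solutions = [b2 for b in solutions for t in graph
                         if (b2 := unify(t, pat, b)) is not None]
        return solutions
    ```

    Matching the pattern `{ ?x knows ?y . ?y knows ?z }` against a small graph yields the compatible mappings, which is exactly the join semantics the paper formalizes.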

  1. Integrating forensic information in a crime intelligence database.

    Science.gov (United States)

    Rossy, Quentin; Ioset, Sylvain; Dessimoz, Damien; Ribaux, Olivier

    2013-07-10

    Since 2008, intelligence units of six states of the western part of Switzerland have been sharing a common database for the analysis of high-volume crimes. On a daily basis, events reported to the police are analysed, filtered and classified to detect crime repetitions and interpret the crime environment. Several forensic outcomes are integrated in the system, such as matches of traces with persons, and links between scenes detected by the comparison of forensic case data. Systematic procedures have been settled to integrate links assumed mainly through DNA profiles, shoemark patterns and images. A statistical outlook on a retrospective dataset of series from 2009 to 2011 of the database informs, for instance, on the number of repetitions detected or confirmed and increased by forensic case data. The time needed to obtain forensic intelligence, with regard to the type of marks treated, is seen as a critical issue. Furthermore, the underlying integration process of forensic intelligence into the crime intelligence database raised several difficulties regarding the acquisition of data and the models used in the forensic databases. Solutions found and adopted operational procedures are described and discussed. This process forms the basis of many other research efforts aimed at developing forensic intelligence models. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Afghan Air Force University: Contract Requirements Were Generally Met, but Instances of Non Compliance, Poor Workmanship, and Inadequate Maintenance Need to Be Addressed

    Science.gov (United States)

    2016-03-01

    USACE modified the contract price to account for these lower quality materials. With regards to the shower valves and head assemblies, the... shop drawings, material submissions, and test results linked to each definable feature of work. USACE, in turn, failed to adequately monitor and... instances in which substitute construction materials were used without proper approval, resulting in at least $80,000 in potentially inappropriate cost

  3. Replikasi Unidirectional pada Heterogen Database

    Directory of Open Access Journals (Sweden)

    Hendro Nindito

    2013-12-01

    Full Text Available The use of diverse database technologies in enterprises today cannot be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, in which development can be done quickly and testing of working models of the interaction process is done through repeated iterations. This research shows that database replication technology using Oracle GoldenGate can be applied in heterogeneous environments in real time as well.
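
    The apply side of such unidirectional replication reduces to replaying an ordered change stream, captured at the source, onto the target. The sketch below is a toy in-memory apply loop under that assumption; it says nothing about GoldenGate's actual redo-log-based capture:

    ```python
    def apply_changes(changes, target):
        """Replay an ordered change stream onto a target table,
        modelled here as a dict keyed by primary key."""
        for op, key, row in changes:
            if op in ("INSERT", "UPDATE"):
                target[key] = row          # upsert keeps the apply idempotent
            elif op == "DELETE":
                target.pop(key, None)      # tolerate deletes of missing rows
        return target
    ```

    Treating INSERT and UPDATE identically (an upsert) is a common design choice on the apply side: replaying the same stream twice then leaves the target in the same state.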

  4. The IRPVM-DB database

    International Nuclear Information System (INIS)

    Davies, L.M.; Gillemot, F.; Yanko, L.; Lyssakov, V.

    1997-01-01

    The IRPVM-DB (International Reactor Pressure Vessel Material Database), initiated by the IAEA IWG LMNPP, is going to collect the available surveillance and research data worldwide on RPV material ageing. This paper presents the purpose of the database; it summarizes the type and the relationship of the data included; it gives information about data access and protection; and finally, it summarizes the state of the art of the database. (author). 1 ref., 2 figs

  5. Inorganic Crystal Structure Database (ICSD)

    Science.gov (United States)

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe(FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  6. GOTTCHA Database, Version 1

    Energy Technology Data Exchange (ETDEWEB)

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- the most computationally intensive phase, as an all-vs-all comparison is made -- the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection is screened for. If such an overlap exists, the region which intersects is
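
    The three phases can be illustrated with ordinary set operations on k-mers. This is a toy with short strings and a small k; the real tool uses 24-mers, prefix indexing and bit vectors to make the all-vs-all comparison tractable:

    ```python
    def kmers(seq, k):
        """Phase I: decompose a sequence into its set of overlapping k-mers."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def signatures(target, others, k):
        """Phases II-III: intersect to find the k-mers shared with other
        genomes, then subtract them to keep only the signature regions
        unique to the target genome."""
        shared = set()
        for genome in others:
            shared |= kmers(target, k) & kmers(genome, k)   # intersection
        return kmers(target, k) - shared                    # subtraction
    ```

    Reads that hit only these signature k-mers can then be attributed to the target taxon, which is why unique regions rather than whole genomes drive the classification.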

  7. The Stanford Microarray Database

    Science.gov (United States)

    Sherlock, Gavin; Hernandez-Boussard, Tina; Kasarskis, Andrew; Binkley, Gail; Matese, John C.; Dwight, Selina S.; Kaloper, Miroslava; Weng, Shuai; Jin, Heng; Ball, Catherine A.; Eisen, Michael B.; Spellman, Paul T.; Brown, Patrick O.; Botstein, David; Cherry, J. Michael

    2001-01-01

    The Stanford Microarray Database (SMD) stores raw and normalized data from microarray experiments, and provides web interfaces for researchers to retrieve, analyze and visualize their data. The two immediate goals for SMD are to serve as a storage site for microarray data from ongoing research at Stanford University, and to facilitate the public dissemination of that data once published, or released by the researcher. Of paramount importance is the connection of microarray data with the biological data that pertains to the DNA deposited on the microarray (genes, clones etc.). SMD makes use of many public resources to connect expression information to the relevant biology, including SGD [Ball,C.A., Dolinski,K., Dwight,S.S., Harris,M.A., Issel-Tarver,L., Kasarskis,A., Scafe,C.R., Sherlock,G., Binkley,G., Jin,H. et al. (2000) Nucleic Acids Res., 28, 77–80], YPD and WormPD [Costanzo,M.C., Hogan,J.D., Cusick,M.E., Davis,B.P., Fancher,A.M., Hodges,P.E., Kondu,P., Lengieza,C., Lew-Smith,J.E., Lingner,C. et al. (2000) Nucleic Acids Res., 28, 73–76], Unigene [Wheeler,D.L., Chappey,C., Lash,A.E., Leipe,D.D., Madden,T.L., Schuler,G.D., Tatusova,T.A. and Rapp,B.A. (2000) Nucleic Acids Res., 28, 10–14], dbEST [Boguski,M.S., Lowe,T.M. and Tolstoshev,C.M. (1993) Nature Genet., 4, 332–333] and SWISS-PROT [Bairoch,A. and Apweiler,R. (2000) Nucleic Acids Res., 28, 45–48] and can be accessed at http://genome-www.stanford.edu/microarray. PMID:11125075

  8. The flux database concerted action

    International Nuclear Information System (INIS)

    Mitchell, N.G.; Donnelly, C.E.

    1999-01-01

    This paper summarizes the background to the UIR action on the development of a flux database for radionuclide transfer in soil-plant systems. The action is discussed in terms of the objectives, the deliverables and the progress achieved so far by the flux database working group. The paper describes the background to the current initiative and outlines specific features of the database and supporting documentation. Particular emphasis is placed on the proforma used for data entry, on the database help file and on the approach adopted to indicate data quality. Refs. 3 (author)

  9. Cloud database development and management

    CERN Document Server

    Chao, Lee

    2013-01-01

    Nowadays, cloud computing is almost everywhere. However, one can hardly find a textbook that utilizes cloud computing for teaching database and application development. This cloud-based database development book teaches both the theory and practice with step-by-step instructions and examples. This book helps readers to set up a cloud computing environment for teaching and learning database systems. The book will cover adequate conceptual content for students and IT professionals to gain necessary knowledge and hands-on skills to set up cloud based database systems.

  10. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students theoretical database knowledge as well as practical experience with design...... and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  11. Specialized microbial databases for inductive exploration of microbial genome sequences

    Directory of Open Access Journals (Sweden)

    Cabau Cédric

    2005-02-01

    Full Text Available Abstract Background The enormous amount of genome sequence data calls for user-oriented databases to manage sequences and annotations. Queries must include search tools permitting function identification through exploration of related objects. Methods The GenoList package for collecting and mining microbial genome databases has been rewritten using MySQL as the database management system. Functions that were not available in MySQL, such as nested subqueries, have been implemented. Results Inductive reasoning in the study of genomes starts from "islands of knowledge", centered around genes with some known background. With this concept of "neighborhood" in mind, a modified version of the GenoList structure has been used for organizing sequence data from prokaryotic genomes of particular interest in China. GenoChore http://bioinfo.hku.hk/genochore.html, a set of 17 specialized end-user-oriented microbial databases (including one instance of Microsporidia, Encephalitozoon cuniculi, a member of Eukarya) has been made publicly available. These databases allow the user to browse genome sequence and annotation data using standard queries. In addition they provide a weekly update of searches against the worldwide protein sequence data libraries, allowing one to monitor annotation updates on genes of interest. Finally, they allow users to search for patterns in DNA or protein sequences, taking into account a clustering of genes into formal operons, as well as providing extra facilities to query sequences using predefined sequence patterns. Conclusion This growing set of specialized microbial databases organizes data created by the first Chinese bacterial genome programs (ThermaList, Thermoanaerobacter tengcongensis; LeptoList, with two different genomes of Leptospira interrogans; and SepiList, Staphylococcus epidermidis), associated with related organisms for comparison.
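
    The kind of nested subquery the authors had to implement on top of MySQL is routine SQL today. The sketch below uses Python's built-in sqlite3 with a made-up two-column gene table (the GenoList schema is far richer) just to show the shape of such a query, here finding genes annotated in more than one genome:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE gene (name TEXT, genome TEXT);
        INSERT INTO gene VALUES
            ('dnaA', 'Thermoanaerobacter'),
            ('recA', 'Thermoanaerobacter'),
            ('dnaA', 'Staphylococcus');
    """)
    # Nested subquery: keep only genes that occur in more than one genome.
    rows = conn.execute("""
        SELECT DISTINCT name FROM gene
        WHERE name IN (SELECT name FROM gene
                       GROUP BY name
                       HAVING COUNT(DISTINCT genome) > 1)
    """).fetchall()
    ```

    Queries of this shape are what make the "neighborhood" exploration described above expressible in a single statement rather than in application code.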

  12. Low-cost mobile video-based iris recognition for small databases

    Science.gov (United States)

    Thomas, N. Luke; Du, Yingzi; Muttineni, Sriharsha; Mang, Shing; Sran, Dylan

    2009-05-01

    This paper presents a low-cost method for providing biometric verification for applications that do not require large database sizes. Existing portable iris recognition systems are typically self-contained and expensive. For some applications, low cost is more important than extremely discerning matching ability. In these instances, the proposed system could be implemented at low cost, with adequate matching performance for verification. Additionally, the proposed system could be used in conjunction with any image based biometric identification system. A prototype system was developed and tested on a small database, with promising preliminary results.

  13. Aging management database

    International Nuclear Information System (INIS)

    Vidican, Dan

    2003-01-01

    As operation time is accumulated, the overall safety and performance of an NPP tend to decrease. The reasons for potential non-availability of the Structures, Systems and Components (SSC) in operation are various, but in different ways they represent the end result of ageing phenomena. In order to understand the ageing phenomena and to be able to take adequate countermeasures, it is necessary to accumulate a large amount of information, from worldwide experience and also from one's own plant. These data have to be organized in a systematic form, easy to retrieve and use. General requirements and structure of an Ageing Database: activities related to ageing evaluation have to allow: - identification and evaluation of degradation phenomena, potential malfunction and failure modes of the plant's typical components; - trend analyses (on selected critical components), prediction of future performance and the remaining service life. To perform these activities, it is necessary to have information on the behaviour of similar components in different NPPs (in different environments and different operating conditions) and also the results from different pilot studies. The knowledge of worldwide experience is worthwhile. Also, it is necessary to know very well the operating and environmental conditions in one's own NPP and to analyze in detail the failure mode and root cause for components removed from the plant due to extended degradation. Based on the above aspects, a proposal for the structure of an Ageing Database is presented. It has three main sections: - Section A: General knowledge about ageing phenomena. It contains all the information collected based on worldwide experience. It could have a general part with raw information and a synthetic one, structured on typical components (if possible by different manufacturers). The synthetic part has to consider different ageing aspects and different monitoring and evaluation methods (e.g. component, function, environment condition, specific

  14. The Danish Cardiac Rehabilitation Database

    DEFF Research Database (Denmark)

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne

    2016-01-01

    AIM OF DATABASE: The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). STUDY POPULATION: Hospitalized patients with CHD with stenosis on coronary angiography treated with percutane...

  15. Alternative Databases for Anthropology Searching.

    Science.gov (United States)

    Brody, Fern; Lambert, Maureen

    1984-01-01

    Examines online search results of sample questions in several databases covering linguistics, cultural anthropology, and physical anthropology in order to determine if and where any overlap in results might occur, and which files have the greatest number of relevant hits. Search results by database are given for each subject area. (EJS)

  16. Improvement of irradiation effects database

    International Nuclear Information System (INIS)

    Yuan Guohuo; Xu Xi; Jia Wenhai

    2003-01-01

    The design method of the irradiation effects database is described in this paper. The structure of the irradiation effects database has been refined with Delphi; query and calculation of the data have been completed; and printing and output of data report forms have been implemented. Therefore, a data storage platform for analyzing the reliability and vulnerability of radiation-hardened components and systems under irradiation effects is offered

  17. The ENZYME database in 2000.

    Science.gov (United States)

    Bairoch, A

    2000-01-01

    The ENZYME database is a repository of information related to the nomenclature of enzymes. In recent years it has become an indispensable resource for the development of metabolic databases. The current version contains information on 3705 enzymes. It is available through the ExPASy WWW server (http://www.expasy.ch/enzyme/).

  18. Wind turbine reliability database update.

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  19. Fuel element database: developer handbook

    International Nuclear Information System (INIS)

    Dragicevic, M.

    2004-09-01

    The fuel element database developed for the Atomic Institute of the Austrian Universities is described. The software uses standards such as HTML, PHP, and SQL. The standard installation relies on freely available software packages such as the MySQL database, the PHP interpreter, the Apache web server, and JavaScript. (nevyjel)

  20. Content independence in multimedia databases

    NARCIS (Netherlands)

    A.P. de Vries (Arjen)

    2001-01-01

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. This article investigates the role of data management in multimedia digital libraries, and its implications for

  1. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.


  2. Application of graph database for analytical tasks

    OpenAIRE

    Günzl, Richard

    2014-01-01

    This diploma thesis is about graph databases, which belong to the category of database systems known as NoSQL databases, although graph databases go beyond the NoSQL label. Graph databases are useful in many cases thanks to their native storage of the interconnections between data, which brings advantageous properties in comparison with traditional relational database systems, especially in querying. The main goal of the thesis is: to describe the principles, properties and advantages of graph databases; to desi...
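
    The querying advantage mentioned above comes from storing each node's connections with the node itself, so a multi-hop query is a chain of direct lookups rather than repeated relational joins. A minimal Python sketch of the idea (the adjacency-dict representation and the toy social graph are illustrative, not taken from the thesis):

```python
from collections import deque

def neighbors_within(graph, start, max_hops):
    """Breadth-first traversal over a native adjacency structure: each hop
    is a direct lookup on the node, with no join over a global edge table."""
    seen = {start: 0}            # node -> number of hops from start
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue             # do not expand beyond the hop limit
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return {n for n, hops in seen.items() if 0 < hops <= max_hops}

# Toy social graph: edges are stored with the node, as a graph database does.
graph = {
    "ann": ["bob"],
    "bob": ["cat", "dan"],
    "cat": ["ann"],
}
```

    In a relational schema the same friend-of-friend query would join the edge table with itself once per hop; here each extra hop is only another round of dictionary lookups.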

  3. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements extending from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The 90Sr, 137Cs, and 239,240Pu concentration data cover the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous: more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of the data is from the South Pacific. The HAM database will allow these radionuclides to be used as significant chemical tracers for oceanographic study as well as for assessing the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  4. The Danish Fetal Medicine Database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte K; Petersen, Olav B; Jørgensen, Finn S

    2015-01-01

    OBJECTIVE: To describe the establishment and organization of the Danish Fetal Medicine Database and to report national results of first-trimester combined screening for trisomy 21 in the 5-year period 2008-2012. DESIGN: National register study using prospectively collected first-trimester screening...... data from the Danish Fetal Medicine Database. POPULATION: Pregnant women in Denmark undergoing first-trimester screening for trisomy 21. METHODS: Data on maternal characteristics, biochemical and ultrasonic markers are continuously sent electronically from local fetal medicine databases (Astraia Gmbh......%. The national screen-positive rate increased from 3.6% in 2008 to 4.7% in 2012. The national detection rate of trisomy 21 was reported to be between 82 and 90% in the 5-year period. CONCLUSION: A national fetal medicine database has been successfully established in Denmark. Results from the database have shown...

  5. The magnet components database system

    International Nuclear Information System (INIS)

    Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

    The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. The data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 9 figs

  6. Database on wind characteristics. Contents of database bank

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, K.S.

    2001-01-01

    The main objective of IEA R&D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality-controlled measured wind field time series observed in a wide range...... of environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R&D Annex XVII falls in three separate parts. Part one deals with the overall structure and philosophy behind the database, part two accounts in detail...... for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the second part of the Annex XVII reporting. Basically, the database bank contains three categories of data, i.e. i) high sampled wind...

  7. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

    The YH database is a server that allows the user to easily browse and download data from the first Asian diploid genome. The aim of this platform is to facilitate the study of this Asian genome and to enable improved organization and presentation of large-scale personal genome data. Powered by GBrowse......, we illustrate here the genome sequences, SNPs, and sequencing reads in the MapView. The relationships between phenotype and genotype can be searched by location, dbSNP ID, HGMD ID, gene symbol and disease name. A BLAST web service is also provided for the purpose of aligning query sequences against the YH...... genome consensus. The YH database is currently one of only three personal genome databases, organizing the original data and analysis results in a user-friendly interface, in an endeavor to achieve fundamental goals for establishing personal medicine. The database is available at http://yh.genomics.org.cn....

  8. Unifying Memory and Database Transactions

    Science.gov (United States)

    Dias, Ricardo J.; Lourenço, João M.

    Software Transactional Memory is a concurrency control technique gaining increasing popularity, as it provides high-level concurrency control constructs and eases the development of highly multi-threaded applications. But this ease comes at the expense of restricting the operations that can be executed within a memory transaction, and operations such as terminal and file I/O are either not allowed or incur serious performance penalties. Database I/O is another example of an operation that usually is not allowed within a memory transaction. This paper proposes to combine memory and database transactions in a single unified model, benefiting from the ACID properties of the database transactions and from the speed of main-memory data processing. The new unified model covers, without differentiating, both memory and database operations. Thus, users are allowed to freely intertwine memory and database accesses within the same transaction, knowing that the memory and database contents will always remain consistent and that the transaction will atomically abort or commit the operations in both memory and database. This approach makes it possible to increase the granularity of the in-memory atomic actions and hence simplifies reasoning about them.
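
    The unified model described in the abstract can be illustrated with a small, hedged sketch: a transaction object that buffers in-memory writes and publishes them only when the enclosing database transaction commits, so both sides commit or abort together. This is a single-threaded Python toy using sqlite3; the class name, the dict-based "memory", and the account schema are invented for illustration and are not the paper's actual STM-based implementation.

```python
import sqlite3

class UnifiedTransaction:
    """Toy unified memory+database transaction: in-memory writes are
    buffered and published only if the database transaction commits."""

    def __init__(self, conn, memory):
        self.conn = conn        # open sqlite3 connection (autocommit mode)
        self.memory = memory    # shared in-memory store (a plain dict here)
        self._buffer = {}       # staged in-memory writes, not yet visible

    def set(self, key, value):
        self._buffer[key] = value

    def execute(self, sql, params=()):
        return self.conn.execute(sql, params)

    def __enter__(self):
        self.conn.execute("BEGIN")
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.conn.commit()                 # database side committed...
            self.memory.update(self._buffer)   # ...so publish memory writes
        else:
            self.conn.rollback()               # abort both sides together
            self._buffer.clear()
        return False  # re-raise any exception

# isolation_level=None puts sqlite3 in autocommit mode, so the explicit
# BEGIN above controls the transaction boundaries.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
memory = {}

with UnifiedTransaction(conn, memory) as tx:          # commits both sides
    tx.execute("INSERT INTO accounts VALUES ('alice', 100)")
    tx.set("alice_balance", 100)

try:
    with UnifiedTransaction(conn, memory) as tx:      # aborts both sides
        tx.execute("INSERT INTO accounts VALUES ('bob', 50)")
        tx.set("bob_balance", 50)
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass
```

    After the failed transaction, neither the row for 'bob' nor the 'bob_balance' entry is visible: both sides aborted atomically, which is the consistency guarantee the unified model aims for.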

  9. Siamese Instance Search for Tracking

    NARCIS (Netherlands)

    Tao, R.; Gavves, E.; Smeulders, A.W.M.

    2016-01-01

    In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online

  10. Instance-Based Question Answering

    Science.gov (United States)

    2006-12-01

    included factoid questions grouped around a set of target entities. For example, for the target entity “Franz Kafka”, associated questions included: “Where...from Franz Schubert. A year later he did on Dec 5th.”). Depending on the task, an answer extractor may identify very small, factoid candidates which...IWP): Paraphrase Acquisition and Applications, 2003. [55] A. Ittycheriah, M. Franz, and S. Roukos. IBM’s statistical question answering system - TREC

  11. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko

    2015-10-28

    The metagenomic data obtained from marine environments is significantly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of the entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data and increases data complexity. Moreover, when the metagenomic approach is used for monitoring changes in marine environments over time at multiple locations, metagenomic data will accumulate at a tremendous speed. Because this kind of situation has started to become a reality at many marine research institutions and stations all over the world, it is clear that data management and analysis will be confronted by so-called Big Data issues, such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenomes that are currently publicly available, noting that there is no database devoted exclusively to marine metagenomes and that only six metagenome databases include marine metagenome data, an unexpectedly small number. We also extend our discussion to what we call reference databases, which will be useful both for constructing a marine metagenome database and for complementing it with important information. Finally, we point out a number of challenges to be overcome in constructing the marine metagenome database.

  12. Mouse Resource Browser--a database of mouse databases.

    Science.gov (United States)

    Zouberakis, Michael; Chandras, Christina; Swertz, Morris; Smedley, Damian; Gruenberger, Michael; Bard, Jonathan; Schughart, Klaus; Rosenthal, Nadia; Hancock, John M; Schofield, Paul N; Kollias, George; Aidinis, Vassilis

    2010-07-06

    The laboratory mouse has become the organism of choice for discovering gene function and unravelling pathogenetic mechanisms of human diseases through the application of various functional genomic approaches. The resulting deluge of data has led to the deployment of numerous online resources and the concomitant need for formalized experimental descriptions, data standardization, database interoperability and integration, a need that has yet to be met. We present here the Mouse Resource Browser (MRB), a database of mouse databases that indexes 217 publicly available mouse resources under 22 categories and uses a standardised database description framework (the CASIMIR DDF) to provide information on their controlled vocabularies (ontologies and minimum information standards), and technical information on programmatic access and data availability. Focusing on interoperability and integration, MRB offers automatic generation of downloadable and re-distributable SOAP application-programming interfaces for resources that provide direct database access. MRB aims to provide useful information to both bench scientists, who can easily navigate and find all mouse related resources in one place, and bioinformaticians, who will be provided with interoperable resources containing data which can be mined and integrated. Database URL: http://bioit.fleming.gr/mrb.

  13. Parallel BLAST on split databases.

    Science.gov (United States)

    Mathog, David R

    2003-09-22

    BLAST programs often run on large SMP machines where multiple threads can work simultaneously and there is enough memory to cache the databases between program runs. A group of programs is described which allows comparable performance to be achieved with a Beowulf configuration in which no node has enough memory to cache a database but the cluster as an aggregate does. To achieve this result, databases are split into equal-sized pieces and stored locally on each node. Each query is run on all nodes in parallel and the resultant BLAST output files from all nodes are merged to yield the final output. Source code is available from ftp://saf.bio.caltech.edu/
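
    The split-run-merge scheme described in the abstract can be sketched as follows. This is a hedged toy in Python: substring matching stands in for BLAST, threads stand in for Beowulf nodes, and the record names are invented; only the structure (equal-sized database pieces, one search per piece, merged output) follows the description above.

```python
from concurrent.futures import ThreadPoolExecutor

def split_database(records, n_pieces):
    """Deal (name, sequence) records into n roughly equal pieces, mimicking
    how the database is split and stored locally on each cluster node."""
    pieces = [[] for _ in range(n_pieces)]
    for i, rec in enumerate(records):
        pieces[i % n_pieces].append(rec)
    return pieces

def search_piece(query, piece):
    """Stand-in for running BLAST on one piece: report each record whose
    sequence contains the query as an exact substring, with its offset."""
    return [(name, seq.index(query)) for name, seq in piece if query in seq]

def parallel_search(query, records, n_pieces=4):
    """Run the query against every piece in parallel, then merge the
    per-piece hit lists into one final output, as the merger step does."""
    pieces = split_database(records, n_pieces)
    with ThreadPoolExecutor(max_workers=n_pieces) as pool:
        partial = pool.map(lambda piece: search_piece(query, piece), pieces)
    return sorted(hit for hits in partial for hit in hits)

records = [("r1", "ACGTACGT"), ("r2", "TTTT"), ("r3", "GGACGT")]
```

    A query such as parallel_search("ACGT", records, 2) finds hits in r1 and r3 regardless of which piece each record lands in, because every piece is searched and the results are merged.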

  14. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw

  15. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions...... in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task...
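
    The department-store scenario in the abstract, aggregating sales from the local databases of several branches, can be sketched in ordinary Python with sqlite3 standing in for the branch databases. Klaim-DB itself is a formal coordination language, so this only illustrates the data flow; the schema and sales figures are invented.

```python
import sqlite3

def make_branch(sales):
    """Create one branch's local database with a toy sales table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (item TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", sales)
    conn.commit()
    return conn

def aggregate_sales(branches):
    """Query every branch's local database and merge the per-item totals,
    as the chain-wide aggregation in the scenario does."""
    totals = {}
    for conn in branches:
        for item, amount in conn.execute(
            "SELECT item, SUM(amount) FROM sales GROUP BY item"
        ):
            totals[item] = totals.get(item, 0) + amount
    return totals

branches = [
    make_branch([("shirt", 3), ("hat", 1)]),
    make_branch([("shirt", 2), ("shoe", 5)]),
]
```

    Each branch answers only its own local query; the aggregation layer merges the partial totals, which is the pattern Klaim-DB expresses with localities and database-valued tuple spaces.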

  16. The Danish Intensive Care Database

    DEFF Research Database (Denmark)

    Christiansen, Christian Fynbo; Møller, Morten Hylander; Nielsen, Henrik

    2016-01-01

    AIM OF DATABASE: The aim of this database is to improve the quality of care in Danish intensive care units (ICUs) by monitoring key domains of intensive care and to compare these with predefined standards. STUDY POPULATION: The Danish Intensive Care Database (DID) was established in 2007...... and includes virtually all ICU admissions in Denmark since 2005. The DID obtains data from the Danish National Registry of Patients, with complete follow-up through the Danish Civil Registration System. MAIN VARIABLES: For each ICU admission, the DID includes data on the date and time of ICU admission, type...

  17. Practical database programming with Java

    CERN Document Server

    Bai, Ying

    2011-01-01

    "This important resource offers a detailed description about the practical considerations and applications in database programming using Java NetBeans 6.8 with authentic examples and detailed explanations. This book provides readers with a clear picture as to how to handle the database programming issues in the Java NetBeans environment. The book is ideal for classroom and professional training material. It includes a wealth of supplemental material that is available for download including Powerpoint slides, solution manuals, and sample databases"--

  18. Enforcing Privacy in Cloud Databases

    OpenAIRE

    Moghadam, Somayeh Sobati; Darmont, Jérôme; Gavin, Gérald

    2017-01-01

    Outsourcing databases, i.e., resorting to Database-as-a-Service (DBaaS), is nowadays a popular choice due to the elasticity, availability, scalability and pay-as-you-go features of cloud computing. However, most data are sensitive to some extent, and data privacy remains one of the top concerns to DBaaS users, for obvious legal and competitive reasons. In this paper, we survey the mechanisms that aim at making databases secure in a cloud environment, and discuss current...

  19. InterAction Database (IADB)

    Science.gov (United States)

    The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.

  20. Disaster Debris Recovery Database - Landfills

    Data.gov (United States)

    U.S. Environmental Protection Agency — The US EPA Disaster Debris Recovery Database (DDRD) promotes the proper recovery, recycling, and disposal of disaster debris for emergency responders at the federal,...

  1. Disaster Debris Recovery Database - Recovery

    Data.gov (United States)

    U.S. Environmental Protection Agency — The US EPA Disaster Debris Recovery Database (DDRD) promotes the proper recovery, recycling, and disposal of disaster debris for emergency responders at the federal,...

  2. Freshwater Biological Traits Database (Traits)

    Science.gov (United States)

    The traits database was compiled for a project on climate change effects on river and stream ecosystems. The traits data, gathered from multiple sources, focused on information published or otherwise well-documented by trustworthy sources.

  3. JPL Small Body Database Browser

    Data.gov (United States)

    National Aeronautics and Space Administration — The JPL Small-Body Database Browser provides data for all known asteroids and many comets. Newly discovered objects and their orbits are added on a daily basis....

  4. Danish Colorectal Cancer Group Database

    DEFF Research Database (Denmark)

    Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H

    2016-01-01

    AIM OF DATABASE: The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. STUDY POPULATION: All Danish patients with newly diagnosed colorectal cancer who are either diagnosed......, and other pathological risk factors. DESCRIPTIVE DATA: The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal...... diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery. The Danish Colorectal Cancer Group provides high-quality data and has been documenting an increase in short- and long...

  5. The Danish Inguinal Hernia database

    DEFF Research Database (Denmark)

    Friis-Andersen, Hans; Bisgaard, Thue

    2016-01-01

    AIM OF DATABASE: To monitor and improve nation-wide surgical outcome after groin hernia repair based on scientific evidence-based surgical strategies for the national and international surgical community. STUDY POPULATION: Patients ≥18 years operated for groin hernia. MAIN VARIABLES: Type and size...... of hernia, primary or recurrent, type of surgical repair procedure, mesh and mesh fixation methods. DESCRIPTIVE DATA: According to the Danish National Health Act, surgeons are obliged to register all hernia repairs immediately after surgery (3 minute registration time). All institutions have continuous...... the medical management of the database. RESULTS: The Danish Inguinal Hernia Database comprises intraoperative data from >130,000 repairs (May 2015). A total of 49 peer-reviewed national and international publications have been published from the database (June 2015). CONCLUSION: The Danish Inguinal Hernia...

  6. Biological Sample Monitoring Database (BSMDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Biological Sample Monitoring Database System (BSMDBS) was developed for the Northeast Fisheries Regional Office and Science Center (NER/NEFSC) to record and...

  7. Consolidated Human Activities Database (CHAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Consolidated Human Activity Database (CHAD) contains data obtained from human activity studies that were collected at city, state, and national levels. CHAD is...

  8. Air Compliance Complaint Database (ACCD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Air Compliance Complaint Database (ACCD) which logs all air pollution complaints...

  9. E3 Portfolio Review Database

    Data.gov (United States)

    US Agency for International Development — The E3 Portfolio Review Database houses operational and performance data for all activities that the Bureau funds and/or manages. Activity-level data is collected by...

  10. Great Lakes Environmental Database (GLENDA)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Great Lakes Environmental Database (GLENDA) houses environmental data on a wide variety of constituents in water, biota, sediment, and air in the Great Lakes area.

  11. Tidal Creek Sentinel Habitat Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ecological Research, Assessment and Prediction's Tidal Creeks: Sentinel Habitat Database was developed to support the National Oceanic and Atmospheric...

  12. Drinking Water Treatability Database (TDB)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities,...

  13. Database of Interacting Proteins (DIP)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent...

  14. Geomagnetic Observatory Database February 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA National Centers for Environmental Information (formerly National Geophysical Data Center) maintains an active database of worldwide geomagnetic observatory...

  15. National Benthic Infaunal Database (NBID)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NBID is a quantitative database on abundances of individual benthic species by sample and study region, along with other synoptically measured environmental...

  16. Human Exposure Database System (HEDS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Human Exposure Database System (HEDS) provides public access to data sets, documents, and metadata from EPA on human exposure. It is primarily intended for...

  17. National Patient Care Database (NPCD)

    Data.gov (United States)

    Department of Veterans Affairs — The National Patient Care Database (NPCD), located at the Austin Information Technology Center, is part of the National Medical Information Systems (NMIS). The NPCD...

  18. Relational Multimodal Freight Database Webinar

    Science.gov (United States)

    2012-02-01

    The relational Multimodal Freight Database (MFD) was developed as part of Texas Department of Transportation (TxDOT) Research Project 0-6297, entitled Freight Planning Factors Impacting Texas Commodity Flows, conducted by the Center for Transporta...

  19. A comparison of different database technologies for the CMS AsyncStageOut transfer database

    Science.gov (United States)

    Ciangottini, D.; Balcas, J.; Mascheroni, M.; Rupeika, E. A.; Vaandering, E.; Riahi, H.; Silva, J. M. D.; Hernandez, J. M.; Belforte, S.; Ivanov, T. T.

    2017-10-01

    AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users’ transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user’s output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) for internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system’s scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and show how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.

  20. A Comparison of Different Database Technologies for the CMS AsyncStageOut Transfer Database

    Energy Technology Data Exchange (ETDEWEB)

    Ciangottini, D. [INFN, Perugia; Balcas, J. [Caltech; Mascheroni, M. [Fermilab; Rupeika, E. A. [Vilnius U.; Vaandering, E. [Fermilab; Riahi, H. [CERN; Silva, J. M.D. [Sao Paulo, IFT; Hernandez, J. M. [Madrid, CIEMAT; Belforte, S. [INFN, Trieste; Ivanov, T. T. [Sofiya U.

    2017-11-22

    AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users’ transfers in a centrally controlled way using the File Transfer System (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user’s output data to a single remote site was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate. Currently ASO manages up to 600k files of various sizes per day from more than 500 users per month, spread over more than 100 sites. ASO uses a NoSQL database (CouchDB) for internal bookkeeping and as a way to communicate with other CRAB components. Since ASO/CRAB were put in production in 2014, the number of transfers constantly increased up to a point where the pressure on the central CouchDB instance became critical, creating new challenges for the system’s scalability, performance, and monitoring. This forced a re-engineering of the ASO application to increase its scalability and lower its operational effort. In this contribution we present a comparison of the performance of the current NoSQL implementation and a new SQL implementation, and show how their different strengths and features influenced the design choices and operational experience. We also discuss other architectural changes introduced in the system to handle the increasing load and latency in delivering output to the user.

  1. A Call to Arms: Revisiting Database Design

    OpenAIRE

    Badia, Antonio; Lemire, Daniel

    2011-01-01

    Good database design is crucial to obtain a sound, consistent database, and - in turn - good database design methodologies are the best way to achieve the right design. These methodologies are taught to most Computer Science undergraduates, as part of any Introduction to Database class. They can be considered part of the "canon", and indeed, the overall approach to database design has been unchanged for years. Moreover, none of the major database research assessments identify database design ...

  2. GOBASE: an organelle genome database

    OpenAIRE

    O'Brien, Emmet A.; Zhang, Yue; Wang, Eric; Marie, Veronique; Badejoko, Wole; Lang, B. Franz; Burger, Gertraud

    2008-01-01

    The organelle genome database GOBASE, now in its 21st release (June 2008), contains all published mitochondrion-encoded sequences (~913 000) and chloroplast-encoded sequences (~250 000) from a wide range of eukaryotic taxa. For all sequences, information on related genes, exons, introns, gene products and taxonomy is available, as well as selected genome maps and RNA secondary structures. Recent major enhancements to database functionality include: (i) addition of an interface for RNA editing...

  3. Small Business Innovations (Integrated Database)

    Science.gov (United States)

    1992-01-01

    Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.

  4. Database Reports Over the Internet

    Science.gov (United States)

    Smith, Dean Lance

    2002-01-01

    Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data is stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (Hypertext Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
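
    The flow described above (browser form, SQL query, filled report template) can be sketched in Python. The original work used Java servlets and PDF form templates; here sqlite3 stands in for the DBMS, a plain-text template stands in for the PDF form, and the table schema and sample record are invented for illustration.

```python
import sqlite3
from string import Template

REPORT_TEMPLATE = Template(
    "Test Report\nPart: $part\nResult: $result\nTested on: $date\n"
)

def build_report(conn, part_id):
    """Run a parameterized SQL query for one test record and fill the
    report template, mirroring the query-then-fill-form flow."""
    row = conn.execute(
        "SELECT part, result, date FROM tests WHERE id = ?", (part_id,)
    ).fetchone()
    if row is None:
        return None  # the servlet would send an HTML error page instead
    part, result, date = row
    return REPORT_TEMPLATE.substitute(part=part, result=result, date=date)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tests (id INTEGER, part TEXT, result TEXT, date TEXT)")
conn.execute("INSERT INTO tests VALUES (1, 'valve', 'PASS', '2002-07-15')")
```

    The parameterized query (`?` placeholder) matters for the same reason it did in the original setup: report requests arrive from an untrusted browser form, so the request value must never be spliced into the SQL string directly.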

  5. Clinical Databases for Chest Physicians.

    Science.gov (United States)

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health condition or exposure. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  6. A framework for cross-observatory volcanological database management

    Science.gov (United States)

    Aliotta, Marco Antonio; Amore, Mauro; Cannavò, Flavio; Cassisi, Carmelo; D'Agostino, Marcello; Dolce, Mario; Mastrolia, Andrea; Mangiagli, Salvatore; Messina, Giuseppe; Montalto, Placido; Fabio Pisciotta, Antonino; Prestifilippo, Michele; Rossi, Massimo; Scarpato, Giovanni; Torrisi, Orazio

    2017-04-01

In recent years it has become clear that a multiparametric approach is the winning strategy for investigating the complex dynamics of volcanic systems. It involves the use of different sensor networks, each dedicated to acquiring particular data useful for research and monitoring. The increasing interest in volcanological phenomena has led to the creation of different research organizations and observatories, sometimes devoted to the same volcanoes, which acquire large amounts of data from sensor networks for multiparametric monitoring. At INGV we developed a framework, hereinafter called TSDSystem (Time Series Database System), which acquires data streams from several geophysical and geochemical permanent sensor networks (drawing on different data sources such as ASCII files, ODBC connections, URLs, etc.) located in the main volcanic areas of Southern Italy, and relates them within a relational database management system. Furthermore, spatial data related to the different datasets are managed using a GIS module for sharing and visualization purposes. This standardization makes it possible to perform operations, such as query and visualization, over many measures, synchronizing them on a common space and time scale. In order to share data between INGV observatories, and also with the Civil Protection authorities, whose activity concerns the same volcanic districts, we designed a "Master View" system that, starting from a number of instances of the TSDSystem framework (one for each observatory), makes possible the joint interrogation of data, both temporal and spatial, on instances located in different observatories, through the use of web services technology (RESTful, SOAP). Similarly, it provides metadata for equipment using standard schemas (such as FDSN StationXML). The "Master View" is also responsible for managing the data policy through a "who owns what" system, which allows the association of viewing/download of
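The synchronization onto "a common space and time scale" that the abstract mentions can be illustrated with a small resampling routine. This is a hedged sketch of the general idea, not TSDSystem code: the 60-second grid, the stream names, and the nearest-previous-sample policy are all assumptions made for the example.

```python
# Sketch: align two irregularly sampled (timestamp, value) streams from
# different sensor networks onto one regular time grid so they can be
# queried and visualized together.

def resample(stream, t0, t1, step):
    """Nearest-previous-sample resampling onto a regular grid [t0, t1)."""
    stream = sorted(stream)
    grid = []
    for t in range(t0, t1, step):
        # take the last observation at or before t, if any
        prior = [v for (ts, v) in stream if ts <= t]
        grid.append((t, prior[-1] if prior else None))
    return grid

seismic = [(0, 1.1), (70, 1.3)]     # one network, its own sampling times
geochem = [(10, 40.0), (65, 42.5)]  # another source, another clock

common = {
    "seismic": resample(seismic, 0, 180, 60),
    "geochem": resample(geochem, 0, 180, 60),
}
```

Once both streams share the same grid, joint queries across observatories reduce to aligning rows on the timestamp column.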

  7. A New Optimized Model to Handle Temporal Data using Open Source Database

    Directory of Open Access Journals (Sweden)

    KUMAR, S.

    2017-05-01

Full Text Available The majority of database applications nowadays deal with temporal data. Temporal records change over the course of time, and facilities to manage multiple snapshots of these records are generally missing in conventional databases. Consequently, different temporal data models have been proposed and implemented as extensions of non-temporal database systems. In the single-relation model, the present and past instances are stored in one relation, which makes their handling cumbersome and inefficient. This paper emphasizes storing the past instances of records in multiple historical relations, while the current relations manage the most recent snapshot of the data. The tuple time-stamping approach is used to timestamp the temporal records. The paper proposes a temporal model for the management of time-varying data built on top of a conventional open source database, with indexing used to enhance the performance of the model. The proposed model is also compared with the single-relation model.
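The current-relation/historical-relation split with tuple time stamping can be sketched over any conventional open source database. This is a minimal illustration under an assumed schema (the paper's actual tables and timestamps may differ), using sqlite3 as the conventional database:

```python
import sqlite3

# Assumed schema: a current relation holding the latest snapshot, and a
# separate historical relation for superseded instances (tuple time stamping).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, salary REAL, valid_from INTEGER)")
db.execute("CREATE TABLE employee_hist (emp_id INTEGER, salary REAL, valid_from INTEGER, valid_to INTEGER)")
# indexing the history table, as the paper uses indexing for performance
db.execute("CREATE INDEX hist_idx ON employee_hist (emp_id, valid_to)")

def set_salary(emp_id, salary, now):
    old = db.execute("SELECT salary, valid_from FROM employee WHERE emp_id = ?",
                     (emp_id,)).fetchone()
    if old:
        # close out the superseded instance into the historical relation
        db.execute("INSERT INTO employee_hist VALUES (?, ?, ?, ?)",
                   (emp_id, old[0], old[1], now))
    # the current relation keeps only the recent snapshot
    db.execute("INSERT OR REPLACE INTO employee VALUES (?, ?, ?)", (emp_id, salary, now))

set_salary(1, 100.0, now=10)
set_salary(1, 120.0, now=20)
```

Queries over the present state touch only the small current relation; temporal queries go to the indexed history, which is the efficiency argument against the single-relation model.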

  8. Naval Ship Database: Database Design, Implementation, and Schema

    Science.gov (United States)

    2013-09-01

Technical Note documents a database that was designed and implemented to manage a collection of historical data housed by the Maritime Operational... designated home ports, geolocation, operational statuses, reference time zone changes, observed meteorological data, etc. We... database has been recognized as a beneficial step forward in its data management, and will simplify ongoing efforts within MORT as it

  9. REDIdb: the RNA editing database.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed to catalogue RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat-file containing a description of the main characteristics of the entry, a feature table with the editing events and related details, and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities that either graphically display genomic or cDNA sequences or show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be submitted directly to REDIdb after a user-specific registration to obtain authorized secure access. This first version of the REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  10. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  11. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  12. Implementasi Database Auditing dengan Memanfaatkan Sinkronisasi DBMS

    Directory of Open Access Journals (Sweden)

    Gede Anantaswarya Abhisena

    2017-08-01

Full Text Available Database auditing can be an important component of database security and of compliance with government regulations. Database administrators need to be increasingly vigilant about the techniques used to protect corporate data, and must monitor and ensure that adequate protection of the data is in place. At a high level, database auditing is a facility for tracking the authority over, and use of, database resources. When the auditing function is enabled, every audited database operation produces an audit trail of the information changes made. Database synchronization is a form of replication: a process for ensuring that each copy of the data in a database contains the same objects and data. Database synchronization can be exploited for various purposes, one of which is building an auditing facility that records every activity occurring in the database. The resulting audit trail of database operations allows the DBA (Database Administrator) to maintain audit trails over time and to analyze patterns of access to, and modification of, the data in the DBMS (Database Management System).
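An audit trail of the kind described, where each audited operation records the change it made, can be sketched with database triggers. This is an illustrative example only (the paper builds its auditing on DBMS synchronization, not on SQLite triggers); the table and column names are assumptions.

```python
import sqlite3

# Assumed schema for the example: an audited table plus an audit-trail table
# populated automatically whenever an UPDATE occurs.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
db.execute("""CREATE TABLE audit_trail
              (op TEXT, account_id INTEGER, old_balance REAL, new_balance REAL)""")
db.execute("""CREATE TRIGGER audit_update AFTER UPDATE ON account
              BEGIN
                INSERT INTO audit_trail VALUES
                  ('UPDATE', OLD.id, OLD.balance, NEW.balance);
              END""")

db.execute("INSERT INTO account VALUES (1, 50.0)")
db.execute("UPDATE account SET balance = 75.0 WHERE id = 1")
# audit_trail now holds one row recording the old and new values
```

The DBA can then query `audit_trail` over time to analyze access and modification patterns, which is the role the abstract assigns to the audit trail.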

  13. Reproducibility, sharing and progress in nanomaterial databases

    Science.gov (United States)

    Tropsha, Alexander; Mills, Karmann C.; Hickey, Anthony J.

    2017-12-01

    Publicly accessible databases are core resources for data-rich research, consolidating field-specific knowledge and highlighting best practices and challenges. Further effective growth of nanomaterial databases requires the concerted efforts of database stewards, researchers, funding agencies and publishers.

  14. ADASS Web Database XML Project

    Science.gov (United States)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
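The relational-to-XML mapping the paper explores can be sketched briefly. This is a hedged illustration, not the project's code: sqlite3 stands in for the MySQL database, and the table, element, and attribute names are assumptions.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Assumed relational source: one table of conference papers.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE papers (title TEXT, author TEXT)")
db.execute("INSERT INTO papers VALUES ('ADASS Web Database XML Project', 'Barg')")

# Map each row to a well-formed XML element, so downstream processing
# across the Web can be automated rather than scraped from ad hoc HTML.
root = ET.Element("conference", year="2000")
for title, author in db.execute("SELECT title, author FROM papers"):
    paper = ET.SubElement(root, "paper")
    ET.SubElement(paper, "title").text = title
    ET.SubElement(paper, "author").text = author

xml_text = ET.tostring(root, encoding="unicode")
```

Because the output is guaranteed well-formed, any standard XML parser can consume it, which is exactly the advantage over the iterative parsing of malformed HTML described above.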

  15. The Danish Prostate Cancer Database

    DEFF Research Database (Denmark)

    Nguyen-Nielsen, Mary; Høyer, Søren; Friis, Søren

    2016-01-01

AIM OF DATABASE: The Danish Prostate Cancer Database (DAPROCAdata) is a nationwide clinical cancer database that has prospectively collected data on patients with incident prostate cancer in Denmark since February 2010. The overall aim of the DAPROCAdata is to improve the quality of prostate cancer care in Denmark by systematically collecting key clinical variables for the purposes of health care monitoring, quality improvement, and research. STUDY POPULATION: All Danish patients with histologically verified prostate cancer are included in the DAPROCAdata. MAIN VARIABLES: The DAPROCAdata registers clinical data and selected characteristics for patients with prostate cancer at diagnosis. Data are collected from the linkage of nationwide health registries and supplemented with online registration of key clinical variables by treating physicians at urological and oncological departments. Main...

  16. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1983-01-01

    Database management techniques are applied to automating the setup of operating parameters of a heavy-ion accelerator used in nuclear physics experiments. Data files consist of ion-beam attributes, the interconnection assignments of the numerous power supplies and magnetic elements that steer the ions' path through the system, the data values that represent the electrical currents supplied by the power supplies, as well as the positions of motors and status of mechanical actuators. The database is relational and permits searching on ranges of any subset of the ion-beam attributes. A file selected from the database is used by the control software to replicate the ion beam conditions by adjusting the physical elements in a continuous manner
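The relational range search described, selecting a saved setup file by ranges over any subset of ion-beam attributes, can be sketched as follows. The attribute names and values are illustrative assumptions, not the accelerator's actual schema:

```python
import sqlite3

# Assumed attributes for the example: each saved setup file records the
# ion-beam energy and mass it was tuned for.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE beam_setup (file_id INTEGER, energy_mev REAL, mass_amu REAL)")
db.executemany("INSERT INTO beam_setup VALUES (?, ?, ?)",
               [(1, 10.0, 16.0), (2, 25.0, 58.0), (3, 40.0, 197.0)])

def find_setups(energy=(0, 1e9), mass=(0, 1e9)):
    # each attribute may be constrained by a (low, high) range;
    # unconstrained attributes default to an all-inclusive range
    return [r[0] for r in db.execute(
        "SELECT file_id FROM beam_setup "
        "WHERE energy_mev BETWEEN ? AND ? AND mass_amu BETWEEN ? AND ?",
        (*energy, *mass))]

matches = find_setups(energy=(20, 50))
```

A file selected this way would then drive the control software that adjusts power supplies and magnetic elements to replicate the recorded conditions.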

  17. The Danish Smoking Cessation Database

    DEFF Research Database (Denmark)

    Rasmussen, Mette; Tønnesen, Hanne

    2016-01-01

Background: The Danish Smoking Cessation Database (SCDB) was established in 2001 as the first national healthcare register within the field of health promotion. Aim of the database: The aim of the SCDB is to document and evaluate smoking cessation (SC) interventions to assess and improve …, and prognostic factors. The outcome data are smoking status at the end of the programme and after six months, and satisfaction with the SC intervention. Validity: Approximately 80-90% of all SC clinics offering systematic face-to-face SC interventions report data to the SCDB. The data completeness of the SCDB is very high, at 95-100%. Validation checks have been implemented to ensure high data quality. Conclusion: The SCDB is a well-established clinical database and a priceless tool for monitoring and improving SC interventions in Denmark to identify the best solution to helping smokers become smoke…

  18. Database Description - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Creator Affiliation: Computational Biology Research Center (CBRC). Creator Name: Kazuhiko Fukui; Creator Affiliation: Computational Biology Research Center (CBRC). Creator Name: Hiroyuki Toh; Creator Affiliation: Computational Biology Research Center (CBRC). GRIPDB: G protein coupled Receptor Interaction Partners DataBase. Author name(s): Nemoto W, Fukui K, Toh H. Journal: J Re

  19. Database Cracking: Towards Auto-tuning Database Kernels

    NARCIS (Netherlands)

    S. Idreos (Stratos)

    2010-01-01

    textabstractIndices are heavily used in database systems in order to achieve the ultimate query processing performance. It takes a lot of time to create an index and the system needs to reserve extra storage space to store the auxiliary data structure. When updates arrive, there is also the

  20. Database cracking: towards auto-tuning database kernels

    NARCIS (Netherlands)

    Ydraios, E.

    2010-01-01

    Indices are heavily used in database systems in order to achieve the ultimate query processing performance. It takes a lot of time to create an index and the system needs to reserve extra storage space to store the auxiliary data structure. When updates arrive, there is also the overhead of

  1. The CATDAT damaging earthquakes database

    Directory of Open Access Journals (Sweden)

    J. E. Daniell

    2011-08-01

Full Text Available The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.

    Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon.

Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).

    Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto ($214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>$300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.

    This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  2. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, easing the sharing of knowledge for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered into the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collecting patient data including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received, and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, to integrate the database into the SINPE© system, thereby providing a standardized method for collecting data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  3. The CATDAT damaging earthquakes database

    Science.gov (United States)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto (214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  4. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  5. Digitizing Olin Eggen's Card Database

    Science.gov (United States)

    Crast, J.; Silvis, G.

    2017-06-01

    The goal of the Eggen Card Database Project is to recover as many of the photometric observations from Olin Eggen's Card Database as possible and preserve these observations, in digital forms that are accessible by anyone. Any observations of interest to the AAVSO will be added to the AAVSO International Database (AID). Given to the AAVSO on long-term loan by the Cerro Tololo Inter-American Observatory, the database is a collection of over 78,000 index cards holding all Eggen's observations made between 1960 and 1990. The cards were electronically scanned and the resulting 108,000 card images have been published as a series of 2,216 PDF files, which are available from the AAVSO web site. The same images are also stored in an AAVSO online database where they are indexed by star name and card content. These images can be viewed using the eggen card portal online tool. Eggen made observations using filter bands from five different photometric systems. He documented these observations using 15 different data recording formats. Each format represents a combination of filter magnitudes and color indexes. These observations are being transcribed onto spreadsheets, from which observations of value to the AAVSO are added to the AID. A total of 506 U, B, V, R, and I observations were added to the AID for the variable stars S Car and l Car. We would like the reader to search through the card database using the eggen card portal for stars of particular interest. If such stars are found and retrieval of the observations is desired, e-mail the authors, and we will be happy to help retrieve those data for the reader.

  6. Danish Colorectal Cancer Group Database.

    Science.gov (United States)

    Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H

    2016-01-01

The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. All Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The stage distribution has been more or less constant until 2014, with a tendency toward a lower rate of stage IV and a higher rate of stage I after introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been reduced from >7% in 2001-2003 to … The database is a national population-based clinical database with high patient and data completeness for the perioperative period. The resolution of data is high for description of the patient at the time of diagnosis, including comorbidities, and for characterizing diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery.
The Danish Colorectal Cancer Group provides high-quality data and has been documenting an increase in short- and long

  7. Legume and Lotus japonicus Databases

    DEFF Research Database (Denmark)

    Hirakawa, Hideki; Mun, Terry; Sato, Shusei

    2014-01-01

Since the genome sequence of Lotus japonicus, a model plant of the family Fabaceae, was determined in 2008 (Sato et al. 2008), the genomes of other members of the Fabaceae family, soybean (Glycine max) (Schmutz et al. 2010) and Medicago truncatula (Young et al. 2011), have been sequenced. In this section, we introduce representative, publicly accessible online resources related to plant materials, integrated databases containing legume genome information, and databases for genome sequence and derived marker information of legume species including L. japonicus.

  8. The RERF dosimetry measurements database

    International Nuclear Information System (INIS)

    Cullings, Harry M.; Fujita, Shoichiro; Preston, Dale L.; Grant, Eric J.; Shizuma, Kiyoshi; Hoshi, Masaharu; Maruyama, Takashi; Lowder, Wayne M.

    2005-01-01

    The Radiation Effects Research Foundation maintains a database containing detailed information on every known measurement of environmental materials in the cities of Hiroshima and Nagasaki for gamma-ray thermoluminescence or neutron activation produced by incident radiation from the atomic bomb detonations. The intent was to create a single information resource that would consistently document, as completely as possible in each case, a standard array of data for every known measurement. This database provides a uniquely comprehensive and carefully designed reference for the dosimetry reassessment. (J.P.N.)

  9. The STRING database in 2017

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Morris, John H; Cook, Helen

    2017-01-01

A system-wide understanding of cellular function requires knowledge of all functional interactions between the expressed proteins. The STRING database aims to collect and integrate this information by consolidating known and predicted protein-protein association data for a large number of organisms.

  10. Q-bank phytoplasma database

    DEFF Research Database (Denmark)

    Contaldo, Nicoletta; Bertaccini, Assunta; Nicolaisen, Mogens

    2014-01-01

    The setting up of the Q-Bank database, freely available online for quarantine phytoplasma and also for general phytoplasma identification, is described. The tool was developed in the frame of the EU-FP7 project Qbol and is linked with a new project, Q-collect, in order to make widely available the identi...

  11. A Sandia telephone database system

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.; Tolendino, L.F.

    1991-08-01

    Sandia National Laboratories, Albuquerque, may soon have more responsibility for the operation of its own telephone system. The processes that constitute providing telephone service can all be improved through the use of a central data information system. We studied these processes, determined the requirements for a database system, then designed the first stages of a system that meets our needs for work order handling, trouble reporting, and ISDN hardware assignments. The design was based on an extensive set of applications that have been used for five years to manage the Sandia secure data network. The system utilizes an Ingres database management system and is programmed using the Application-By-Forms tools.

  12. CD-ROM-aided Databases

    Science.gov (United States)

    Masuyama, Keiichi

    CD-ROM has rapidly evolved as a new information medium with large capacity. In the U.S. it is predicted that it will become a two-hundred-billion-yen market within three years, and thus CD-ROM is a strategic target of the database industry. Here in Japan, the movement toward its commercialization has been active since this year. Will the CD-ROM business ever conquer the information market as an on-disk database or electronic publication? Referring to some cases of its application in the U.S., the author views the marketability and the future trend of this new optical disk medium.

  13. A Novel Multiple-Instance Learning-Based Approach to Computer-Aided Detection of Tuberculosis on Chest X-Rays

    NARCIS (Netherlands)

    Melendez Rodriguez, J.C.; Ginneken, B. van; Maduskar, P.; Philipsen, R.H.H.M.; Reither, K.; Breuninger, M.; Adetifa, I.M.O.; Maane, R.; Ayles, H.; Sanchez, C.I.

    2015-01-01

    In order to reach performance levels comparable to those of human experts, computer-aided detection (CAD) systems are typically optimized by means of a supervised learning approach that relies on large training databases comprising manually annotated lesions. However, manually outlining those...

  14. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available Under the support of the National Digital Archive Program (NDAP), basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literature, has been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw). We expect that the all-Taiwanese-fish-species databank (RSD), with 2,800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to obtain but will be collected continuously in the future. In the last year of NDAP, we successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan, as well as their collection maps, on Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far). The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF) through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF). Thus, Taiwanese fish data can be queried and browsed on the WWW. As contributions to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanked tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.

  15. ASTEROID POLARIMETRIC DATABASE V3.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Asteroid Polarimetric Database (APD) is compiled by Dmitrij Lupishko of Kharkov University. This database tabulates the original data comprising degree of...

  16. ASTEROID POLARIMETRIC DATABASE V4.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Asteroid Polarimetric Database (APD) is compiled by Dmitrij Lupishko of Kharkov University. This database tabulates the original data comprising degree of...

  17. ASTEROID POLARIMETRIC DATABASE V2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Asteroid Polarimetric Database (APD) is compiled by Dmitrij Lupishko of Kharkov University. This database tabulates the original data comprising degree of...

  18. ASTEROID POLARIMETRIC DATABASE V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The Asteroid Polarimetric Database (APD) is compiled by Dmitrij Lupishko of Kharkov University. This database tabulates the original data comprising degree of...

  19. Update History of This Database - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database (RED). 2015/12/21: Rice Expression Database (RED) site opened.

  20. Interactive bibliographical database on color

    Science.gov (United States)

    Caivano, Jose L.

    2002-06-01

    The paper describes the methodology and results of a project under development, aimed at the elaboration of an interactive bibliographical database on color in all fields of application: philosophy, psychology, semiotics, education, anthropology, physical and natural sciences, biology, medicine, technology, industry, architecture and design, arts, linguistics, geography, history. The project is initially based upon an already developed bibliography, published in different journals, updated on various occasions, and now available on the Internet, with more than 2,000 entries. The interactive database will expand that bibliography, incorporating hyperlinks and contents (indexes, abstracts, keywords, introductions, or eventually the complete document), and devising mechanisms for information retrieval. The sources to be included are: books, doctoral dissertations, multimedia publications, reference works. The main arrangement will be chronological, but the design of the database will allow rearrangements or selections by different fields: subject, Decimal Classification System, author, language, country, publisher, etc. A further project is to develop another database, including color-specialized journals or newsletters, and articles on color published in international journals, arranged in this case by journal name and date of publication, but also allowing rearrangements or selections by author, subject and keywords.

  1. Immune epitope database analysis resource

    DEFF Research Database (Denmark)

    Kim, Yohan; Ponomarenko, Julia; Zhu, Zhanyang

    2012-01-01

    The immune epitope database analysis resource (IEDB-AR: http://tools.iedb.org) is a collection of tools for prediction and analysis of molecular targets of T- and B-cell immune responses (i.e. epitopes). Since its last publication in the NAR webserver issue in 2008, a new generation of peptide...

  2. Review of natural product databases.

    Science.gov (United States)

    Xie, Tao; Song, Sicheng; Li, Sijia; Ouyang, Liang; Xia, Lin; Huang, Jian

    2015-08-01

    Many natural products have pharmacological or biological activities that can be of therapeutic benefit in treating diseases, and they are also an important source of inspiration for the development of potential novel drugs. The past few decades have witnessed extensive study of natural products for their promising prospects in applications in medicinal chemistry, molecular biology and pharmaceutical sciences. Natural product databases provide systematic collections of information concerning natural products and their derivatives, including structure, source and mechanisms of action, which significantly support modern drug discovery. Currently, a considerable number of natural product databases, such as TCM Database@Taiwan, TCMID, CEMTDD, SuperToxic and SuperNatural, have been developed, providing data such as integrated medicinal herbs, ingredients, 2D/3D structures of the compounds, related target proteins, relevant diseases, metabolic toxicity and more. We focus on an analytical overview of current natural product databases, and further discuss their strengths and shortcomings, in the hope of better integrating existing relevant outcomes and thus providing new routes for future drug discovery. © 2015 John Wiley & Sons Ltd.

  3. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project is coordinating research towards extended use of VC. Within this Annex 62, the joint research activity of the International VC Application Database has been carried out, ...

  4. The COMPADRE Plant Matrix Database

    DEFF Research Database (Denmark)

    2014-01-01

    COMPADRE contains demographic information on hundreds of plant species. The data in COMPADRE are in the form of matrix population models and our goal is to make these publicly available to facilitate their use for research and teaching purposes. COMPADRE is an open-access database. We only request...

  5. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of from 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of operating parameters required to produce an ion beam. However, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters
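
    The searching described above, matching beams on ranges of any subset of the categorizing attributes, can be sketched as a dynamically assembled SQL query. The schema below is hypothetical (the abstract does not list the eleven attributes, so "ion", "energy", and "intensity" are stand-ins), and SQLite stands in for the original database system.

```python
import sqlite3

# Hypothetical beam table; the real ORIC database had eleven
# categorizing attributes, three invented ones are used here.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE beams (
    beam_id INTEGER PRIMARY KEY,
    ion TEXT, energy REAL, intensity REAL)""")
conn.executemany(
    "INSERT INTO beams (ion, energy, intensity) VALUES (?, ?, ?)",
    [("O-16", 100.0, 2.5), ("Ne-20", 120.0, 1.0), ("O-16", 95.0, 3.0)])

def find_beams(conn, **ranges):
    """Match beams on ranges of any subset of numeric attributes.

    Each keyword maps a column name to a (low, high) pair. Column
    names come from trusted keywords here; values are parameterized.
    """
    clauses, params = [], []
    for column, (low, high) in ranges.items():
        clauses.append(f"{column} BETWEEN ? AND ?")
        params.extend([low, high])
    where = " AND ".join(clauses) if clauses else "1=1"
    return conn.execute(
        f"SELECT beam_id, ion, energy FROM beams WHERE {where}",
        params).fetchall()

print(find_beams(conn, energy=(90.0, 110.0)))
# prints [(1, 'O-16', 100.0), (3, 'O-16', 95.0)]
```

    With no keyword arguments the function returns every stored beam, mirroring the point that any subset of the attributes, including none, may be constrained.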

  6. Protein-Protein Interaction Databases

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Jensen, Lars Juhl

    2015-01-01

    ...of research are explored. Here we present an overview of the most widely used protein-protein interaction databases and the methods they employ to gather, combine, and predict interactions. We also point out the trade-off between comprehensiveness and accuracy and the main pitfall scientists have to be aware...

  7. LMSD: LIPID MAPS structure database

    Science.gov (United States)

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Brown, Alex; Dennis, Edward A.; Glass, Christopher K.; Merrill, Alfred H.; Murphy, Robert C.; Raetz, Christian R. H.; Russell, David W.; Subramaniam, Shankar

    2007-01-01

    The LIPID MAPS Structure Database (LMSD) is a relational database encompassing structures and annotations of biologically relevant lipids. Structures of lipids in the database come from four sources: (i) the LIPID MAPS Consortium's core laboratories and partners; (ii) lipids identified by LIPID MAPS experiments; (iii) computationally generated structures for appropriate lipid classes; (iv) biologically relevant lipids manually curated from LIPID BANK, LIPIDAT and other public sources. All the lipid structures in LMSD are drawn in a consistent fashion. In addition to a classification-based retrieval of lipids, users can search LMSD using either text-based or structure-based search options. The text-based search implementation supports data retrieval by any combination of these data fields: LIPID MAPS ID, systematic or common name, mass, formula, category, main class, and subclass. The structure-based search, in conjunction with optional data fields, provides the capability to perform a substructure search or exact match for the structure drawn by the user. Search results, in addition to structure and annotations, also include relevant links to external databases. The LMSD is publicly available online. PMID: 17098933

  8. The NASA Fireball Network Database

    Science.gov (United States)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  9. Thermodynamic Database for Zirconium Alloys

    International Nuclear Information System (INIS)

    Jerlerud Perez, Rosa

    2003-05-01

    For many decades zirconium alloys have been commonly used in the nuclear power industry as fuel cladding material. Besides their good corrosion resistance and acceptable mechanical properties, the main reason for using these alloys is their low neutron absorption. Zirconium alloys are exposed to a very severe environment during the nuclear fission process, and there is a demand for better design of this material. To meet this requirement, a thermodynamic database has been developed to support material designers. In this thesis some aspects of the development of a thermodynamic database for zirconium alloys are presented. A thermodynamic database represents an important facility in applying thermodynamic equilibrium calculations to a given material, providing: 1) relevant information about the thermodynamic properties of the alloys, e.g. enthalpies, activities, and heat capacity, and 2) significant information for the manufacturing process, e.g. heat treatment temperature. The basic information in the database is first the unary data, i.e. the pure elements, which are taken from the compilation of the Scientific Group Thermodata Europe (SGTE), and then the binary and ternary systems. All phases present in those binary and ternary systems are described by means of the Gibbs energy dependence on composition and temperature. Many of those binary systems have been taken from published or unpublished works, and others have been assessed in the present work. All the calculations have been made using the Thermo-Calc software, and the representation of the Gibbs energy was obtained by applying the Calphad technique.

  10. A Database Management Assessment Instrument

    Science.gov (United States)

    Landry, Jeffrey P.; Pardue, J. Harold; Daigle, Roy; Longenecker, Herbert E., Jr.

    2013-01-01

    This paper describes an instrument designed for assessing learning outcomes in data management. In addition to assessment of student learning and ABET outcomes, we have also found the instrument to be effective for determining database placement of incoming information systems (IS) graduate students. Each of these three uses is discussed in this…

  11. The Danish Cardiac Rehabilitation Database.

    Science.gov (United States)

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole

    2016-01-01

    The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). The study population comprises hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising patient-level data from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected are related to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. The DHRD is an online national quality-improvement database on CR, aimed at patients with CHD. Registration of data at both the patient and the program level is mandatory. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients.

  12. Database on wind characteristics - Contents of database bank

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    The main objective of IEA R&D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. Connected to an extension of the initial Annex period, the scope of the continuation was widened also to include support to the international wind turbine standardisation efforts. The project partners are Sweden, Norway, U.S.A., The Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls in two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving...

  13. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License to Use This Database (last updated: 2017/02/27). This license specifies the terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here. With regard to this database, you ar...

  14. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License to Use This Database (last updated: 2017/03/13). This license specifies the terms regarding the use of this database and the requirements you must follow in using this database. The license for this database is specified in the Creative Commons Attribution-Share Alike 4.0 International. If you use data from this database, please be sure to attribute this database. The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here. With regard to this database...

  15. WMC Database Evaluation. Case Study Report

    Energy Technology Data Exchange (ETDEWEB)

    Palounek, Andrea P. T [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-29

    The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.

  16. Brede Tools and Federating Online Neuroinformatics Databases

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup

    2014-01-01

    As open science neuroinformatics databases, the Brede Database and Brede Wiki seek to make distribution and federation of their content as easy and transparent as possible. The databases rely on simple formats and allow other online tools to reuse their content. This paper describes the possible interconnections on different levels between the Brede tools and other databases.

  17. Online Petroleum Industry Bibliographic Databases: A Review.

    Science.gov (United States)

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  18. Migration Between NoSQL Databases

    OpenAIRE

    Opačak, Damir

    2013-01-01

    The thesis discusses the differences and, consequently, potential problems that may arise when migrating between different types of NoSQL databases. The first chapters introduce the reader to the issues of relational databases and present the beginnings of NoSQL databases. The following chapters present different types of NoSQL databases and some of their representatives with the aim to show specific features of NoSQL databases and the fact that each of them was developed to solve specifi...

  19. Computer aided design of database relation schemes

    OpenAIRE

    Brehovský, Martin

    2006-01-01

    The project "Computer aided design of database relation schemes" shows the importance of, and the problems with, designing present-day database schemas, and offers ideas on possible solutions to these problems. Its main purpose is to describe a solution for optimizing database design, that is, transforming database schemas into tables. These tables should also fulfill requirements based on the need for data consistency and database maintenance. The main part of this project is an application, which implements ...

  20. NLTE4 Plasma Population Kinetics Database

    Science.gov (United States)

    SRD 159 NLTE4 Plasma Population Kinetics Database (Web database for purchase)   This database contains benchmark results for simulation of plasma population kinetics and emission spectra. The data were contributed by the participants of the 4th Non-LTE Code Comparison Workshop who have unrestricted access to the database. The only limitation for other users is in hidden labeling of the output results. Guest users can proceed to the database entry page without entering userid and password.

  1. Multidimensional Databases and Data Warehousing

    DEFF Research Database (Denmark)

    Jensen, Christian S.; Pedersen, Torben Bach; Thomsen, Christian

    The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes...... data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts that are considered...... techniques that are particularly important to multidimensional databases, including materialized views, bitmap indices, join indices, and star join processing. The book ends with a chapter that presents the literature on which the book is based and offers further readings for those readers who wish to engage...

  2. The National Land Cover Database

    Science.gov (United States)

    Homer, Collin H.; Fry, Joyce A.; Barnes, Christopher A.

    2012-01-01

    The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.

  3. Online Databases for Health Professionals

    OpenAIRE

    Marshall, Joanne Gard

    1987-01-01

    Recent trends in the marketing of electronic information technology have increased interest among health professionals in obtaining direct access to online biomedical databases such as Medline. During 1985, the Canadian Medical Association (CMA) and Telecom Canada conducted an eight-month trial of the use made of online information retrieval systems by 23 practising physicians and one pharmacist. The results of this project demonstrated both the value and the limitations of these systems in p...

  4. The Danish Intensive Care Database

    Directory of Open Access Journals (Sweden)

    Christiansen CF

    2016-10-01

    Full Text Available Christian Fynbo Christiansen,1 Morten Hylander Møller,2 Henrik Nielsen,1 Steffen Christensen3 1Department of Clinical Epidemiology, Institute of Clinical Medicine, Aarhus University Hospital, Aarhus, 2Department of Intensive Care 4131, Copenhagen University Hospital Rigshospitalet, Copenhagen, 3Department of Intensive Care, Aarhus University Hospital, Aarhus, Denmark Aim of database: The aim of this database is to improve the quality of care in Danish intensive care units (ICUs) by monitoring key domains of intensive care and to compare these with predefined standards. Study population: The Danish Intensive Care Database (DID) was established in 2007 and includes virtually all ICU admissions in Denmark since 2005. The DID obtains data from the Danish National Registry of Patients, with complete follow-up through the Danish Civil Registration System. Main variables: For each ICU admission, the DID includes data on the date and time of ICU admission, type of admission, organ supportive treatments, date and time of discharge, status at discharge, and mortality up to 90 days after admission. Descriptive variables include age, sex, Charlson comorbidity index score, and, since 2010, the Simplified Acute Physiology Score (SAPS II). The variables are recorded with 90%–100% completeness in the recent years, except for the SAPS II score, which is 73%–76% complete. The DID currently includes five quality indicators. Process indicators include out-of-hour discharge and transfer to other ICUs for capacity reasons. Outcome indicators include ICU readmission within 48 hours and standardized mortality ratios for death within 30 days after admission using case-mix adjustment (initially using age, sex, and comorbidity level, and, since 2013, using SAPS II) for all patients and for patients with septic shock. Descriptive data: The DID currently includes 335,564 ICU admissions during 2005–2015 (average 31,958 ICU admissions per year). Conclusion: The DID provides a...
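
    The standardized mortality ratio used as an outcome indicator above is, in general terms, observed deaths divided by the deaths expected under a case-mix model such as SAPS II. The sketch below illustrates only that arithmetic; the predicted probabilities are invented for illustration, not DID data.

```python
def standardized_mortality_ratio(observed_deaths, predicted_probs):
    """SMR = observed deaths / sum of model-predicted death probabilities.

    An SMR above 1 means more deaths were observed than the case-mix
    model expects for these patients.
    """
    expected_deaths = sum(predicted_probs)
    return observed_deaths / expected_deaths

# Invented example: 3 deaths among 5 admissions, each with a
# SAPS II-style predicted 30-day mortality probability.
probs = [0.10, 0.45, 0.80, 0.30, 0.60]   # expected deaths = 2.25
print(round(standardized_mortality_ratio(3, probs), 2))  # prints 1.33
```

    In practice a register such as the DID would also attach confidence intervals to the ratio, but the point estimate is exactly this observed-over-expected quotient.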

  5. Open Source Vulnerability Database Project

    Directory of Open Access Journals (Sweden)

    Jake Kouns

    2008-06-01

    Full Text Available This article introduces the Open Source Vulnerability Database (OSVDB) project, which manages a global collection of computer security vulnerabilities, available for free use by the information security community. This collection contains information on known security weaknesses in operating systems, software products, protocols, hardware devices, and other infrastructure elements of information technology. The OSVDB project is intended to be the centralized global open source vulnerability collection on the Internet.

  6. ABCdb: an ABC transporter database.

    Science.gov (United States)

    Quentin, Y; Fichant, G

    2000-10-01

    We present the first release of a database devoted to the ATP-binding cassette (ABC) protein domains (ABCdb). The ABC proteins are involved in a wide variety of physiological processes in Archaea, Bacteria and Eukaryota, where they are encoded by large families of paralogous genes. The majority of ABC domains energize the transport of compounds across membranes. In bacteria, ABC transporters are involved in the uptake of a wide range of molecules and in mechanisms of virulence and antibiotic resistance. In eukaryotes, most of them are involved in drug resistance, and in human cells many are associated with diseases. Sequence analysis reveals that members of the ABC superfamily can be organized into sub-families and suggests that they have diverged from common ancestral forms. In this release, ABCdb includes the inventory and assembly of the ABC transporter systems of completely sequenced genomes. In addition to the protein entries, the database comprises information on functional domains, sequence motifs, predicted trans-membrane segments, and signal peptides. It also includes a classification in sub-families of the ABC systems as well as a classification of the different partners of the systems. Evolutionary trees and specific sequence patterns are provided for each sub-family. The database is endowed with a powerful query system and is interfaced with the blastP2 program for similarity searches. ABCdb has been developed in the ACeDB format, a database system developed by Jean Thierry-Mieg and Richard Durbin. ABCdb can be accessed via the World Wide Web (http://ir2lcb.cnrs-mrs.fr/ABCdb/).

  7. Datamining on distributed medical databases

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak

    2004-01-01

    This Ph.D. thesis focuses on clustering techniques for Knowledge Discovery in Databases. Various data mining tasks relevant for medical applications are described and discussed. A general framework which combines data projection and data mining and interpretation is presented. An overview...... of various data projection techniques is offered with the main stress on applied Principal Component Analysis. For clustering purposes, various Generalized Gaussian Mixture models are presented. Further the aggregated Markov model, which provides the cluster structure via the probabilistic decomposition...

  8. Central Asia Active Fault Database

    Science.gov (United States)

    Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

    2014-05-01

    The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to the development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies, including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high-resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers.
The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late
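The kind of attribute-table query the abstract describes — storing fault parameters and querying them for hazard studies — can be sketched outside ArcGIS with Python's stdlib sqlite3. The table layout, field names, and all values below are hypothetical illustrations, not the actual Central Asia database schema.

```python
import sqlite3

# Hypothetical attribute table for active faults; field names are
# illustrative, not the actual Central Asia Active Fault Database schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE faults (
        name TEXT, slip_sense TEXT, slip_rate_mm_yr REAL, trench_site TEXT
    )
""")
conn.executemany(
    "INSERT INTO faults VALUES (?, ?, ?, ?)",
    [
        ("Darvaz-Karakul", "left-lateral", 10.0, "site A"),  # made-up values
        ("Example thrust", "thrust", 2.5, None),
    ],
)

# Query faults relevant to a seismic-hazard study: known slip sense
# and a slip rate above some threshold.
rows = conn.execute(
    "SELECT name, slip_rate_mm_yr FROM faults "
    "WHERE slip_sense != 'unknown' AND slip_rate_mm_yr >= ?",
    (5.0,),
).fetchall()
print(rows)  # -> [('Darvaz-Karakul', 10.0)]
```

In a real GIS each row would also carry geometry, so the same query could drive both mapping and tabular analysis.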

  9. The CMS Condition Database system

    CERN Document Server

    Govi, Giacomo Maria; Ojeda-Sandonis, Miguel; Pfeiffer, Andreas; Sipos, Roland

    2015-01-01

    The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of the sub-systems involved set tight requirements for handling the Conditions. In the last two years the collaboration has put effort into the re-design of the Condition Database system, with the aim of improving the scalability and the operability for the data taking starting in 2015. The re-design has focused on simplifying the architecture, using the lessons learned during the operation of the previous data-taking period. In the new system the relational features of the database schema are mainly exploited to handle the metadata (Tag and Interval of Validity), allowing for a limited and controlled set of queries. The bulk condition data (Payloads) are stored as unstructured binary data, allowing storage in a single table with a common layout for all of the condition data types. In this presentation, we describe the full architecture of the system, including the serv...
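The split the abstract describes — relational metadata (Tag and Interval of Validity) plus payloads kept as unstructured binaries in one common table — can be sketched with Python's stdlib sqlite3. The table names, columns, and content-addressing scheme below are illustrative assumptions, not CMS's actual schema.

```python
import sqlite3
import zlib

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Metadata kept relational: a Tag groups Intervals of Validity (IOVs).
    CREATE TABLE tag (name TEXT PRIMARY KEY, object_type TEXT);
    CREATE TABLE iov (tag_name TEXT, since INTEGER, payload_hash TEXT);
    -- Payloads are opaque binary blobs in a single common table,
    -- with one layout regardless of the condition data type.
    CREATE TABLE payload (hash TEXT PRIMARY KEY, data BLOB);
""")

def store(tag, since, obj_type, raw):
    # Content-addressed payload; CRC32 stands in for a real hash here.
    h = format(zlib.crc32(raw), "08x")
    conn.execute("INSERT OR IGNORE INTO payload VALUES (?, ?)", (h, raw))
    conn.execute("INSERT OR IGNORE INTO tag VALUES (?, ?)", (tag, obj_type))
    conn.execute("INSERT INTO iov VALUES (?, ?, ?)", (tag, since, h))

store("ExamplePedestals_v1", 1000, "Pedestals", b"\x00\x01binary-payload")

# One of the "limited and controlled" queries: the payload valid for a
# given tag at a given run (latest IOV at or before it).
row = conn.execute(
    "SELECT p.data FROM iov i JOIN payload p ON p.hash = i.payload_hash "
    "WHERE i.tag_name = ? AND i.since <= ? ORDER BY i.since DESC LIMIT 1",
    ("ExamplePedestals_v1", 1500),
).fetchone()
```

The design choice worth noting is that the relational engine only ever resolves metadata; the blob column is never interpreted by SQL, which is what makes one table serve every condition data type.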

  10. Development of a GIS Snowstorm Database

    Science.gov (United States)

    Squires, M. F.

    2010-12-01

    This paper describes the development of a GIS Snowstorm Database (GSDB) at NOAA’s National Climatic Data Center. The snowstorm database is a collection of GIS layers and tabular information for 471 snowstorms between 1900 and 2010. Each snowstorm has undergone automated and manual quality control. The beginning and ending date of each snowstorm is specified. The original purpose of this data was to serve as input for NCDC’s new Regional Snowfall Impact Scale (ReSIS). However, this data is being preserved and used to investigate the impacts of snowstorms on society. GSDB is used to summarize the impact of snowstorms on transportation (interstates) and various classes of facilities (roads, schools, hospitals, etc.). GSDB can also be linked to other sources of impacts such as insurance loss information and Storm Data. Thus the snowstorm database is suited for many different types of users including the general public, decision makers, and researchers. This paper summarizes quality control issues associated with using snowfall data, methods used to identify the starting and ending dates of a storm, and examples of the tables that combine snowfall and societal data.

  11. Summary of earthquake experience database

    International Nuclear Information System (INIS)

    1999-01-01

    Strong-motion earthquakes frequently occur throughout the Pacific Basin, where power plants or industrial facilities are included in the affected areas. By studying the performance of these earthquake-affected (or database) facilities, a large inventory of various types of equipment installations can be compiled that have experienced substantial seismic motion. The primary purposes of the seismic experience database are summarized as follows: to determine the most common sources of seismic damage, or adverse effects, on equipment installations typical of industrial facilities; to determine the thresholds of seismic motion corresponding to various types of seismic damage; to determine the general performance of equipment during earthquakes, regardless of the levels of seismic motion; to determine minimum standards in equipment construction and installation, based on past experience, to assure the ability to withstand anticipated seismic loads. To summarize, the primary assumption in compiling an experience database is that the actual seismic hazard to industrial installations is best demonstrated by the performance of similar installations in past earthquakes

  12. Oil and gas field database

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young In; Han, Jung Kuy [Korea Institute of Geology Mining and Materials, Taejon (Korea)

    1998-12-01

    As agreed by the Second Meeting of the Expert Group of Minerals and Energy Exploration and Development in Seoul, Korea, 'The Construction of Database on the Oil and Gas Fields in the APEC Region' is now under way as a GEMEED database project for 1998. This project is supported by Korean government funds and the cooperation of GEMEED colleagues and experts. During this year, we have constructed the home page menu (topics) and added the data items on the oil and gas fields. These items include name of field, discovery year, depth, the number of wells, average production (b/d), cumulative production, and API gravity. The web site shows that the total number of oil and gas fields in the APEC region is 47,201. The number of oil and gas fields by member economies is shown in the table. World oil and gas statistics including reserve, production, consumption, and trade information were added to the database for users' convenience. (author). 13 refs., tabs., figs.

  13. The Danish Bladder Cancer Database

    DEFF Research Database (Denmark)

    Hansen, Erik; Larsson, Heidi Jeanet; Nørgaard, Mette

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Bladder Cancer Database (DaBlaCa-data) is to monitor the treatment of all patients diagnosed with invasive bladder cancer (BC) in Denmark. STUDY POPULATION: All patients diagnosed with BC in Denmark from 2012 onward were included in the study. Results......-intended radiation therapy. DESCRIPTIVE DATA: One-year mortality was 28% (95% confidence interval [CI]: 15-21). One-year cancer-specific mortality was 25% (95% CI: 22-27%). One-year mortality after cystectomy was 14% (95% CI: 10-18). Ninety-day mortality after cystectomy was 3% (95% CI: 1-5) in 2013. One......-year mortality following curative-intended radiation therapy was 32% (95% CI: 24-39) and 1-year cancer-specific mortality was 23% (95% CI: 16-31) in 2013. CONCLUSION: This preliminary DaBlaCa-data report showed that the treatment of MIBC in Denmark overall meets high international academic standards. The database...

  14. Resolving Questioned Paternity Issues Using a Philippine Genetic Database

    Directory of Open Access Journals (Sweden)

    Maria Corazon De Ungria

    2002-06-01

    Full Text Available The utility of the Philippine genetic database consisting of seven Short Tandem Repeat (STR) markers for testing of ten questioned paternity cases was investigated. The markers used were HUMvWA, HUMTH01, HUMCSF1PO, HUMFOLP23, D8S306, HUMFES/FPS, and HUMF13A01. These markers had a combined Power of Paternity Exclusion of 99.17%. Due to the gravity of some cases handled in the laboratory, routine procedures must be assessed to determine the capacity of the analysis to exclude a non-father or predict paternity. Clients showed a preference for testing only father and child to lower costs and reduce conflicts, particularly when the mother objects to the conduct of DNA tests, or when she is deceased or cannot be located. The Probability of Paternity was calculated with and without the mother’s profile in each of the cases. In all instances, results were more informative when the mother’s DNA profile was included. Moreover, variations in the allelic distribution of five STR markers among eight Caucasian, one African-American, and two Amerindian (Argentina) populations resulted in significant differences in Probability of Paternity estimates compared to those calculated using the Philippine database. Based on the results of the present study, it is recommended that tests on alleged father-child samples be performed to screen for at least two mismatches. In the absence of these mismatches, further analysis that includes the mother’s DNA profile is recommended. Moreover, it is recommended that a Philippine genetic database be used for DNA-based paternity testing in the Philippines.
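The combined Power of Exclusion quoted above (99.17% over seven markers) follows the standard rule for independent markers: PE_combined = 1 − Π(1 − PE_i). A minimal sketch of that arithmetic, where the seven marker names come from the study but the per-marker exclusion powers are made-up illustrations, not the published values:

```python
# Combined Power of Exclusion across independent STR markers:
# PE_combined = 1 - prod(1 - PE_i)
def combined_pe(per_marker_pe):
    remaining = 1.0  # probability that no marker excludes
    for pe in per_marker_pe:
        remaining *= (1.0 - pe)
    return 1.0 - remaining

# Illustrative per-marker exclusion powers (NOT the published values;
# only the marker names come from the study).
pe_values = {
    "HUMvWA": 0.62, "HUMTH01": 0.55, "HUMCSF1PO": 0.48,
    "HUMFOLP23": 0.45, "D8S306": 0.58, "HUMFES/FPS": 0.44,
    "HUMF13A01": 0.50,
}
print(round(combined_pe(pe_values.values()), 4))  # -> 0.9942
```

The multiplicative form explains why even moderately informative individual markers combine into a high overall exclusion power, close to the 99.17% the study reports.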

  15. ISC-EHB: Reinventing the EHB Earthquake Database

    Science.gov (United States)

    Engdahl, E. R.; Weston, J. M.; Harris, J.; Di Giacomo, D.; Storchak, D. A.

    2016-12-01

    The EHB database, originally developed with procedures described by Engdahl, Van der Hilst & Buland (1998), currently ends in 2008. The aim is to expand and recreate the EHB database, in collaboration with the International Seismological Centre (ISC), to produce the ISC-EHB. We begin with events in the modern period (2000-2014) and apply new and more rigorous procedures for event selection, data preparation, processing, and relocation. The ISC-EHB criteria select events from the ISC Bulletin that have more than 15 time-defining stations at teleseismic distances (>28°), with a secondary teleseismic azimuth gap of 3.75 (Di Giacomo & Storchak, 2016). These criteria minimize the location bias produced by 3D Earth structure, and select many events that are relatively well located in any given region. There are several processing steps: (1) EHB software relocates all the events using ISC starting depths; (2) near-station and secondary phase arrival residuals are reviewed and a depth is adopted or assigned according to best fit, and in some instances depths may be reassigned based on other sources (e.g., USGS broadband depths); (3) all events are relocated with their new depths and plotted in subduction zone cross sections, along with events from the ISC-GEM catalogue for comparison; (4) these plots are used to confirm or modify weakly constrained depths; (5) using the final EHB depths, the "ISCloc" locator (Bondar & Storchak, 2011) is used to relocate the events. The new ISC-EHB database will be most useful for global seismicity studies and high-frequency global tomographic inversions. This will be facilitated by online access to the ISC-EHB Catalogue and Bulletin via the ISC, and will include maps and cross sections of the seismicity in subduction zones. Example maps and cross sections for events in years 2000-2003 will be presented.
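The event-selection step described above amounts to a filter over Bulletin events on station count and azimuthal coverage. A minimal sketch, in which the field names, the record layout, and the azimuth-gap threshold are illustrative assumptions rather than the ISC's actual data model:

```python
# Sketch of an ISC-EHB-style event selection: keep events with enough
# time-defining stations at teleseismic distances and a small enough
# secondary azimuth gap.  Field names and the gap threshold are
# assumptions for illustration, not the actual ISC criteria values.
MIN_TELESEISMIC_DEFINING = 15
MAX_SECONDARY_AZ_GAP_DEG = 175.0  # assumed threshold

def select_events(events):
    return [
        ev for ev in events
        if ev["n_teleseismic_defining"] > MIN_TELESEISMIC_DEFINING
        and ev["secondary_azimuth_gap"] <= MAX_SECONDARY_AZ_GAP_DEG
    ]

# Hypothetical Bulletin entries.
candidates = [
    {"id": "ev1", "n_teleseismic_defining": 40, "secondary_azimuth_gap": 90.0},
    {"id": "ev2", "n_teleseismic_defining": 12, "secondary_azimuth_gap": 60.0},
    {"id": "ev3", "n_teleseismic_defining": 30, "secondary_azimuth_gap": 200.0},
]
selected = select_events(candidates)  # only "ev1" passes both criteria
```

Filtering on the *secondary* azimuth gap (the largest gap when any one station is removed) guards against locations that look well constrained only because of a single station.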

  16. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of distributed, platform-independent applications, providing a robust set of methods to access databases, used to create software components on the server side as well as on the client side. Analyzing the evolution of Java tools for data access, we notice that these tools evolved from simple methods that permitted querying, inserting, updating and deleting data to advanced implementations such as distributed transactions, cursors and batch files. Client-server architectures allow, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries to be issued against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API will be described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing the way each step has to be accomplished and the expected results. In each step there are evaluations of the characteristics of the database systems and the way the JDBC programming interface adapts to each one. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between them and the SQL types, but also the methods that allow conversion between different types of data through the methods of the ResultSet object. Next, starting from the role of metadata and studying the Java programming interfaces that allow the querying of result sets, we will describe the advanced features of data mining with JDBC. As an alternative to result sets, the RowSets add new functionalities that
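The basic workflow the article builds on — open a connection, create a statement, execute SQL, iterate the result set — can be sketched with Python's stdlib sqlite3 as a driverless stand-in for JDBC (a runnable Java example would need a live DBMS and driver JAR). The table and data are hypothetical.

```python
import sqlite3

# The connect / statement / execute / result-set pattern described
# above, using sqlite3 in place of JDBC.  Table and rows are made up.
conn = sqlite3.connect(":memory:")           # 1: open a connection
cur = conn.cursor()                          # 2: create a statement/cursor
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO users (name) VALUES (?)",
                [("Ana",), ("Bogdan",)])     # parameterized, like PreparedStatement
conn.commit()

cur.execute("SELECT id, name FROM users ORDER BY id")  # 3: execute a query
names = [name for _id, name in cur]          # 4: walk the result set
conn.close()
print(names)  # -> ['Ana', 'Bogdan']
```

The correspondence is rough but direct: `connect` plays the role of `DriverManager.getConnection`, the cursor that of `Statement`/`PreparedStatement`, and iterating the cursor that of looping over `ResultSet.next()`.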

  17. The Amma-Sat Database

    Science.gov (United States)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    The African Monsoon Multidisciplinary Analysis (AMMA) project is a French initiative which aims at identifying and analysing in detail the multidisciplinary, multi-scale processes that lead to a better understanding of the physical mechanisms linked to the African Monsoon. The main components of the African Monsoon are: Atmospheric Dynamics, the Continental Water Cycle, Atmospheric Chemistry, and Oceanic and Continental Surface Conditions. Satellites contribute to various objectives of the project, both for process analysis and for large-scale, long-term studies: some series of satellites (METEOSAT, NOAA, ...) have been flown for more than 20 years, ensuring good-quality monitoring of some of the West African atmosphere and surface characteristics. Moreover, several recent missions and several upcoming projects will strongly improve and complement this survey. The AMMA project offers an opportunity to develop the exploitation of satellite data and to foster collaboration between specialist and non-specialist users. For this purpose, databases are being developed to collect all past and future satellite data related to the African Monsoon. It will then be possible to compare different types of data at different resolutions, and to validate satellite data against in situ measurements or numerical simulations. The AMMA-SAT database's main goal is to offer easy access to satellite data to the AMMA scientific community. The database contains geophysical products estimated from operational or research algorithms and covering the different components of the AMMA project. Nevertheless, the choice has been made to group data within pertinent scales rather than within their thematic areas. For this purpose, five regions of interest were defined for extracting the data: an area covering the tropical Atlantic and Africa for large-scale studies, an area covering West Africa for mesoscale studies, and three local areas surrounding sites of in situ observations.
Within each of these regions satellite data are projected on

  18. Ship collision with iceberg database

    Energy Technology Data Exchange (ETDEWEB)

    Hill, B.T. [National Research Council of Canada, St. John' s, NL (Canada). Inst. for Ocean Technology

    2006-11-15

    Approximately 20 per cent of collisions between icebergs, steam ships and motor vessels since 1850 have resulted in sinkings. The available data indicates that most sinkings were due to some kind of indirect impact rather than a head-on collision. This paper presented the newly augmented Microsoft Access Database of Ship Collisions with Icebergs, which includes more than 670 events over 200 years, most of which occurred on the Grand Banks. Other collisions occurred further afield in the Arctic, off Greenland and in the fjords of Alaska. The operation, search categories and data fields of the database were described, along with various trends of collisions, scope of damage and environmental factors involved. Based on an estimate of the number of voyages over the Grand Banks, a probability of collision was derived from the number of cargo ship collisions over the past several years. The Microsoft Access template was provided by the Canadian Hydraulic Centre, which had developed it for its Ice Regime Database to describe sea ice and ship interactions. This template was adapted and modified for iceberg collisions. Where possible, the database was augmented to include information about the nature of the damage, the weather and sea state, the ice conditions, iceberg size, the vessel route and location at the time of collision, and vessel characteristics. The purpose of the database is to provide operators and regulators with an assessment of the frequency of collisions and the environmental factors that played a role at the time of the collision. The database provides a basis to undertake risk analysis for vessels entering a given area and provides a better understanding of the conditions under which collisions are likely to occur. It was concluded that although the trend of collisions has improved over the years with better observation and detection techniques, collisions still occur. Reduced visibility by fog, precipitation or low light conditions were found to be
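The per-voyage collision probability mentioned above reduces to collisions divided by voyages, which can then feed a simple risk estimate over future traffic. A sketch of that arithmetic; the counts and the Poisson assumption are illustrative, since the abstract does not give the actual voyage estimate.

```python
import math

# Per-voyage collision probability estimated from counts, plus the
# chance of at least one collision over N future voyages under a
# Poisson assumption.  All numbers are hypothetical, not from the
# iceberg collision database.
def collision_probability(n_collisions, n_voyages):
    return n_collisions / n_voyages

def prob_at_least_one(rate_per_voyage, n_future_voyages):
    # P(>=1 collision) = 1 - exp(-lambda * N)
    return 1.0 - math.exp(-rate_per_voyage * n_future_voyages)

p = collision_probability(57, 100_000)  # hypothetical counts
risk = prob_at_least_one(p, 500)        # risk over 500 future voyages
```

This is the kind of base-rate calculation the database is meant to support, with the environmental factors (fog, sea state, ice conditions) refining the per-voyage rate for a specific area.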

  19. The Danish Prostate Cancer Database

    Directory of Open Access Journals (Sweden)

    Nguyen-Nielsen M

    2016-10-01

    Full Text Available Mary Nguyen-Nielsen,1,2 Søren Høyer,3 Søren Friis,4 Steinbjørn Hansen,5 Klaus Brasso,6 Erik Breth Jakobsen,7 Mette Moe,8 Heidi Larsson,9 Mette Søgaard,9 Anne Nakano,9,10 Michael Borre1 1Department of Urology, Aarhus University Hospital, Aarhus, 2Diet, Genes and Environment, Danish Cancer Society Research Center, Copenhagen, 3Department of Pathology, Aarhus University Hospital, Aarhus, 4Danish Cancer Society Research Centre, Danish Cancer Society, Copenhagen, 5Department of Oncology, Odense University Hospital, Odense, 6Copenhagen Prostate Cancer Center and Department of Urology, Rigshospitalet, University of Copenhagen, Copenhagen, 7Department of Urology, Næstved Hospital, Næstved, 8Department of Oncology, Aalborg University Hospital, Aalborg, 9Department of Clinical Epidemiology, Aarhus University Hospital, 10Competence Centre for Health Quality and Informatics (KCKS-Vest), Aarhus, Denmark Aim of database: The Danish Prostate Cancer Database (DAPROCAdata) is a nationwide clinical cancer database that has prospectively collected data on patients with incident prostate cancer in Denmark since February 2010. The overall aim of the DAPROCAdata is to improve the quality of prostate cancer care in Denmark by systematically collecting key clinical variables for the purposes of health care monitoring, quality improvement, and research. Study population: All Danish patients with histologically verified prostate cancer are included in the DAPROCAdata. Main variables: The DAPROCAdata registers clinical data and selected characteristics for patients with prostate cancer at diagnosis. Data are collected from the linkage of nationwide health registries and supplemented with online registration of key clinical variables by treating physicians at urological and oncological departments. Main variables include Gleason scores, cancer staging, prostate-specific antigen values, and therapeutic measures (active surveillance, surgery, radiotherapy, endocrine

  20. The Danish Cardiac Rehabilitation Database

    Directory of Open Access Journals (Sweden)

    Zwisler AD

    2016-10-01

    The Regional Research Unit, Region Zealand, Roskilde, 19Department of Medicine, Cardiovascular Research Unit, Regional Hospital Herning, Herning, Denmark Aim of database: The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). Study population: Hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date onward. Main variables: Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected are related to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Descriptive data: Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. Conclusion: The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD. Registration of data is mandatory at both the patient and the program level. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients.
Keywords: secondary prevention, coronary heart disease, cardiovascular prevention, clinical quality registry
Keywords: secondary prevention, coronary heart disease, cardiovascular prevention, clinical quality registry