WorldWideScience

Sample records for points spatial database

  1. Efficient Storage of Large Volume Spatial and Temporal Point-Data in an Object-Oriented Database

    National Research Council Canada - National Science Library

    Oliver, David

    2002-01-01

    Data mining applications must deal with large volumes of data. In particular, Spatio-Temporal Information Systems must efficiently store and access potentially very large quantities of spatial and temporal data...

  2. Geodetic Control Points - Multi-State Control Point Database

    Data.gov (United States)

NSGIC State | GIS Inventory — The Multi-State Control Point Database (MCPD) is a database of geodetic and mapping control covering Idaho and Montana. The control points were submitted by registered land...

  3. Conditioning in spatial point processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus

This tutorial provides an introduction to conditioning in spatial point processes, or so-called Palm distributions. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally we discuss some examples of specific models and applications...

  4. Enabling Semantic Queries Against the Spatial Database

    Directory of Open Access Journals (Sweden)

    PENG, X.

    2012-02-01

The spatial database based upon the object-relational database management system (ORDBMS) has the merits of a clear data model, good operability and high query efficiency, which is why it has been widely used in spatial data organization and management. However, it cannot express the semantic relationships among geospatial objects, so query results often fail to meet the user's requirements well. This paper therefore attempts to combine Semantic Web technology with the spatial database in order to make up for the traditional database's disadvantages. In this way, on the one hand, users can take advantage of the ORDBMS to store and manage spatial data; on the other hand, if the spatial database is published in Semantic Web form, users can describe a query more concisely, using a cognitive pattern similar to that of daily life. As a consequence, this methodology makes the benefits of both the Semantic Web and the object-relational database (ORDB) available. The paper systematically discusses the semantically enriched spatial database's architecture, key technologies and implementation. Subsequently, we demonstrate spatial semantic queries via a practical prototype system. The query results indicate that the method used in this study is feasible.

  5. Modeling Spatial Data within Object Relational-Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-03-01

Spatial data refer to elements that help place a certain object in a certain area: latitude, longitude, points, geometric figures represented by points, and so on. However, when translating these elements into data that can be stored in a computer, it all comes down to numbers. The interesting part that requires attention is how to store them in order to support fast and varied spatial queries. This is where the DBMS (Database Management System) that contains the database comes in. In this paper, we analyze and compare two object-relational DBMSs that work with spatial data: Oracle and PostgreSQL.
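
As an illustration of the kind of spatial query such object-relational systems are compared on, the sketch below runs a simple radius search against a PostgreSQL/PostGIS table from Python. The connection parameters, the poi table and its columns are assumptions made for illustration and are not taken from the paper.

```python
# Sketch: a distance-based spatial query against PostgreSQL/PostGIS via psycopg2.
# The connection settings and the "poi" table (id, name, geom) are hypothetical.
import psycopg2

conn = psycopg2.connect(dbname="gisdb", user="gis", password="secret", host="localhost")
cur = conn.cursor()

sql = """
    SELECT id, name,
           ST_Distance(geom::geography, ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography) AS dist_m
    FROM poi
    WHERE ST_DWithin(geom::geography, ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography, 500)
    ORDER BY dist_m
    LIMIT 10;
"""
lon, lat = 26.10, 44.43  # query location (longitude, latitude)
cur.execute(sql, (lon, lat, lon, lat))
for poi_id, name, dist_m in cur.fetchall():
    print(poi_id, name, round(dist_m, 1))

cur.close()
conn.close()
```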

  6. Clustering with Obstacles in Spatial Databases

    OpenAIRE

    El-Zawawy, Mohamed A.; El-Sharkawi, Mohamed E.

    2009-01-01

Clustering large spatial databases is an important problem, which tries to find the densely populated regions in a spatial area to be used in data mining, knowledge discovery, or efficient information retrieval. However, most algorithms have ignored the fact that physical obstacles such as rivers, lakes, and highways exist in the real world and could thus affect the result of the clustering. In this paper, we propose CPO, an efficient clustering technique to solve the problem of clustering in ...

  7. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

This paper describes methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where one simulates backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and thus can be used as a diagnostic for assessing the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.

  8. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    2010-01-01

In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
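
A minimal sketch of the independent-thinning idea mentioned for the Cox case: points of a dominating homogeneous Poisson process are retained with probability proportional to a target intensity, and the retained points again form a Poisson process. This is a generic illustration under assumed intensities, not the authors' dependent-thinning algorithm for Markov point processes.

```python
# Sketch: independent thinning of a homogeneous Poisson process on [0,1]^2.
# Retaining each point with probability lambda_target(u) / lambda_max yields a
# Poisson process with intensity lambda_target (standard thinning argument).
import numpy as np

rng = np.random.default_rng(42)

lambda_max = 200.0                              # dominating constant intensity
lambda_target = lambda x, y: 200.0 * x * y      # assumed target intensity on [0,1]^2

# 1. Simulate a homogeneous Poisson process with intensity lambda_max.
n = rng.poisson(lambda_max)                     # the unit square has area 1
x, y = rng.random(n), rng.random(n)

# 2. Thin: keep each point independently with probability lambda_target/lambda_max.
keep = rng.random(n) < lambda_target(x, y) / lambda_max
x_thin, y_thin = x[keep], y[keep]

print(f"{n} points before thinning, {keep.sum()} after")
```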

  9. Residual analysis for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Turner, R.; Møller, Jesper

    process. Residuals are ascribed to locations in the empty background, as well as to data points of the point pattern. We obtain variance formulae, and study standardised residuals. There is also an analogy between our spatial residuals and the usual residuals for (non-spatial) generalised linear models...... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases....
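
For orientation, the innovation and residual measures referred to in this record (and in the "Properties of residuals" records below) are commonly defined as follows; this is a sketch of the standard definitions, not a quotation from the paper.

```latex
% Sketch of the standard definitions: for a point pattern x observed in a window W,
% with (Papangelou) conditional intensity \lambda(u \mid x) and fitted counterpart
% \hat\lambda(u \mid x), the innovation and residual measures of a region B \subseteq W are
I(B) = \sum_{x_i \in \mathbf{x} \cap B} 1 \;-\; \int_B \lambda(u \mid \mathbf{x}) \, \mathrm{d}u ,
\qquad
R(B) = \sum_{x_i \in \mathbf{x} \cap B} 1 \;-\; \int_B \hat\lambda(u \mid \mathbf{x}) \, \mathrm{d}u .
% The innovations have mean zero, which is the property exploited for diagnostics.
```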

  10. Spatial Database Modeling for Indoor Navigation Systems

    Science.gov (United States)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data. Increasingly, similar applications are used inside buildings. Therefore it is important to find a proper model for an indoor spatial database. The development of indoor navigation systems should utilize advanced teleinformation, geoinformatics, geodetic and cartographical knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. Presenting some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes in the right way. An original solution for an indoor data model, created by the authors on the basis of the BISDM model, is presented. Its purpose is to expand the opportunities for use in indoor navigation.

  11. Parametric methods for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper

(This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) 1 Introduction: This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has over the last two decades been supplemented by likelihood-based methods for parametric spatial point process models... is studied in Section 4, and Bayesian inference in Section 5. On one hand, as the development in computer technology and computational statistics continues, computationally intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial...

  12. Modern statistics for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus

We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs, and Cox process models, diagnostic tools and model checking, Markov chain Monte Carlo algorithms, computational methods for likelihood-based inference, and quick non-likelihood approaches to inference.

  13. Modern Statistics for Spatial Point Processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus

    2007-01-01

We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs and Cox process models, diagnostic tools and model checking, Markov chain Monte Carlo algorithms, computational methods for likelihood-based inference, and quick non-likelihood approaches to inference.

  14. DESIGN AND CONSTRUCTION OF A FOREST SPATIAL DATABASE: AN APPLICATION

    Directory of Open Access Journals (Sweden)

    Turan Sönmez

    2006-11-01

The General Directorate of Forests (GDF) has not yet created a spatial forest database with which to manage forests and catch up with the developed countries in forestry. The lack of a spatial forest database results in redundant collection of spatial data and communication problems among forestry organizations, and it leaves Turkish forestry behind in the informatics era. To solve these problems, the GDF should establish a spatial forest database supported by a Geographic Information System (GIS). Designing a GIS-supported spatial database that provides accurate, timely and current data/information for decision makers and operators in forestry, and developing a sample interface program to apply and monitor classical forest management plans, are paramount in the contemporary forest management planning process. This research is composed of three major stages: (i) a spatial prototype database design considering the requirements of the three hierarchical organizations of the GDF (regional directorate of forests, forest enterprise, and territorial division); (ii) a user interface program developed to apply and monitor classical management plans based on the designed database; (iii) the implementation of the designed database and its user interface in the Artvin Central Planning Unit.

  15. Capacity constrained assignment in spatial databases

    DEFF Research Database (Denmark)

    U, Leong Hou; Yiu, Man Lung; Mouratidis, Kyriakos

    2008-01-01

once) in M, (ii) the size of M is maximized (i.e., it comprises min{|P|, Σ_{q∈Q} q.k} pairs), and (iii) the total assignment cost (i.e., the sum of Euclidean distances within all pairs) is minimized. Thus, the CCA problem is to identify the assignment with the optimal overall quality; intuitively, the quality of q's service to p in a given (q, p) pair is anti-proportional to their distance. Although max-flow algorithms are applicable to this problem, they require the complete distance-based bipartite graph between Q and P. For large spatial datasets, this graph is expensive to compute and it may be too large to fit in main memory. Motivated by this fact, we propose efficient algorithms for optimal assignment that employ novel edge-pruning strategies, based on the spatial properties of the problem. Additionally, we develop approximate (i.e., suboptimal) CCA solutions that provide a trade-off between...
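
The CCA problem described above is a capacitated minimum-cost matching. The paper's spatial edge-pruning strategies are not reproduced here; the sketch below only illustrates the underlying optimisation on a tiny in-memory instance by replicating each provider q according to its capacity q.k and running the Hungarian algorithm. All coordinates and capacities are made up.

```python
# Sketch: brute-force capacity-constrained assignment on a toy instance.
# Each provider is duplicated according to its capacity, then customers are
# matched to provider copies so that total Euclidean distance is minimised.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
P = rng.random((20, 2))                 # customer locations
Q = rng.random((4, 2))                  # provider locations
k = np.array([6, 6, 5, 5])              # provider capacities q.k

# Expand providers by capacity so every copy can serve at most one customer.
Q_expanded = np.repeat(Q, k, axis=0)
owner = np.repeat(np.arange(len(Q)), k)

cost = cdist(P, Q_expanded)             # full distance matrix (fine at toy sizes)
rows, cols = linear_sum_assignment(cost)

for p_idx, c_idx in zip(rows, cols):
    print(f"customer {p_idx} -> provider {owner[c_idx]} (dist {cost[p_idx, c_idx]:.3f})")
print("total cost:", cost[rows, cols].sum())
```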

  16. Spatial Stochastic Point Models for Reservoir Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Syversveen, Anne Randi

    1997-12-31

    The main part of this thesis discusses stochastic modelling of geology in petroleum reservoirs. A marked point model is defined for objects against a background in a two-dimensional vertical cross section of the reservoir. The model handles conditioning on observations from more than one well for each object and contains interaction between objects, and the objects have the correct length distribution when penetrated by wells. The model is developed in a Bayesian setting. The model and the simulation algorithm are demonstrated by means of an example with simulated data. The thesis also deals with object recognition in image analysis, in a Bayesian framework, and with a special type of spatial Cox processes called log-Gaussian Cox processes. In these processes, the logarithm of the intensity function is a Gaussian process. The class of log-Gaussian Cox processes provides flexible models for clustering. The distribution of such a process is completely characterized by the intensity and the pair correlation function of the Cox process. 170 refs., 37 figs., 5 tabs.
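
To make the log-Gaussian Cox process construction mentioned above concrete, the sketch below simulates one on a grid: a Gaussian random field is drawn, exponentiated to give the random intensity, and points are then generated cell by cell from a Poisson distribution. The grid size, mean, and exponential covariance parameters are illustrative assumptions, not values from the thesis.

```python
# Sketch: simulate a log-Gaussian Cox process on [0,1]^2 via a grid approximation.
import numpy as np

rng = np.random.default_rng(1)
m = 32                                            # grid resolution (m x m cells)
cell = 1.0 / m
xs = (np.arange(m) + 0.5) * cell
X, Y = np.meshgrid(xs, xs)
coords = np.column_stack([X.ravel(), Y.ravel()])

# Gaussian random field with exponential covariance C(h) = sigma^2 * exp(-h / scale).
sigma2, scale, mu = 1.0, 0.1, 4.0
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = sigma2 * np.exp(-d / scale)
gauss = rng.multivariate_normal(mu * np.ones(m * m), cov + 1e-8 * np.eye(m * m))

# The intensity is the exponential of the Gaussian field; counts are Poisson per cell.
intensity = np.exp(gauss).reshape(m, m)
counts = rng.poisson(intensity * cell * cell)
print("expected points:", (intensity * cell * cell).sum(), "realised:", counts.sum())
```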

  17. A sequential point process model and Bayesian inference for spatial point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

We introduce a flexible spatial point process model for spatial point patterns exhibiting linear structures, without incorporating a latent line process. The model is given by an underlying sequential point process model, i.e. each new point is generated given the previous points. Under this model... points is such that the dependent cluster point is likely to occur closely to a previous cluster point. We demonstrate the flexibility of the model for producing point patterns with linear structures, and propose to use the model as the likelihood in a Bayesian setting when analyzing a spatial point pattern exhibiting linear structures but where the exact mechanism responsible for the formation of lines is unknown. We illustrate this methodology by analyzing two spatial point pattern data sets (locations of Bronze Age graves in Denmark and locations of mountain tops in Spain) without knowing which...

  18. Properties of residuals for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Møller, Jesper; Pakes, A.G.

For any point process in R^d that has a Papangelou conditional intensity λ, we define a random measure of ‘innovations' which has mean zero. When the point process model parameters are estimated from data, there is an analogous random measure of ‘residuals'. We analyse properties of the innovations and residuals, including first and second moments, conditional independence, a martingale property, lack of correlation, and marginal distributions.

  19. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related...
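
A rough sketch of the logistic regression device described above, for the simpler case of a Poisson process with log-linear intensity: observed points are labelled 1, independently generated dummy points 0, and a binomial GLM with offset -log(dummy intensity) recovers the intensity parameters. For a Gibbs process the regression would additionally involve interaction terms evaluated through the conditional intensity. The window, covariate and all numerical settings below are invented for illustration.

```python
# Sketch: estimate a log-linear intensity lambda(u) = exp(b0 + b1 * x) on the unit
# square via logistic regression with dummy points and an offset of -log(rho).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
b0_true, b1_true = 4.0, 2.0
lam = lambda x, y: np.exp(b0_true + b1_true * x)     # assumed true intensity
lam_max = np.exp(b0_true + b1_true)

# Simulate the data pattern by thinning a dominating homogeneous Poisson process.
n = rng.poisson(lam_max)
px, py = rng.random(n), rng.random(n)
keep = rng.random(n) < lam(px, py) / lam_max
px, py = px[keep], py[keep]

# Dummy points: homogeneous Poisson with intensity rho (here 4x the data size).
rho = 4 * len(px)
m = rng.poisson(rho)
dx, dy = rng.random(m), rng.random(m)

x_all = np.concatenate([px, dx])
y_resp = np.concatenate([np.ones(len(px)), np.zeros(m)])   # 1 = data, 0 = dummy
design = sm.add_constant(x_all)                            # columns: 1, x  -> (b0, b1)
offset = -np.log(rho) * np.ones(len(x_all))

fit = sm.GLM(y_resp, design, family=sm.families.Binomial(), offset=offset).fit()
print("estimated (b0, b1):", fit.params, " true:", (b0_true, b1_true))
```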

  20. Properties of residuals for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Møller, Jesper; Pakes, A. G.

    2008-01-01

For any point process in R^d that has a Papangelou conditional intensity λ, we define a random measure of ‘innovations' which has mean zero. When the point process model parameters are estimated from data, there is an analogous random measure of ‘residuals'. We analyse properties of the innovations and residuals, including first and second moments, conditional independence, a martingale property, and lack of correlation. Some large sample asymptotics are studied. We derive the marginal distribution of smoothed residuals by solving a distributional equivalence.

  1. EigenScape: A Database of Spatial Acoustic Scene Recordings

    Directory of Open Access Journals (Sweden)

    Marc Ciufo Green

    2017-11-01

The classification of acoustic scenes and events is an emerging area of research in the field of machine listening. Most of the research conducted so far uses spectral features extracted from monaural or stereophonic audio rather than spatial features extracted from multichannel recordings. This is partly due to the lack thus far of a substantial body of spatial recordings of acoustic scenes. This paper formally introduces EigenScape, a new database of fourth-order Ambisonic recordings of eight different acoustic scene classes. The potential applications of a spatial machine listening system are discussed before detailed information on the recording process and dataset is provided. A baseline spatial classification system using directional audio coding (DirAC) techniques is detailed and results from this classifier are presented. The classifier is shown to give good overall scene classification accuracy across the dataset, with 7 of 8 scenes classified with greater than 60% accuracy and an 11% improvement in overall accuracy compared to the use of Mel-frequency cepstral coefficient (MFCC) features. Further analysis of the results shows potential improvements to the classifier. It is concluded that the results validate the new database and show that spatial features can characterise acoustic scenes and as such are worthy of further investigation.
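
The abstract compares a DirAC-based spatial classifier against an MFCC baseline. As a point of reference, a conventional MFCC front end of the kind used for such baselines can be computed as in the sketch below; the file name is a placeholder and this is not the EigenScape processing chain itself.

```python
# Sketch: extract MFCC features from a (mono) scene recording with librosa,
# summarised by their mean and standard deviation over time.
import numpy as np
import librosa

y, sr = librosa.load("scene_example.wav", sr=None, mono=True)   # placeholder file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)              # shape: (20, n_frames)

features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)   # 40-dimensional summary vector for a downstream classifier
```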

  2. Database modeling to integrate macrobenthos data in Spatial Data Infrastructure

    Directory of Open Access Journals (Sweden)

    José Alberto Quintanilha

    2012-08-01

Coastal zones are complex areas that include marine and terrestrial environments. Besides their huge environmental wealth, they also attract humans because they provide food, recreation, business and transportation opportunities, among others. Some of the difficulties in managing these areas are related to their complexity, the diversity of interests, and the absence of standards for collecting and sharing data with the scientific community, public agencies and other stakeholders. Organizing, standardizing and sharing this information through a Web Atlas is essential to support planning and decision making. The construction of a spatial database integrating environmental data, to be used in a Spatial Data Infrastructure (SDI), is illustrated with a bioindicator of sediment quality. The models show the phases required to build the macrobenthos spatial database, using the Santos Metropolitan Region as a reference. It is concluded that, when working with environmental data, the structuring of knowledge in a conceptual model is essential for its subsequent integration into the SDI. During the modeling process it was noticed that methodological issues related to the collection process may hinder or prejudice the integration of data from different studies of the same area. The development of a database model, as presented in this study, can be used as a reference for further research with similar goals.

  3. Pointing Hand Stimuli Induce Spatial Compatibility Effects and Effector Priming

    Directory of Open Access Journals (Sweden)

    Akio eNishimura

    2013-04-01

The present study investigated the automatic influence of perceiving a picture that indicates another's action on one's own task performance, in terms of spatial compatibility and effector priming. Participants pressed left and right buttons with their left and right hands respectively, depending on the color of a central dot target. Preceding the target, a left or right hand stimulus (pointing either to the left or right with the index or little finger) was presented. In Experiment 1, with brief presentation of the pointing hand, a spatial compatibility effect was observed: responses were faster when the direction of the pointed finger and the response position were spatially congruent than when they were incongruent. The spatial compatibility effect was larger for the pointing index finger stimulus compared to the pointing little finger stimulus. Experiment 2 employed a longer duration of the pointing hand stimuli. In addition to the spatial compatibility effect for the pointing index finger, an effector priming effect was observed: responses were faster when the anatomical left/right identity of the pointing and response hands matched than when the pointing and response hands differed in left/right identity. The results indicate that, with sufficient processing time, both spatial/symbolic and anatomical features of a static body part implying another's action simultaneously influence different aspects of the perceiver's own action. Hierarchical coding, according to which an anatomical code is used only when a spatial code is unavailable, may not be applicable if stimuli as well as responses contain anatomical features.

  4. Spatial interactions database development for effective probabilistic risk assessment

    International Nuclear Information System (INIS)

    Liming, J. K.; Dunn, R. F.

    2008-01-01

In preparation for a subsequent probabilistic risk assessment (PRA) fire risk analysis update, the STP Nuclear Operating Company (STPNOC) is updating its spatial interactions database (SID). This work is being performed to support updating the spatial interactions analysis (SIA) initially performed for the original South Texas Project Electric Generating Station (STPEGS) probabilistic safety assessment (PSA) and updated in the STPEGS Level 2 PSA and IPE Report. SIA is a large-scope screening analysis performed for nuclear power plant PRA that serves as a prerequisite basis for more detailed location-dependent, hazard-specific analyses in the PRA, such as fire risk analysis, flooding risk analysis, etc. SIA is required to support the 'completeness' argument for the PRA scope. The objectives of the current SID development effort are to update the spatial interactions analysis data, to the greatest degree practical, to be consistent with the following: the as-built plant as of December 31, 2007; the in-effect STPNOC STPEGS Units 1 and 2 PRA; the current technology and intent of NUREG/CR-6850 guidance for fire risk analysis database support; and the requirements for PRA SIA, including fire and flooding risk analysis, established by NRC Regulatory Guide 1.200 and the ASME PRA Standard (ASME RA-S-2002, updated through ASME RA-Sc-2007). This paper presents the approach and methodology for state-of-the-art SID development and applications, including an overview of the SIA process for nuclear power plant PRA. The paper shows how current relational database technology and existing, conventional station information sources can be employed to collect, process, and analyze spatial interactions data for the plant in an effective and efficient manner to meet the often challenging requirements of industry guidelines and standards such as NUREG/CR-6850, NRC Regulatory Guide 1.200, and ASME RA-S-2002 (updated through ASME RA-Sc-2007). This paper includes tables and figures illustrating how SIA

  5. Review of Spatial-Database System Usability: Recommendations for the ADDNS Project

    National Research Council Canada - National Science Library

    Abdalla, R. M; Niall, K. K

    2007-01-01

...) and three-dimensional (3D) visualizations. This report presents an overview of the basic concepts of GIS and spatial databases, provides an analytical usability evaluation and critically analyses different spatial-database applications...

  6. A tutorial on Palm distributions for spatial point processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we discuss some examples of Palm distributions for specific models and some applications.

  7. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel D.

    2015-01-01

The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements are rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, and therefore readily incorporated into, and combined with, other data analysis tools and frameworks, with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is...

  8. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel

    2016-01-01

The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements are rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, and therefore readily incorporated into, and combined with, other data analysis tools and frameworks, with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is described.
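
The PySESA API itself is not reproduced here; the sketch below only illustrates the general idea of moving from the spatial to the frequency domain for gridded elevation data, estimating a radially averaged power spectrum with NumPy. The synthetic surface, grid spacing and windowing choices are assumptions.

```python
# Sketch: radially averaged power spectrum of a gridded surface using the 2-D FFT.
import numpy as np

rng = np.random.default_rng(3)
n, dx = 256, 0.1                       # grid size and spacing (metres), assumed
z = rng.standard_normal((n, n))        # stand-in for detrended elevation data

window = np.hanning(n)[:, None] * np.hanning(n)[None, :]   # taper to limit leakage
Z = np.fft.fftshift(np.fft.fft2(z * window))
power = np.abs(Z) ** 2

# Radial wavenumber for each FFT bin.
k = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
KX, KY = np.meshgrid(k, k)
kr = np.hypot(KX, KY)

# Average the power within annular wavenumber bins.
bins = np.linspace(0, kr.max(), 50)
which = np.digitize(kr.ravel(), bins)
radial_spectrum = np.array([
    power.ravel()[which == i].mean() if np.any(which == i) else np.nan
    for i in range(1, len(bins))
])
print(radial_spectrum[:5])
```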

  9. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test ... for one group of point patterns, comparison of several groups of point patterns, test of dependence of components in a multi-type point pattern, and test of the Boolean assumption for random closed sets...

  10. Global anthropogenic heat flux database with high spatial resolution

    Science.gov (United States)

    Dong, Y.; Varquez, A. C. G.; Kanda, M.

    2017-02-01

    This study developed a top-down method for estimating global anthropogenic heat emission (AHE), with a high spatial resolution of 30 arc-seconds and temporal resolution of 1 h. Annual average AHE was derived from human metabolic heating and primary energy consumption, which was further divided into three components based on consumer sector. The first and second components were heat loss and heat emissions from industrial sectors equally distributed throughout the country and populated areas, respectively. The third component comprised the sum of emissions from commercial, residential, and transportation sectors (CRT). Bulk AHE from the CRT was proportionally distributed using a global population dataset, with a radiance-calibrated nighttime lights adjustment. An empirical function to estimate monthly fluctuations of AHE based on gridded monthly temperatures was derived from various Japanese and American city measurements. Finally, an AHE database with a global coverage was constructed for the year 2013. Comparisons between our proposed AHE and other existing datasets revealed that the problem of overestimation of AHE intensity in previous top-down models was mitigated by the separation of energy consumption sectors; furthermore, the problem of AHE underestimation at central urban areas was solved by the nighttime lights adjustment. A strong agreement in the monthly profiles of AHE between our database and other bottom-up datasets further proved the validity of the current methodology. Investigations of AHE for the 29 largest urban agglomerations globally highlighted that the share of heat emissions from CRT sectors to the total AHE at the city level was 40-95%; whereas that of metabolic heating varied with the city's level of development by a range of 2-60%. A negative correlation between gross domestic product (GDP) and the share of metabolic heating to a city's total AHE was found. Globally, peak AHE values were found to occur between December and February, while

  11. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

is assumed to be of log-linear form β + θ⊤z(u), where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its finite-sample properties in comparison with the maximum first-order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions, as well as settings where z is completely or only partially known.

  12. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function, and in a second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests.

  13. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    2009-01-01

The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function, and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests.

  14. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

This paper analyses different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines in addition to the standard linear shape functions usually applied. For the small-strain problem of a vibrating bar, the best results are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. The properties of different interpolation functions are analysed using numerical examples, including the classical cantilevered beam problem.
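
As a small illustration of the interpolation types compared in the paper, the sketch below evaluates the standard linear hat function and a cubic B-spline weight as functions of the normalised distance between a material point and a grid node; the quadratic-element shape functions of the paper are not reproduced. This is a generic sketch, not the authors' implementation.

```python
# Sketch: two grid-node weight functions used for material-point interpolation,
# as functions of the normalised distance r = |x_p - x_i| / h between a
# material point and a grid node.
import numpy as np

def linear_hat(r):
    """Standard linear shape function with support |r| < 1."""
    r = np.abs(r)
    return np.where(r < 1.0, 1.0 - r, 0.0)

def cubic_bspline(r):
    """Cubic B-spline weight with support |r| < 2 (smoother and non-negative)."""
    r = np.abs(r)
    inner = 2.0 / 3.0 - r**2 + 0.5 * r**3          # 0 <= r < 1
    outer = (2.0 - r) ** 3 / 6.0                   # 1 <= r < 2
    return np.where(r < 1.0, inner, np.where(r < 2.0, outer, 0.0))

r = np.linspace(-2.5, 2.5, 11)
print(np.round(linear_hat(r), 3))
print(np.round(cubic_bspline(r), 3))
```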

  15. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Science.gov (United States)

    Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui

    2009-01-01

    The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable. PMID:22399959

  16. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies

    Directory of Open Access Journals (Sweden)

    Xiaohuan Yang

    2009-02-01

The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
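
The PSM itself is not specified in this abstract; the sketch below only illustrates the generic principle behind such gridded-population models: a census population total for a region is redistributed over its grid cells in proportion to per-cell weights (which in the SPUS would be derived from the MODIS land-use/land-cover patterns and socio-economic variables). All numbers here are illustrative.

```python
# Sketch: redistribute a regional census population onto grid cells in
# proportion to per-cell weights (e.g. derived from land-use classes).
import numpy as np

rng = np.random.default_rng(5)
census_population = 120_000                 # total population of one region (assumed)
weights = rng.random((10, 10))              # stand-in for LULC-derived cell weights
weights[rng.random((10, 10)) < 0.3] = 0.0   # e.g. water/unpopulated cells get weight 0

grid_population = census_population * weights / weights.sum()
print(grid_population.sum())                # matches the census total by construction
```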

  17. Two-point orientation discrimination versus the traditional two-point test for tactile spatial acuity assessment

    Directory of Open Access Journals (Sweden)

    Jonathan eTong

    2013-09-01

Two-point discrimination is widely used to measure tactile spatial acuity. The validity of the two-point threshold as a spatial acuity measure rests on the assumption that two points can be distinguished from one only when the two points are sufficiently separated to evoke spatially distinguishable foci of neural activity. However, some previous research has challenged this view, suggesting instead that two-point task performance benefits from an unintended non-spatial cue, allowing spuriously good performance at small tip separations. We compared the traditional two-point task to an equally convenient alternative task in which participants attempt to discern the orientation (vertical or horizontal) of two points of contact. We used precision digital readout calipers to administer two-interval forced-choice versions of both tasks to 24 neurologically healthy adults, on the fingertip, finger base, palm, and forearm. We used Bayesian adaptive testing to estimate the participants' psychometric functions on the two tasks. Traditional two-point performance remained significantly above chance levels even at zero point separation. In contrast, two-point orientation discrimination approached chance as point separation approached zero, as expected for a valid measure of tactile spatial acuity. Traditional two-point performance was so inflated at small point separations that 75%-correct thresholds could be determined on all tested sites for fewer than half of participants. The 95%-correct thresholds on the two tasks were similar, and correlated with receptive field spacing. In keeping with previous critiques, we conclude that the traditional two-point task provides an unintended non-spatial cue, resulting in spuriously good performance at small spatial separations. Unlike two-point discrimination, two-point orientation discrimination rigorously measures tactile spatial acuity. We recommend the use of two-point orientation discrimination for neurological...

  18. Knowledge Based Engineering for Spatial Database Management and Use

    Science.gov (United States)

    Peuquet, D. (Principal Investigator)

    1984-01-01

The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) is examined. Questions involving the performance and modification of the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  19. A hierarchical spatial framework and database for the national river fish habitat condition assessment

    Science.gov (United States)

    Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.

    2011-01-01

Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000-scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units and a series of ecological and political spatial descriptors as hierarchy structures to allow users to extract or analyze information at spatial scales that they define. The database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of catchment-level natural and human-induced factors that are known to influence river characteristics. Our framework and database assemble all river reaches and their descriptors in one place for the first time for the conterminous United States. This framework and database provide users with the capability of adding data, conducting analyses, developing management scenarios and regulations, and tracking management progress at a variety of spatial scales. This database provides the essential data needed for achieving the objectives of NFHAP and other management programs. The downloadable beta version database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.

  20. Development of spatial scaling technique of forest health sample point information

    Science.gov (United States)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

Most forest health assessments are limited to monitoring sample sites. The monitoring of forest health in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with a database constructed using the Oracle database program with density. The Forest Health Assessment in GreatBay in the United States was conducted to identify the characteristics of the ecosystem populations of each area based on the evaluation of forest health by tree species, diameter at breast height, water pipe and density in summer and fall of 200. In the case of Korea, in the first evaluation report on forest health vitality, 1000 sample points were placed in the forests using a systematic method of arranging sample points at regular 4 km × 4 km intervals, with 29 items surveyed in four categories: tree health, vegetation, soil, and atmosphere. As mentioned above, existing research has been done through the monitoring of survey sample points, and it is difficult to collect information to support customized policies for the regional survey sites. In the case of special forests such as urban forests and major forests, policy and management appropriate to the forest characteristics are needed. Therefore, it is necessary to expand the survey sites for the diagnosis and evaluation of customized forest health. For this reason, we constructed a method of spatial scaling through spatial interpolation according to the characteristics of each of the 29 indices in the four categories of the first forest health vitality diagnosis and evaluation report; PCA and correlation analyses are conducted to construct the indicators with significance, weights are then selected for each index, and the evaluation of forest health is carried out through statistical grading.

  1. Large-scale spatial population databases in infectious disease research

    Directory of Open Access Journals (Sweden)

    Linard Catherine

    2012-03-01

Modelling studies on the spatial distribution and spread of infectious diseases are becoming increasingly detailed and sophisticated, with global risk mapping and epidemic modelling studies now popular. Yet, in deriving estimates of populations at risk of disease, these spatial models must rely on existing global and regional datasets on population distribution, which are often based on outdated and coarse-resolution data. Moreover, a variety of different methods have been used to model population distribution at large spatial scales. In this review we describe the main global gridded population datasets that are freely available for health researchers and compare their construction methods, and highlight the uncertainties inherent in these population datasets. We review their application in past studies on disease risk and dynamics, and discuss how the choice of dataset can affect results. Moreover, we highlight how the lack of contemporary, detailed and reliable data on human population distribution in low-income countries is proving a barrier to obtaining accurate large-scale estimates of population at risk and constructing reliable models of disease spread, and suggest research directions required to further reduce these barriers.

  2. Multiple k Nearest Neighbor Query Processing in Spatial Network Databases

    DEFF Research Database (Denmark)

    Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas

    2006-01-01

    This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries...... for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case...... neighbor query processing....
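
The paper's caching and reuse techniques are not reproduced here; the sketch below only shows the basic building block of such a setting, a single k-nearest-neighbour query over network distance, computed with Dijkstra's algorithm on a small road graph using networkx. The graph, edge weights and points of interest are made up.

```python
# Sketch: k nearest points of interest by road-network distance, using Dijkstra.
import networkx as nx

# Toy road network: nodes are junctions, edge weights are segment lengths (km).
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 1.5),
    ("a", "e", 2.5), ("e", "d", 1.0), ("b", "e", 2.0),
])
poi_nodes = {"c", "d", "e"}      # junctions where points of interest are located
query_node, k = "a", 2

# Network distances from the query node to every reachable node.
dist = nx.single_source_dijkstra_path_length(G, query_node, weight="weight")
nearest = sorted((dist[n], n) for n in poi_nodes if n in dist)[:k]
print(nearest)                   # [(2.5, 'e'), (3.0, 'c')]
```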

  3. A spatial classification and database for management, research, and policy making: The Great Lakes aquatic habitat framework

    Science.gov (United States)

    Wang, Lizhu; Riseng, Catherine M.; Mason, Lacey; Werhrly, Kevin; Rutherford, Edward; McKenna, James E.; Castiglione, Chris; Johnson, Lucinda B.; Infante, Dana M.; Sowa, Scott P.; Robertson, Mike; Schaeffer, Jeff; Khoury, Mary; Gaiot, John; Hollenhurst, Tom; Brooks, Colin N.; Coscarelli, Mark

    2015-01-01

    Managing the world's largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a spatial classification framework and database — Great Lakes Aquatic Habitat Framework (GLAHF). GLAHF consists of catchments, coastal terrestrial, coastal margin, nearshore, and offshore zones that encompass the entire Great Lakes Basin. The catchments captured in the database as river pour points or coastline segments are attributed with data known to influence physicochemical and biological characteristics of the lakes from the catchments. The coastal terrestrial zone consists of 30-m grid cells attributed with data from the terrestrial region that has direct connection with the lakes. The coastal margin and nearshore zones consist of 30-m grid cells attributed with data describing the coastline conditions, coastal human disturbances, and moderately to highly variable physicochemical and biological characteristics. The offshore zone consists of 1.8-km grid cells attributed with data that are spatially less variable compared with the other aquatic zones. These spatial classification zones and their associated data are nested within lake sub-basins and political boundaries and allow the synthesis of information from grid cells to classification zones, within and among political boundaries, lake sub-basins, Great Lakes, or within the entire Great Lakes Basin. This spatially structured database could help the development of basin-wide management plans, prioritize locations for funding and specific management actions, track protection and restoration progress, and conduct research for science-based decision making.

  4. Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python

    Science.gov (United States)

    Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor

    2017-04-01

As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for adoption of in-database analytic technology within the geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends and spotting anomalies. Although there are a number of open-source spatial analysis libraries like geopandas and shapely available today, most of them have been restricted to manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform called Bluemix. Working in-database reduces the network overhead, as the complete data need not be replicated into the user's local system and only a subset of the entire dataset need be fetched into memory in a single instance. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4. The basic architecture of the package consists of three main components: 1) a connection to dashDB represented by the instance IdaDataBase, which uses a middleware API, namely pypyodbc or jaydebeapi, to establish the database connection via...
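
Based only on the components named in the abstract (an IdaDataBase connection over pypyodbc/jaydebeapi, with spatial processing pushed into dashDB), a usage sketch might look as follows. The DSN, credentials, table name and the spatial SQL are assumptions made for illustration; they are not documented ibmdbpy-spatial API calls.

```python
# Hypothetical usage sketch for in-database geospatial analysis with ibmdbpy.
# The connection settings, CUSTOMERS table and spatial SQL below are illustrative only.
from ibmdbpy import IdaDataBase, IdaDataFrame

idadb = IdaDataBase(dsn="DASHDB", uid="user", pwd="password")   # ODBC/JDBC connection
customers = IdaDataFrame(idadb, "CUSTOMERS")                    # table stays in dashDB
print(customers.shape)                                          # metadata only, no bulk transfer

# Spatial predicates are evaluated by the dashDB spatial extender inside the database;
# only the (small) result set is pulled back to the client.
result = idadb.ida_query(
    "SELECT ID, NAME FROM CUSTOMERS "
    "WHERE DB2GSE.ST_Distance(GEO, DB2GSE.ST_Point(8.4, 49.0, 1003), 'KILOMETER') < 10"
)
print(result.head())

idadb.close()
```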

  5. Hidden Second-order Stationary Spatial Point Processes

    DEFF Research Database (Denmark)

    Hahn, Ute; Jensen, Eva B. Vedel

    2016-01-01

In the existing statistical literature, the almost default choice for inference on inhomogeneous point processes is the most well-known model class for inhomogeneous point processes: reweighted second-order stationary processes. In particular, the K-function related to this type of inhomogeneity ... Using the new theoretical framework, we reanalyse three inhomogeneous point patterns that have earlier been analysed in the statistical literature and show that the conclusions concerning an appropriate model class must be revised for some of the point patterns.

  6. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The

  7. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and

  8. Geometric anisotropic spatial point pattern analysis and Cox processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Toftaker, Håkon

    . In particular we study Cox process models with an elliptical pair correlation function, including shot noise Cox processes and log Gaussian Cox processes, and we develop estimation procedures using summary statistics and Bayesian methods. Our methodology is illustrated on real and synthetic datasets of spatial...

  9. Pan European Phenological database (PEP725): a single point of access for European data

    Science.gov (United States)

    Templ, Barbara; Koch, Elisabeth; Bolmgren, Kjell; Ungersböck, Markus; Paul, Anita; Scheifinger, Helfried; Rutishauser, This; Busto, Montserrat; Chmielewski, Frank-M.; Hájková, Lenka; Hodzić, Sabina; Kaspar, Frank; Pietragalla, Barbara; Romero-Fresneda, Ramiro; Tolvanen, Anne; Vučetič, Višnja; Zimmermann, Kirsten; Zust, Ana

    2018-02-01

    The Pan European Phenology (PEP) project is a European infrastructure to promote and facilitate phenological research, education, and environmental monitoring. The main objective is to maintain and develop a Pan European Phenological database (PEP725) with an open, unrestricted data access for science and education. PEP725 is the successor of the database developed through the COST action 725 "Establishing a European phenological data platform for climatological applications" working as a single access point for European-wide plant phenological data. So far, 32 European meteorological services and project partners from across Europe have joined and supplied data collected by volunteers from 1868 to the present for the PEP725 database. Most of the partners actively provide data on a regular basis. The database presently holds almost 12 million records, about 46 growing stages and 265 plant species (including cultivars), and can be accessed via http://www.pep725.eu/. Users of the PEP725 database have studied a diversity of topics ranging from climate change impact, plant physiological questions, phenological modeling, and remote sensing of vegetation to ecosystem productivity.

  10. Pan European Phenological database (PEP725): a single point of access for European data.

    Science.gov (United States)

    Templ, Barbara; Koch, Elisabeth; Bolmgren, Kjell; Ungersböck, Markus; Paul, Anita; Scheifinger, Helfried; Rutishauser, This; Busto, Montserrat; Chmielewski, Frank-M; Hájková, Lenka; Hodzić, Sabina; Kaspar, Frank; Pietragalla, Barbara; Romero-Fresneda, Ramiro; Tolvanen, Anne; Vučetič, Višnja; Zimmermann, Kirsten; Zust, Ana

    2018-02-18

    The Pan European Phenology (PEP) project is a European infrastructure to promote and facilitate phenological research, education, and environmental monitoring. The main objective is to maintain and develop a Pan European Phenological database (PEP725) with an open, unrestricted data access for science and education. PEP725 is the successor of the database developed through the COST action 725 "Establishing a European phenological data platform for climatological applications" working as a single access point for European-wide plant phenological data. So far, 32 European meteorological services and project partners from across Europe have joined and supplied data collected by volunteers from 1868 to the present for the PEP725 database. Most of the partners actively provide data on a regular basis. The database presently holds almost 12 million records, about 46 growing stages and 265 plant species (including cultivars), and can be accessed via http://www.pep725.eu/. Users of the PEP725 database have studied a diversity of topics ranging from climate change impact, plant physiological questions, phenological modeling, and remote sensing of vegetation to ecosystem productivity.

  11. Assessing land use/cover changes: a nationwide multidate spatial database for Mexico

    Science.gov (United States)

    Mas, Jean-François; Velázquez, Alejandro; Díaz-Gallegos, José Reyes; Mayorga-Saucedo, Rafael; Alcántara, Camilo; Bocco, Gerardo; Castro, Rutilio; Fernández, Tania; Pérez-Vega, Azucena

    2004-10-01

    A nationwide multidate GIS database was generated in order to carry out the quantification and spatial characterization of land use/cover changes (LUCC) in Mexico. Existing cartography on land use/cover at a 1:250,000 scale was revised to select compatible inputs regarding the scale, the classification scheme and the mapping method. Digital maps from three different dates (the late 1970s, 1993 and 2000) were revised, evaluated, corrected and integrated into a GIS database. In order to improve the reliability of the database, an attempt was made to assess the accuracy of the digitalisation procedure and to detect and correct unlikely changes due to thematic errors in the maps. Digital maps were overlaid in order to generate LUCC maps, transition matrices and to calculate rates of conversion. Based upon this database, rates of deforestation between 1976 and 2000 were evaluated as 0.25 and 0.76% per year for temperate and tropical forests, respectively.
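
    The overlay step that yields transition matrices can be sketched as a simple cross-tabulation of two co-registered land-cover maps; the snippet below is illustrative only (class codes and cell values are made up, not taken from the Mexican database):

        # Cross-tabulate two categorical rasters: rows = class at date 1,
        # columns = class at date 2 (toy data, not the 1:250,000 maps).
        import numpy as np

        def transition_matrix(map_t1, map_t2, classes):
            idx = {c: i for i, c in enumerate(classes)}
            m = np.zeros((len(classes), len(classes)), dtype=int)
            for a, b in zip(map_t1.ravel(), map_t2.ravel()):
                m[idx[a], idx[b]] += 1
            return m

        t1976 = np.array([[1, 1, 2], [2, 3, 3]])   # 1 temperate forest, 2 tropical forest, 3 agriculture
        t2000 = np.array([[1, 3, 2], [3, 3, 3]])
        print(transition_matrix(t1976, t2000, classes=[1, 2, 3]))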

  12. Database guided detection of anatomical landmark points in 3D images of the heart

    Science.gov (United States)

    Karavides, Thomas; Esther Leung, K. Y.; Paclik, Pavel; Hendriks, Emile A.; Bosch, Johan G.

    2010-03-01

    Automated landmark detection may prove invaluable in the analysis of real-time three-dimensional (3D) echocardiograms. By detecting 3D anatomical landmark points, the standard anatomical views can be extracted automatically in apically acquired 3D ultrasound images of the left ventricle, for better standardization of visualization and objective diagnosis. Furthermore, the landmarks can serve as an initialization for other analysis methods, such as segmentation. The described algorithm applies landmark detection in perpendicular planes of the 3D dataset. The landmark detection exploits a large database of expert annotated images, using an extensive set of Haar features for fast classification. The detection is performed using two cascades of Adaboost classifiers in a coarse to fine scheme. The method is evaluated by measuring the distance between detected and manually indicated landmark points in 25 patients. The method can detect landmarks accurately in the four-chamber (apex: 7.9+/-7.1mm, septal mitral valve point: 5.6+/-2.7mm, lateral mitral valve point: 4.0+/-2.6mm) and two-chamber view (apex: 7.1+/-6.7mm, anterior mitral valve point: 5.8+/-3.5mm, inferior mitral valve point: 4.5+/-3.1mm). The results compare well to those reported by others.

  13. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to preestablished sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  14. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  15. SPROUTS: a database for the evaluation of protein stability upon point mutation.

    Science.gov (United States)

    Lonquety, Mathieu; Lacroix, Zoé; Papandreou, Nikolaos; Chomilier, Jacques

    2009-01-01

    SPROUTS (Structural Prediction for pRotein fOlding UTility System) is a new database that provides access to various structural data sets and integrated functionalities not yet available to the community. The originality of the SPROUTS database is the ability to gain access to a variety of structural analyses at one place and with a strong interaction between them. SPROUTS currently combines data pertaining to 429 structures that capture representative folds and results related to the prediction of critical residues expected to belong to the folding nucleus: the MIR (Most Interacting Residues), the description of the structures in terms of modular fragments: the TEF (Tightened End Fragments), and the calculation at each position of the free energy change gradient upon mutation by one of the 19 amino acids. All database results can be displayed and downloaded in textual files and Excel spreadsheets and visualized on the protein structure. SPROUTS is a unique resource to access as well as visualize state-of-the-art characteristics of protein folding and analyse the effect of point mutations on protein structure. It is available at http://bioinformatics.eas.asu.edu/sprouts.html.

  16. In-Database Raster Analytics: Map Algebra and Parallel Processing in Oracle Spatial Georaster

    Science.gov (United States)

    Xie, Q. J.; Zhang, Z. Z.; Ravada, S.

    2012-07-01

    Over the past decade several products have been using enterprise database technology to store and manage geospatial imagery and raster data inside RDBMS, which in turn provides the best manageability and security. With the data volume growing exponentially, real-time or near real-time processing and analysis of such big data becomes more challenging. Oracle Spatial GeoRaster, different from most other products, takes the enterprise database-centric approach for both data management and data processing. This paper describes one of the central components of this database-centric approach: the processing engine built completely inside the database. Part of this processing engine is raster algebra, which we call the In-database Raster Analytics. This paper discusses the three key characteristics of this in-database analytics engine and the benefits. First, it moves the data processing closer to the data instead of moving the data to the processing, which helps achieve greater performance by overcoming the bottleneck of computer networks. Second, we designed and implemented a new raster algebra expression language. This language is based on PL/SQL and is currently focused on the "local" function type of map algebra. This language includes general arithmetic, logical and relational operators and any combination of them, which dramatically improves the analytical capability of the GeoRaster database. The third feature is the implementation of parallel processing of such operations to further improve performance. This paper also presents some sample use cases. The testing results demonstrate that this in-database approach for raster analytics can effectively help solve the biggest performance challenges we are facing today with big raster and image data.
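
    The "local" function type of map algebra mentioned above combines rasters cell by cell, which is what makes tile-level parallelization straightforward. A small NumPy sketch of the concept (plain Python for illustration, not the GeoRaster PL/SQL expression language itself):

        # Cell-wise ("local") map algebra: arithmetic, relational and conditional
        # operators applied to corresponding cells only.
        import numpy as np

        red = np.array([[0.2, 0.6], [0.8, 0.1]])
        nir = np.array([[0.5, 0.4], [0.9, 0.3]])

        ndvi = (nir - red) / (nir + red)        # arithmetic expression
        mask = np.where(ndvi > 0.2, 1, 0)       # relational + conditional expression
        print(ndvi)
        print(mask)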

  17. Temporal - spatial dynamics of vegetation variation on non - point source nutrient pollution

    NARCIS (Netherlands)

    Ouyang, Wei; Xuelei Wang,; Hao, Fanghua; Srinivasan, R.

    2009-01-01

    The temporal-spatial interaction of land cover and non-point source (NPS) nutrient pollution was analyzed with the Soil and Water Assessment Tool (SWAT) to simulate the temporal-spatial features of NPS nutrient loading in the upper stream of the Yellow River catchment. The corresponding land cover

  18. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  19. A framework for evaluation of deformable image registration spatial accuracy using large landmark point sets

    International Nuclear Information System (INIS)

    Castillo, Richard; Castillo, Edward; Guerra, Rudy; Johnson, Valen E; McPhail, Travis; Garg, Amit K; Guerrero, Thomas

    2009-01-01

    Expert landmark correspondences are widely reported for evaluating deformable image registration (DIR) spatial accuracy. In this report, we present a framework for objective evaluation of DIR spatial accuracy using large sets of expert-determined landmark point pairs. Large samples (>1100) of pulmonary landmark point pairs were manually generated for five cases. Estimates of inter- and intra-observer variation were determined from repeated registration. Comparative evaluation of DIR spatial accuracy was performed for two algorithms, a gradient-based optical flow algorithm and a landmark-based moving least-squares algorithm. The uncertainty of spatial error estimates was found to be inversely proportional to the square root of the number of landmark point pairs and directly proportional to the standard deviation of the spatial errors. Using the statistical properties of this data, we performed sample size calculations to estimate the average spatial accuracy of each algorithm with 95% confidence intervals within a 0.5 mm range. For the optical flow and moving least-squares algorithms, the required sample sizes were 1050 and 36, respectively. Comparative evaluation based on fewer than the required validation landmarks results in misrepresentation of the relative spatial accuracy. This study demonstrates that landmark pairs can be used to assess DIR spatial accuracy within a narrow uncertainty range.
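
    The sample-size argument above follows from the usual relation that the 95% confidence half-width of a mean error scales as 1.96·σ/√n. A short worked sketch follows; the σ values are assumptions chosen only to illustrate the relationship, not figures reported in the study:

        # Landmark pairs needed to keep the 95% CI of the mean spatial error within
        # a 0.5 mm range (half-width 0.25 mm); sigma values below are hypothetical.
        import math

        def landmarks_needed(sigma_mm, half_width_mm=0.25, z=1.96):
            return math.ceil((z * sigma_mm / half_width_mm) ** 2)

        for sigma in (0.8, 2.0, 4.0):
            print(f"sigma = {sigma} mm -> n >= {landmarks_needed(sigma)}")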

  20. Performance of Point and Range Queries for In-memory Databases using Radix Trees on GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Maksudul [ORNL; Yoginath, Srikanth B [ORNL; Perumalla, Kalyan S [ORNL

    2016-01-01

    In in-memory database systems augmented by hardware accelerators, accelerating the index searching operations can greatly increase the runtime performance of database queries. Recently, adaptive radix trees (ART) have been shown to provide very fast index search implementation on the CPU. Here, we focus on an accelerator-based implementation of ART. We present a detailed performance study of our GPU-based adaptive radix tree (GRT) implementation over a variety of key distributions, synthetic benchmarks, and actual keys from music and book data sets. The performance is also compared with other index-searching schemes on the GPU. GRT on modern GPUs achieves some of the highest rates of index searches reported in the literature. For point queries, a throughput of up to 106 million and 130 million lookups per second is achieved for sparse and dense keys, respectively. For range queries, GRT yields 600 million and 1000 million lookups per second for sparse and dense keys, respectively, on a large dataset of 64 million 32-bit keys.
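
    The index structure behind these measurements can be pictured with a toy CPU-side radix trie over 32-bit keys; this is a deliberate simplification (dict-based nodes rather than the adaptive node layouts of ART/GRT, and no GPU code):

        # Toy radix trie (4 bits per level) for 32-bit keys: insert and point lookup.
        RADIX_BITS = 4
        LEVELS = 32 // RADIX_BITS

        def insert(root, key, value):
            node = root
            for level in range(LEVELS):
                digit = (key >> (32 - RADIX_BITS * (level + 1))) & 0xF
                node = node.setdefault(digit, {})
            node["value"] = value                 # leaf payload

        def lookup(root, key):
            node = root
            for level in range(LEVELS):
                digit = (key >> (32 - RADIX_BITS * (level + 1))) & 0xF
                node = node.get(digit)
                if node is None:
                    return None
            return node.get("value")

        index = {}
        insert(index, 0xDEADBEEF, "row 42")
        print(lookup(index, 0xDEADBEEF), lookup(index, 0x12345678))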

  1. An Integrated Photogrammetric and Spatial Database Management System for Producing Fully Structured Data Using Aerial and Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Farshid Farnood Ahmadi

    2009-03-01

    Full Text Available 3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economic data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning storage, structuring and appropriate management of spatial data obtained using these techniques. According to the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attributes data in a coupled approach. This management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also by the means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible at the time of feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally design, implementation and test of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  2. Modelling a critical infrastructure-driven spatial database for proactive disaster management: A developing country context

    Directory of Open Access Journals (Sweden)

    David O. Baloye

    2016-04-01

    Full Text Available The understanding and institutionalisation of the seamless link between urban critical infrastructure and disaster management has greatly helped the developed world to establish effective disaster management processes. However, this link is conspicuously missing in developing countries, where disaster management has been more reactive than proactive. The consequence of this is typified in poor response time and uncoordinated ways in which disasters and emergency situations are handled. As is the case with many Nigerian cities, the challenges of urban development in the city of Abeokuta have limited the effectiveness of disaster and emergency first responders and managers. Using geospatial techniques, the study attempted to design and deploy a spatial database running a web-based information system to track the characteristics and distribution of critical infrastructure for effective use during disaster and emergencies, with the purpose of proactively improving disaster and emergency management processes in Abeokuta. Keywords: Disaster Management; Emergency; Critical Infrastructure; Geospatial Database; Developing Countries; Nigeria

  3. Second-order analysis of inhomogeneous spatial point processes with proportional intensity functions

    DEFF Research Database (Denmark)

    Guan, Yongtao; Waagepetersen, Rasmus; Beale, Colin M.

    2008-01-01

    of the intensity functions. The first approach is based on nonparametric kernel-smoothing, whereas the second approach uses a conditional likelihood estimation approach to fit a parametric model for the pair correlation function. A great advantage of the proposed methods is that they do not require the often...... to two spatial point patterns regarding the spatial distributions of birds in the U.K.'s Peak District in 1990 and 2004....

  4. A Studentized Permutation Test for the Comparison of Spatial Point Patterns

    DEFF Research Database (Denmark)

    Hahn, Ute

    A new test is proposed for the hypothesis that two (or more) observed point patterns are realizations of the same spatial point process model. To this end, the point patterns are divided into disjoint quadrats, on each of which an estimate of Ripley's K-function is calculated. The two groups...... of empirical K-functions are compared by a permutation test using a studentized test statistic. The proposed test performs convincingly in terms of empirical level and power in a simulation study, even for point patterns where the K-function estimates on neighboring subsamples are not strictly exchangeable...

  5. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogenous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  6. Application of the Multitype Strauss Point Model for Characterizing the Spatial Distribution of Landslides

    Directory of Open Access Journals (Sweden)

    Iswar Das

    2016-01-01

    Full Text Available Landslides are common but complex natural hazards. They occur on the Earth’s surface following a mass movement process. This study applies the multitype Strauss point process model to analyze the spatial distributions of small and large landslides along with geoenvironmental covariates. It addresses landslides as a set of irregularly distributed point-type locations within a spatial region. Their intensity and spatial interactions are analyzed by means of the distance correlation functions, model fitting, and simulation. We use as a dataset the landslide occurrences for 28 years from a landslide prone road corridor in the Indian Himalayas. The landslides are investigated for their spatial character, that is, whether they show inhibition or occur as a regular or a clustered point pattern, and for their interaction with landslides in the neighbourhood. Results show that the covariates lithology, land cover, road buffer, drainage density, and terrain units significantly improved model fitting. A comparison of the output made with logistic regression model output showed a superior prediction performance for the multitype Strauss model. We compared results of this model with the multitype/hard core Strauss point process model that further improved the modeling. Results from the study can be used to generate landslide susceptibility scenarios. The paper concludes that a multitype Strauss point process model enriches the set of statistical tools that can comprehensively analyze landslide data.

  7. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    Science.gov (United States)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows Server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone search radius, which was computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4; cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent in the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade
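
    The query pattern being benchmarked can be sketched as follows: the cone is converted to a set of candidate cells, and those cell IDs drive an indexed lookup, with the exact angular-distance filter applied afterwards. The sketch assumes the healpy package and a hypothetical table sources(healpix_id, ra, dec); it is not the code used in the study:

        # Build a cone-search query against a HEALPix-indexed catalogue table.
        import math
        import healpy as hp

        def cone_search_sql(ra_deg, dec_deg, radius_deg, nside=256, nest=True):
            theta = math.radians(90.0 - dec_deg)          # colatitude
            phi = math.radians(ra_deg)
            cells = hp.query_disc(nside, hp.ang2vec(theta, phi),
                                  math.radians(radius_deg),
                                  inclusive=True, nest=nest)
            cell_list = ",".join(str(c) for c in cells)
            # The B-tree on healpix_id prunes the scan; an exact distance test follows.
            return f"SELECT ra, dec FROM sources WHERE healpix_id IN ({cell_list})"

        print(cone_search_sql(ra_deg=150.1, dec_deg=2.2, radius_deg=0.05))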

  8. The National Neurosurgery Quality and Outcomes Database and NeuroPoint Alliance: rationale, development, and implementation.

    Science.gov (United States)

    Asher, Anthony L; McCormick, Paul C; Selden, Nathan R; Ghogawala, Zoher; McGirt, Matthew J

    2013-01-01

    Patient care data will soon inform all areas of health care decision making and will define clinical performance. Organized neurosurgery believes that prospective, systematic tracking of practice patterns and patient outcomes will allow neurosurgeons to improve the quality and efficiency and, ultimately, the value of care. In support of this mission, the American Association of Neurological Surgeons, in cooperation with a broad coalition of other neurosurgical societies including the Congress of Neurological Surgeons, Society of Neurological Surgeons, and American Board of Neurological Surgery, created the NeuroPoint Alliance (NPA), a not-for-profit corporation, in 2008. The NPA coordinates a variety of national projects involving the acquisition, analysis, and reporting of clinical data from neurosurgical practice using online technologies. It was designed to meet the health care quality and related research needs of individual neurosurgeons and neurosurgical practices, national organizations, health care plans, biomedical industry, and government agencies. To meet the growing need for tools to measure and promote high-quality care, NPA collaborated with several national stakeholders to create an unprecedented program: the National Neurosurgery Quality and Outcomes Database (N(2)QOD). This resource will allow any US neurosurgeon, practice group, or hospital system to contribute to and access aggregate quality and outcomes data through a centralized, nationally coordinated clinical registry. This paper describes the practical and scientific justifications for a national neurosurgical registry; the conceptualization, design, development, and implementation of the N(2)QOD; and the likely role of prospective, cooperative clinical data collection systems in evolving systems of neurosurgical training, continuing education, research, public reporting, and maintenance of certification.

  9. Current data warehousing and OLAP technologies’ status applied to spatial databases

    Directory of Open Access Journals (Sweden)

    Diego Orlando Abril Fradel

    2007-01-01

    Full Text Available Organisations require their information on a timely, dynamic, friendly, centralised and easy-to-access basis for analysing it and taking correct decisions at the right time. Centralisation can be achieved with data warehouse technology. On-line analytical processing (OLAP) is used for analysis. Technologies using graphics and maps in data presentation can be exploited for an overall view of a company and help in taking better decisions. Geographic information systems (GIS) are useful for spatially locating information and representing it using maps. Data warehouses are generally implemented with a multidimensional data model to make OLAP analysis easier. A fundamental point in this model is the definition of measurements and dimensions; geography lies within such dimensions. Many researchers have concluded that the geographic dimension is another attribute for describing data in current analysis systems but without having an in-depth study of its spatial features and without locating them on a map, like GIS does. Seen this way, interoperability is necessary between GIS and OLAP (called spatial OLAP or SOLAP) and several entities are currently researching this. This document summarises the current status of such research.

  10. Spatial Multiplication Model as an alternative to the Point Model in Neutron Multiplicity Counting

    Energy Technology Data Exchange (ETDEWEB)

    Hauck, Danielle K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-26

    The point model is commonly used in neutron multiplicity counting to relate the correlated neutron detection rates (singles, doubles, triples) to item properties (mass, (α,n) reaction rate and neutron multiplication). The point model assumes that the probability that a neutron will induce fission is a constant across the physical extent of the item. However, in reality, neutrons near the center of an item have a greater probability of inducing fission than neutrons near the edges. As a result, the neutron multiplication has a spatial distribution.

  11. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...

  12. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    Science.gov (United States)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high precision and large-scale coordinate measurement, one commonly used approach to determine the coordinate of a target point is utilizing the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report one design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in vertical and 360 deg in horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized by LightTools Software, which enables the reflection of a wide-angle incident light beam into the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed utilizing the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensor, indoor global positioning system, and optical wireless communication systems.

  13. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    Science.gov (United States)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge on spatial distribution of landslides, processes and characteristics are important to evaluate the potential risk that can occur from mass movements in those areas. In the frame of two projects about 400 landslides were mapped and detailed data sets were compiled during the years 2011 to 2014 at the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (The Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL, which provides the possibility to store spatial and geographic objects and to connect to several GIS applications, like GRASS GIS, SAGA GIS, QGIS and GDAL, a geospatial library (Obe et al. 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages. It is possible to work directly with the entire (spatial) data content of the database in R. The inventory of the database includes (amongst others
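
    A minimal sketch of the PostgreSQL/PostGIS pattern described above, using psycopg2; the connection string, table name, columns and coordinate reference system are hypothetical stand-ins, not the project's actual schema:

        # Store landslide outlines in PostGIS and run a simple spatial query.
        import psycopg2

        conn = psycopg2.connect("dbname=landslides user=gis")   # assumed credentials
        with conn, conn.cursor() as cur:
            cur.execute("""
                CREATE TABLE IF NOT EXISTS landslide (
                    id        serial PRIMARY KEY,
                    mapped_on date,
                    outline   geometry(Polygon, 32632)   -- assumed CRS (UTM 32N)
                );
            """)
            # Landslides within 500 m of a (made-up) road segment:
            cur.execute("""
                SELECT id, mapped_on FROM landslide
                WHERE ST_DWithin(
                    outline,
                    ST_SetSRID(ST_GeomFromText(
                        'LINESTRING(655000 5520000, 656000 5521000)'), 32632),
                    500);
            """)
            print(cur.fetchall())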

  14. The Information Seeking Interface with Spatial Icons for the Children Digital-learning Database

    Directory of Open Access Journals (Sweden)

    吳可久、林佳蓉、陳泓均、柯皓仁 (Ko-Chiu Wu, Chia-Jung Lin, Hung-Chun Chen, Hao-Ren Ke)

    2014-04-01

    Full Text Available In this age of information technology, children must develop the ability to search digital databases. However, the information-seeking behavior and cognitive abilities associated with language and images differ substantially between children and adults. Therefore there is an urgent need for an information-searching interface customized for children. Drawing on the design of computer games, we created a three-dimensional (3D) human-computer interface (HCI). Children’s experience playing computer games can therefore inform way-finding and information-seeking behavior in this spatially-oriented interface. Three types of HCI were developed: a 2D graphic hyperlink (GH), a 3D extended survey (ES), and a 3D extended route (ER). These were tested for efficiency, effectiveness, and time of operation by one-way analysis of variance. Our results indicated that children behave differently on the various interfaces. The proposed HCI is a helpful tool offering children a knowledge map that enables them to search for the information they need. Our results demonstrate that information visualization theory and concept association are topics worthy of further study in the development of a child-oriented information-seeking interface.

  15. Identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis by using the Delphi Technique

    Science.gov (United States)

    Halim, N. Z. A.; Sulaiman, S. A.; Talib, K.; Ng, E. G.

    2018-02-01

    This paper explains the process carried out in identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis. The research was initially a part of a larger research exercise to identify the significance of NDCDB from the legal, technical, role and land-based analysis perspectives. The research methodology of applying the Delphi technique is substantially discussed in this paper. A heterogeneous panel of 14 experts was created to determine the importance of NDCDB from the technical relevance standpoint. Three statements describing the relevant features of NDCDB for spatial analysis were established after three rounds of consensus building. They highlighted the NDCDB’s characteristics such as its spatial accuracy, functions, and criteria as a facilitating tool for spatial analysis. By recognising the relevant features of NDCDB for spatial analysis in this study, practical application of NDCDB for various analyses and purposes can be widely implemented.

  16. [Spatial heterogeneity and classified control of agricultural non-point source pollution in Huaihe River Basin].

    Science.gov (United States)

    Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua

    2013-02-01

    Agricultural non-point source pollution is an important cause of river deterioration; identifying key source areas and concentrating control efforts on them is therefore the most effective approach to non-point source pollution control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities of chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of non-point pollution sensitivity areas, key pollution sources, and their spatial distribution characteristics through cluster analysis, sensitivity evaluation, and spatial analysis. A geographic information system (GIS) and SPSS were used to carry out this study. The results show that the COD, TN, and TP emissions of agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74 x 10^4 t, 66.49 x 10^4 t, and 8.74 x 10^4 t, respectively; the emission intensities were 7.69, 2.47, and 0.32 t/hm^2; and the proportions of the COD, TN, and TP emissions were 73%, 24%, and 3%. The major pollution sources of COD, TN, and TP were livestock breeding and rural life; the sensitivity areas and priority control areas for non-point source pollution are sub-basins of the upper tributaries of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu, and Qingyi Rivers; and livestock breeding is the key pollution source within these priority control areas. Finally, the paper concludes that rural life has the highest pollution contribution rate, while comprehensive pollution is the type that is hardest to control.
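
    The quoted shares follow directly from the emission totals; a quick check in Python (values in 10^4 t, taken from the abstract):

        cod, tn, tp = 206.74, 66.49, 8.74
        total = cod + tn + tp
        print([round(100 * x / total) for x in (cod, tn, tp)])   # -> [73, 24, 3]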

  17. Analysis on the flood vulnerability in the Seoul and Busan metropolitan area, Korea using spatial database

    Science.gov (United States)

    Lee, Mung-Jin

    2015-04-01

    In the future, temperature rises and precipitation increases are expected from climate change due to global warming. Concentrated heavy rain, typhoons, flooding, and other weather phenomena bring hydrologic variations. In this study, the flood susceptibility of the Seoul and Busan metropolitan areas was analyzed and validated using a GIS based on a frequency ratio model and a logistic regression model with training and validation datasets of the flooded area. The flooded area in 2010 was used to train the models, and the flooded area in 2011 was used to validate them. Topographic, geological, and soil data from the study areas were collected, processed, and digitized for use in a GIS. Maps relevant to the specific capacity were assembled in a spatial database. Then, flood susceptibility maps were created. Finally, the flood susceptibility maps were validated using the flooded area in 2011, which was not used for training. To represent the flood-susceptible areas, this study used the probability-frequency ratio. The frequency ratio is the probability of occurrence of a certain attribute. Logistic regression allows for investigation of multivariate regression relations between one dependent and several independent variables. Logistic regression has a limit in that the calculation process cannot be traced, because it repeats calculations to find the optimized regression equation for determining the possibility that the dependent variable will occur. In the case of Seoul, the frequency ratio and logistic regression models showed 79.61% and 79.05% accuracy, respectively; in the case of Busan, the logistic regression model showed 82.30% accuracy. This information and the maps generated from it could be applied to flood prevention and management. In addition, the susceptibility map provides meaningful information for decision-makers regarding priority areas for implementing flood mitigation policies.
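
    The frequency ratio of a factor class is the share of flooded cells falling in that class divided by the share of all cells in that class; summing the ratios over factors gives the susceptibility index. A small Python sketch with made-up class counts (not the Seoul/Busan data):

        # Frequency ratio per class of one conditioning factor (toy counts).
        def frequency_ratio(flooded_cells, total_cells):
            flooded_sum = sum(flooded_cells.values())
            total_sum = sum(total_cells.values())
            return {cls: (flooded_cells.get(cls, 0) / flooded_sum)
                         / (total_cells[cls] / total_sum)
                    for cls in total_cells}

        slope_total = {"0-5 deg": 40000, "5-15 deg": 35000, "15+ deg": 25000}
        slope_flooded = {"0-5 deg": 900, "5-15 deg": 80, "15+ deg": 20}
        print(frequency_ratio(slope_flooded, slope_total))   # FR > 1 = more flood-prone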

  18. Spatially heterogeneous dynamics investigated via a time-dependent four-point density correlation function

    DEFF Research Database (Denmark)

    Lacevic, N.; Starr, F. W.; Schrøder, Thomas

    2003-01-01

    Relaxation in supercooled liquids above their glass transition and below the onset temperature of "slow" dynamics involves the correlated motion of neighboring particles. This correlated motion results in the appearance of spatially heterogeneous dynamics or "dynamical heterogeneity." Traditional...... two-point time-dependent density correlation functions, while providing information about the transient "caging" of particles on cooling, are unable to provide sufficiently detailed information about correlated motion and dynamical heterogeneity. Here, we study a four-point, time-dependent density......-q behavior of S4(q,t) provides an estimate of the range of correlated particle motion. We find that xi4(t) has a maximum as a function of time t, and that the value of the maximum of xi4(t) increases steadily from less than one particle diameter to a value exceeding nine particle diameters in the temperature...

  19. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying

    2014-07-15

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
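
    For reference, one common parameterization of the Matérn covariance discussed above (conventions vary between authors) is, in LaTeX notation,

        C(h) = \sigma^2 \, \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{h}{\rho}\right)^{\nu} K_{\nu}\!\left(\frac{h}{\rho}\right), \qquad h > 0,

    where \sigma^2 is the variance, \rho > 0 a range parameter, \nu > 0 a smoothness parameter, and K_{\nu} the modified Bessel function of the second kind; \nu = 1/2 recovers the exponential model that the paper uses as a comparison.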

  20. A spatial point pattern analysis in Drosophila blastoderm embryos evaluating the potential inheritance of transcriptional states.

    Directory of Open Access Journals (Sweden)

    Feng He

    Full Text Available The Drosophila blastoderm embryo undergoes rapid cycles of nuclear division. This poses a challenge to genes that need to reliably sense the concentrations of morphogen molecules to form desired expression patterns. Here we investigate whether the transcriptional state of hunchback (hb), a target gene directly activated by the morphogenetic protein Bicoid (Bcd), exhibits properties indicative of inheritance between mitotic cycles. To achieve this, we build a dataset of hb transcriptional states at the resolution of individual nuclei in embryos at early cycle 14. We perform a spatial point pattern (SPP) analysis to evaluate the spatial relationships among the nuclei that have distinct numbers of hb gene copies undergoing active transcription in snapshots of embryos. Our statistical tests and simulation studies reveal properties of dispersed clustering for nuclei with both or neither copies of hb undergoing active transcription. Modeling of nuclear lineages from cycle 11 to cycle 14 suggests that these two types of nuclei can achieve spatial clustering when, and only when, the transcriptional states are allowed to propagate between mitotic cycles. Our results are consistent with the possibility where the positional information encoded by the Bcd morphogen gradient may not need to be decoded de novo at all mitotic cycles in the Drosophila blastoderm embryo.

  1. spBayes: An R Package for Univariate and Multivariate Hierarchical Point-referenced Spatial Models.

    Science.gov (United States)

    Finley, Andrew O; Banerjee, Sudipto; Carlin, Bradley P

    2007-04-01

    Scientists and investigators in such diverse fields as geological and environmental sciences, ecology, forestry, disease mapping, and economics often encounter spatially referenced data collected over a fixed set of locations with coordinates (latitude-longitude, Easting-Northing etc.) in a region of study. Such point-referenced or geostatistical data are often best analyzed with Bayesian hierarchical models. Unfortunately, fitting such models involves computationally intensive Markov chain Monte Carlo (MCMC) methods whose efficiency depends upon the specific problem at hand. This requires extensive coding on the part of the user and the situation is not helped by the lack of available software for such algorithms. Here, we introduce a statistical software package, spBayes, built upon the R statistical computing platform that implements a generalized template encompassing a wide variety of Gaussian spatial process models for univariate as well as multivariate point-referenced data. We discuss the algorithms behind our package and illustrate its use with a synthetic and real data example.

  2. spBayes: An R Package for Univariate and Multivariate Hierarchical Point-referenced Spatial Models

    Directory of Open Access Journals (Sweden)

    Andrew O. Finley

    2007-04-01

    Full Text Available Scientists and investigators in such diverse fields as geological and environmental sciences, ecology, forestry, disease mapping, and economics often encounter spatially referenced data collected over a fixed set of locations with coordinates (latitude–longitude, Easting–Northing etc.) in a region of study. Such point-referenced or geostatistical data are often best analyzed with Bayesian hierarchical models. Unfortunately, fitting such models involves computationally intensive Markov chain Monte Carlo (MCMC) methods whose efficiency depends upon the specific problem at hand. This requires extensive coding on the part of the user and the situation is not helped by the lack of available software for such algorithms. Here, we introduce a statistical software package, spBayes, built upon the R statistical computing platform that implements a generalized template encompassing a wide variety of Gaussian spatial process models for univariate as well as multivariate point-referenced data. We discuss the algorithms behind our package and illustrate its use with a synthetic and real data example.

  3. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems, because a high space-bandwidth product (SBP) must be produced. The paper is based on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source (the Gabor zone plate), which can therefore be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) approach is proposed: the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored. Then the Fresnel diffraction fringe pattern at the dummy plane is obtained, and finally the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal On Silicon (LCOS) demonstrate the validity of the proposed method: while preserving the quality of the 3D reconstruction, the method can shorten the computation time and improve computational efficiency.

  4. Monitoring hillslope moisture dynamics with surface ERT for enhancing spatial significance of hydrometric point measurements

    Science.gov (United States)

    Hübner, R.; Heller, K.; Günther, T.; Kleber, A.

    2015-01-01

    Besides floodplains, hillslopes are basic units that mainly control water movement and flow pathways within catchments of subdued mountain ranges. The structure of their shallow subsurface affects water balance, e.g. infiltration, retention, and runoff. Nevertheless, there is still a gap in the knowledge of the hydrological dynamics on hillslopes, notably due to the lack of generalization and transferability. This study presents a robust multi-method framework of electrical resistivity tomography (ERT) in addition to hydrometric point measurements, transferring hydrometric data to higher spatial scales to obtain additional patterns of distribution and dynamics of soil moisture on a hillslope. Geoelectrical monitoring in a small catchment in the eastern Ore Mountains was carried out at weekly intervals from May to December 2008 to image seasonal moisture dynamics on the hillslope scale. To link water content and electrical resistivity, the parameters of Archie's law were determined using different core samples. To optimize inversion parameters and methods, the derived spatial and temporal water content distribution was compared to tensiometer data. The results from ERT measurements show a strong correlation with the hydrometric data. The response is congruent to the soil tension data. Water content calculated from the ERT profile shows variations similar to those of the water content from soil moisture sensors. Consequently, soil moisture dynamics on the hillslope scale may be determined not only by expensive invasive punctual hydrometric measurements, but also by minimally invasive time-lapse ERT, provided that pedo-/petrophysical relationships are known. Since ERT integrates larger spatial scales, a combination with hydrometric point measurements improves the understanding of the ongoing hydrological processes and better supports the identification of heterogeneities.
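
    The resistivity-to-water-content link via Archie's law can be sketched as below; the parameter values are placeholders chosen for illustration, whereas in the study they were fitted from core samples:

        # Invert Archie's law, rho = a * rho_w * phi**(-m) * Sw**(-n), to get
        # volumetric water content theta = phi * Sw (placeholder parameters).
        def water_content(rho, rho_w=25.0, phi=0.35, a=1.0, m=1.8, n=2.0):
            s_w = (a * rho_w / (rho * phi ** m)) ** (1.0 / n)
            return phi * min(s_w, 1.0)

        for rho in (100.0, 400.0, 1600.0):          # bulk resistivities in ohm*m
            print(rho, "ohm*m ->", round(water_content(rho), 3))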

  5. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint]

    Energy Technology Data Exchange (ETDEWEB)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel; Goldstein, Richard H.

    2015-11-01

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since high-frequency medical sensors generate huge volumes of data, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it shows abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data and enhance time efficiency. The study results showed efficient data loading, decoding, and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
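
    The brute-force variant described above can be written compactly; the sketch below is an O(n^2) illustration on a toy series, not the parallel-DBMS implementation:

        # Top-1 discord: the subsequence whose nearest non-self match is farthest.
        import math

        def top_discord(series, win):
            n = len(series) - win + 1
            best_pos, best_dist = -1, -1.0
            for i in range(n):
                nearest = math.inf
                for j in range(n):
                    if abs(i - j) >= win:            # skip trivial self-matches
                        d = math.dist(series[i:i+win], series[j:j+win])
                        nearest = min(nearest, d)
                if nearest > best_dist:
                    best_pos, best_dist = i, nearest
            return best_pos, best_dist

        signal = [0, 1, 0, 1, 0, 1, 0, 5, 0, 1, 0, 1, 0, 1]
        print(top_discord(signal, win=3))   # flags the window covering the spike at index 7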

  6. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning.

    Science.gov (United States)

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-03-15

    Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood.
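
    The voxel-based neighbourhood and the eigenvalue-based shape descriptors can be sketched as follows (a simplification of the feature vectors compared in the paper; NumPy only, no classifier):

        # Group points into voxels and compute PCA shape features per voxel.
        import numpy as np
        from collections import defaultdict

        def voxel_shape_features(points, voxel_size=0.5):
            voxels = defaultdict(list)
            for p in points:
                voxels[tuple((p // voxel_size).astype(int))].append(p)
            feats = {}
            for key, pts in voxels.items():
                pts = np.asarray(pts)
                if len(pts) < 4:                              # too few points for a stable covariance
                    continue
                evals = np.linalg.eigvalsh(np.cov(pts.T))     # ascending eigenvalues
                l1, l2, l3 = evals[::-1] / evals.sum()        # descending, normalized
                feats[key] = {"linearity": (l1 - l2) / l1,    # tubular
                              "planarity": (l2 - l3) / l1,    # planar
                              "scatter": l3 / l1}             # volumetric
            return feats

        cloud = np.random.default_rng(0).normal(size=(200, 3))
        print(list(voxel_shape_features(cloud, voxel_size=2.0).values())[:1])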

  7. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning

    Directory of Open Access Journals (Sweden)

    Victoria Plaza-Leiva

    2017-03-01

    Full Text Available Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM, Gaussian processes (GP, and Gaussian mixture models (GMM. A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl. Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood.

  8. Spatial distribution of lacunarity of voxelized airborne LiDAR point clouds in various forest assemblages

    Science.gov (United States)

    Székely, Balázs; Kania, Adam; Standovár, Tibor; Heilmeier, Hermann

    2015-04-01

    Forest ecosystems have a characteristic structure defined by various structural elements of different scales and vertical positions: shrub layers, understory vegetation, tree trunks, and branches. Furthermore, in most cases there are superimposed structures in the distributions (mosaic or island patterns) due to topography, soil variability, or even anthropogenic factors such as past or present forest management activity. This multifaceted spatial context of forests is relevant for many ecological issues, especially for maintaining forest biodiversity. Our aim in this study is twofold: (1) to quantify this structural variability laterally and vertically using lacunarity, and (2) to relate these results to relevant ecological features, i.e., quantitatively described forest properties. Airborne LiDAR data of various quality and point density have been used for our study, including a number of forested sites in Central and East Europe (partly Natura 2000 sites). The point clouds have been converted to voxel format and then to horizontal layers stored as images. These images were processed further for the lacunarity calculation. Areas of interest (AOIs) have been selected based on evaluation of the forested areas and auxiliary field information. The calculation has been performed for the AOIs for all available vertical data slices. The lacunarity function referring to a certain point and a given vicinity varies horizontally and vertically, depending on the vegetation structure. Furthermore, the topography may also influence this property, as the growth of plants, especially the spacing and size of trees, is influenced by the local topography and relief (e.g., slope, aspect). The comparisons of flatland and hilly settings show interesting differences, and the spatial patterns also vary. Because of the large amount of data resulting from these calculations, sophisticated methods are required to analyse the results. The large data amount then has been
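
    The lacunarity computation on voxelized layers can be illustrated with a simple gliding-box estimator over one binary occupancy slice; this is a generic textbook formulation, not the processing chain used in the study, and the box size is an arbitrary choice.

```python
import numpy as np

def gliding_box_lacunarity(occupancy, box_size):
    """Gliding-box lacunarity of one 2D binary occupancy layer extracted
    from a voxelized point cloud: second moment of box masses divided by
    the squared first moment. Requires box_size <= both image dimensions."""
    h, w = occupancy.shape
    masses = [
        occupancy[i:i + box_size, j:j + box_size].sum()
        for i in range(h - box_size + 1)
        for j in range(w - box_size + 1)
    ]
    masses = np.asarray(masses, dtype=float)
    mean = masses.mean()
    return np.nan if mean == 0 else (masses ** 2).mean() / mean ** 2
```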

  9. INTERSECTION DETECTION BASED ON QUALITATIVE SPATIAL REASONING ON STOPPING POINT CLUSTERS

    Directory of Open Access Journals (Sweden)

    S. Zourlidou

    2016-06-01

    Full Text Available The purpose of this research is to propose and test a method for detecting intersections by analysing collectively acquired trajectories of moving vehicles. Instead of relying solely on the geometric features of the trajectories, such as heading changes, which may indicate turning points and consequently intersections, we extract semantic features of the trajectories in the form of sequences of stops and moves. Under this spatiotemporal prism, the extracted semantic information, which indicates where vehicles stop, can reveal important locations, such as junctions. The advantage of the proposed approach in comparison with existing turning-point-oriented approaches is that it can detect intersections even when not all the crossing road segments are sampled and therefore no turning points are observed in the trajectories. The challenge with this approach is that, first, not all vehicles stop at the same location, so the stop location is blurred along the direction of the road; second, this means that nearby junctions can induce similar stop locations. As a first step, density-based clustering is applied to the layer of stop observations and clusters of stop events are found. Representative points of the clusters are determined (one per cluster), and in a last step the existence of an intersection is established through spatial relational cluster reasoning, with which less informative geospatial clusters, in terms of whether a junction exists and where its centre lies, are transformed into more informative ones. Relational reasoning criteria, based on the relative orientation of the clusters with respect to their adjacent ones, are discussed for making sense of the relation that connects them, and finally for forming groups of stop events that belong to the same junction.
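
    A minimal sketch of the first step, density-based clustering of stop events with one representative point per cluster, is given below; it assumes projected coordinates in metres and uses scikit-learn's DBSCAN with illustrative parameters rather than the values used in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_stop_events(stop_xy, eps_m=30.0, min_samples=5):
    """Density-based clustering of vehicle stop locations (projected
    coordinates in metres). Returns the cluster label of every stop and
    one representative (centroid) point per cluster; -1 marks noise."""
    labels = DBSCAN(eps=eps_m, min_samples=min_samples).fit_predict(stop_xy)
    representatives = {
        label: stop_xy[labels == label].mean(axis=0)
        for label in set(labels) - {-1}
    }
    return labels, representatives
```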

  10. Rule-based topology system for spatial databases to validate complex geographic datasets

    Science.gov (United States)

    Martinez-Llario, J.; Coll, E.; Núñez-Andrés, M.; Femenia-Ribera, C.

    2017-06-01

    A rule-based topology software system providing a highly flexible and fast procedure to enforce integrity in spatial relationships among datasets is presented. This improved topology rule system is built over the spatial extension Jaspa. Both projects are open source, freely available software developed by the corresponding author of this paper. Currently, there is no spatial DBMS that implements a rule-based topology engine (considering that the topology rules are designed and performed in the spatial backend). If the topology rules are applied in the frontend (as in many GIS desktop programs), ArcGIS is the most advanced solution. The system presented in this paper has several major advantages over the ArcGIS approach: it can be extended with new topology rules, it has a much wider set of rules, and it can mix feature attributes with topology rules as filters. In addition, the topology rule system can work with various DBMSs, including PostgreSQL, H2 or Oracle, and the logic is performed in the spatial backend. The proposed topology system allows users to check the complex spatial relationships among features (from one or several spatial layers) that require some complex cartographic datasets, such as the data specifications proposed by INSPIRE in Europe and the Land Administration Domain Model (LADM) for Cadastral data.
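
    Although the system described here runs its rules inside the spatial backend (Jaspa), a frontend analogue of a plain "polygons must not overlap" rule can be sketched with Shapely predicates; the function and toy data below are illustrative and do not reflect Jaspa's API or rule set.

```python
from shapely.geometry import Polygon

def violations_must_not_overlap(features):
    """Report every pair of polygons whose interiors share area, i.e. a
    plain 'must not overlap' rule. `features` maps an id to a shapely
    Polygon; a real engine would add a spatial index and many more rules."""
    ids = list(features)
    offending = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if features[a].intersection(features[b]).area > 0.0:
                offending.append((a, b))
    return offending

parcels = {
    "p1": Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
    "p2": Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),  # overlaps p1
}
print(violations_must_not_overlap(parcels))  # [('p1', 'p2')]
```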

  11. A spatial database for landslides in northern Bavaria: A methodological approach

    Science.gov (United States)

    Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit

    2018-04-01

    Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open source software solutions (PostgreSQL, PostGIS), ensuring good interoperability of the various software components and enabling further extensions with specific adaptations of self-developed software. In addition, WISL was designed to communicate easily with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations on the local server. In its current state, WISL already enables extensive analysis and queries. This paper presents an example analysis of landslides in Oxfordian Limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. Future enhancements of the WISL database will focus on data extensions that increase research possibilities, as well as on transferring the system to other regions and countries.

  12. Using a spatial and tabular database to generate statistics from terrain and spectral data for soil surveys

    Science.gov (United States)

    Horvath, E.A.; Fosnight, E.A.; Klingebiel, A.A.; Moore, D.G.; Stone, J.E.; Reybold, W.U.; Petersen, G.W.

    1987-01-01

    A methodology has been developed to create a spatial database by referencing digital elevation, Landsat multispectral scanner data, and digitized soil premap delineations of a number of adjacent 7.5-min quadrangle areas to a 30-m Universal Transverse Mercator projection. Slope and aspect transformations are calculated from elevation data and grouped according to field office specifications. An unsupervised classification is performed on a brightness and greenness transformation of the spectral data. The resulting spectral, slope, and aspect maps of each of the 7.5-min quadrangle areas are then plotted and submitted to the field office to be incorporated into the soil premapping stages of a soil survey. A tabular database is created from the spatial data by generating descriptive statistics for each data layer within each soil premap delineation. The tabular database is then entered into a database management system to be accessed by the field office personnel during the soil survey and to be used for subsequent resource management decisions. Large amounts of data are collected and archived during resource inventories for public land management. Often these data are stored as stacks of maps or folders in a file system in someone's office, with the maps in a variety of formats, scales, and with various standards of accuracy depending on their purpose. This system of information storage and retrieval is cumbersome at best when several categories of information are needed simultaneously for analysis or as input to resource management models. Computers now provide the resource scientist with the opportunity to design increasingly complex models that require even more categories of resource-related information, thus compounding the problem. Recently there has been much emphasis on the use of geographic information systems (GIS) as an alternative method for map data archives and as a resource management tool. Considerable effort has been devoted to the generation of tabular
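
    The slope and aspect layers derived from the elevation grid can be approximated with central differences on a 30 m raster, as sketched below; the aspect convention (0 degrees = north, clockwise, row 0 on the northern edge) is an assumption and differs slightly between GIS packages.

```python
import numpy as np

def slope_aspect(dem, cell_size=30.0):
    """Slope (degrees) and aspect (degrees clockwise from north) from a
    gridded DEM via central differences, assuming row 0 is the northern
    edge of the grid."""
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0
    return slope, aspect
```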

  13. Flexure-beam micromirror spatial light modulator devices for acquisition, tracking, and pointing

    Science.gov (United States)

    Rhoadarmer, Troy A.; Gustafson, Steven C.; Little, Gordon R.; Li, Tsen-Hwang

    1994-07-01

    The new flexure-beam micromirror (FBM) spatial light modulator devices developed by Texas Instruments Inc. have characteristics that enable superior acquisition, tracking, and pointing in communications and other applications. FBM devices can have tens of thousands of square micromirror elements, each as small as 20 microns on a side, each spaced relative to neighbors so that optical efficiency exceeds 90 percent, and each individually controlled with response times as small as 10 microseconds for piston-like motions that cover more than one-half optical wavelength. These devices may enable order-of-magnitude improvements in space-bandwidth product, efficiency, and speed relative to other spatial light modulator devices that could be used to generate arbitrary coherent light patterns in real time. However, the amplitude and phase of each mirror element cannot be specified separately because there is only one control voltage for each element. This issue can be addressed by adjusting the control voltages so that constructive and destructive interference in the coherent light reflected from many elements produces the desired far field coherent light pattern. Appropriate control voltages are best determined using a robust software optimization procedure such as simulated annealing. Simulated annealing yields excellent results, but it is not real time (it may require hours of execution time on workstation-class computers). An approach that permits real-time applications stores control voltages determined off-line by simulated annealing that produce key desired far field coherent light beam shapes. These stored results are then used as training data for radial basis function neural networks that interpolate in real time between the training cases.

  14. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    Science.gov (United States)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database we have used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.

  15. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    Science.gov (United States)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODE) as the elementary part of the system. To perform analyses, scenes of study are thus generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but modifying its parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons
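
    A minimal sketch of the scene construction described above, integrating several independent Rössler oscillators and aggregating them by spatially averaging one state variable, is given below; the parameter values, ensemble size and time span are generic choices, not those of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, state, a=0.2, b=0.2, c=5.7):
    """Classic Rössler equations; the parameter values are the usual
    chaotic textbook choice, not necessarily those used in the study."""
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

# Build a toy "scene" of independent oscillators and aggregate it by
# spatially averaging one state variable, as described in the abstract.
rng = np.random.default_rng(0)
t_eval = np.linspace(0.0, 100.0, 2000)
ensemble = [
    solve_ivp(rossler, (0.0, 100.0), rng.uniform(-1.0, 1.0, 3), t_eval=t_eval).y[0]
    for _ in range(10)
]
aggregated_signal = np.mean(ensemble, axis=0)
```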

  16. Spatial access method for urban geospatial database management: An efficient approach of 3D vector data clustering technique

    DEFF Research Database (Denmark)

    Azri, Suhaibah; Ujang, Uznir; Rahman, Alias Abdul

    2014-01-01

    In the last few years, 3D urban data and the information derived from it have increased rapidly due to the growth of urban areas and the urbanization phenomenon. These datasets are then maintained and managed in a 3D spatial database system. However, performance deterioration is likely to happen due to the massiveness of 3D......, 3D R-Tree produces serious overlapping among nodes. The overlapping factor is important for an efficient 3D R-Tree to avoid replicated data entries in different nodes. Thus, an efficient and reliable method is required to reduce overlapping among 3D R-Tree nodes. In this paper, we proposed a 3

  17. Spatial Variability and Robust Interpolation of Seafloor Sediment Properties Using the SEABED Databases

    Science.gov (United States)

    2007-01-01

    [Snippet from report figures and references: Figure 2 shows the location of usSEABED records within the mid-Atlantic Bight, color coded by mean grain size; a best-fit von Kármán model with a noise spike is overlain (dashed), with parameter values as indicated, and the fractal dimension of the model is 1.5. Cited works include Marine Geology 209, 147-172, and Jenkins, C.J., 1997, Building Offshore Soils Databases, Sea Technology 38, 25-28.]

  18. Linkage of the California Pesticide Use Reporting Database with Spatial Land Use Data for Exposure Assessment

    OpenAIRE

    Nuckols, John R.; Gunier, Robert B.; Riggs, Philip; Miller, Ryan; Reynolds, Peggy; Ward, Mary H.

    2007-01-01

    Background The State of California maintains a comprehensive Pesticide Use Reporting Database (CPUR). The California Department of Water Resources (CDWR) maps all crops in agricultural counties in California about once every 5 years. Objective We integrated crop maps with CPUR to more accurately locate where pesticides are applied and evaluated the effects for exposure assessment. Methods We mapped 577 residences and used the CPUR and CDWR data to compute two exposure metrics based on putativ...

  19. Spatial distribution of radionuclides in Lake Michigan biota near the Big Rock Point Nuclear Plant

    International Nuclear Information System (INIS)

    Wahlgren, M.A.; Yaguchi, E.M.; Nelson, D.M.; Marshall, J.S.

    1974-01-01

    A survey was made of four groups of biota in the vicinity of the Big Rock Point Nuclear Plant near Charlevoix, Michigan, to determine their usefulness in locating possible sources of plutonium and other radionuclides to Lake Michigan. This 70 MW boiling-water reactor, located on the Lake Michigan shoreline, was chosen because its fuel contains recycled plutonium, and because it routinely discharges very low-level radioactive wastes into the lake. Samples of crayfish (Orconectes sp.), green algae (Chara sp. and Cladophora sp.), and an aquatic macrophyte (Potamogeton sp.) were collected in August 1973, at varying distances from the discharge and analyzed for 239,240Pu, 90Sr, and five gamma-emitting radionuclides. Comparison samples of reactor waste solution have also been analyzed for these radionuclides. Comparisons of the spatial distributions of the extremely low radionuclide concentrations in biota clearly indicated that 137Cs, 134Cs, 65Zn, and 60Co were released from the reactor; their concentrations decreased exponentially with increasing distance from the discharge. Conversely, concentrations of 239,240Pu, 95Zr, and 90Sr showed no correlation with distance, suggesting any input from Big Rock was insignificant with respect to the atmospheric origin of these isotopes. The significance of these results is discussed, particularly with respect to current public debate over the possibility of local environmental hazards associated with the use of plutonium as a nuclear fuel. (U.S.)

  20. New spatial upscaling methods for multi-point measurements: From normal to p-normal

    Science.gov (United States)

    Liu, Feng; Li, Xin

    2017-12-01

    Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
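
    The least power estimation (LPE) underlying the p-normal methods can be illustrated as a one-dimensional minimization of the sum of absolute deviations raised to the power p; the sketch below is a generic illustration with an arbitrary p and toy data, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def least_power_estimate(samples, p=1.5):
    """Least power estimation: the location mu minimizing sum(|x - mu|^p).
    p = 2 recovers the arithmetic mean and p = 1 the median; the value of
    p used here is purely illustrative."""
    samples = np.asarray(samples, dtype=float)
    result = minimize_scalar(
        lambda mu: np.sum(np.abs(samples - mu) ** p),
        bounds=(samples.min(), samples.max()),
        method="bounded",
    )
    return result.x

soil_moisture = [0.18, 0.21, 0.19, 0.45, 0.20]  # toy multi-point sample with an outlier
print(least_power_estimate(soil_moisture, p=1.2))
```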

  1. Investigating Spatial Patterns of Persistent Scatterer Interferometry Point Targets and Landslide Occurrences in the Arno River Basin

    Directory of Open Access Journals (Sweden)

    Ping Lu

    2014-07-01

    Full Text Available Persistent Scatterer Interferometry (PSI) has been widely used for landslide studies in recent years. This paper investigated the spatial patterns of PSI point targets and landslide occurrences in the Arno River basin in Central Italy. The main purpose is to analyze whether spatial patterns of Persistent Scatterers (PS) can be recognized as indicators of landslide occurrences throughout the whole basin. The bivariate K-function was employed to assess spatial relationships between PS and landslides. The PSI point targets were acquired from almost 4 years (from March 2003 to January 2007) of RADARSAT-1 images. The landslide inventory was collected from 15 years (from 1992–2007) of surveying and mapping data, mainly including remote sensing data, topographic maps and field investigations. The proposed approach is able to assess spatial patterns between a variety of PS and landslides, in particular, to understand if PSI point targets are spatially clustered (spatial attraction) or randomly distributed (spatial independency) on various types of landslides across the basin. Additionally, the degree and scale distances of PS clustering on a variety of landslides can be characterized. The results rejected the null hypothesis that PSI point targets appear to cluster similarly on four types of landslides (slides, flows, falls and creeps) in the Arno River basin. Significant influence of PS velocities and acquisition orbits can be noticed on detecting landslides with different states of activities. Although the assessment may be influenced by the quality of the landslide inventory and Synthetic Aperture Radar (SAR) images, the proposed approach is expected to provide guidelines for studies trying to detect and investigate landslide occurrences at a regional scale through spatial statistical analysis of PS, for which an advanced understanding of the impact of scale distances on landslide clustering is fundamentally needed.
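
    A naive bivariate Ripley's K estimate (without edge correction), of the kind used above to test spatial attraction between PS targets and landslides, can be sketched as follows; the inputs, distance scaling and lack of correction are illustrative assumptions rather than the study's estimator.

```python
import numpy as np

def bivariate_k(points_a, points_b, r_values, area):
    """Naive bivariate Ripley's K (no edge correction): for each distance r,
    the mean number of type-B points within r of a type-A point, divided by
    the intensity of B. points_a and points_b are (N, 2) coordinate arrays."""
    lambda_b = len(points_b) / area
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return np.array([(d < r).sum() / len(points_a) / lambda_b for r in r_values])
```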

  2. Evidence of Territoriality and Species Interactions from Spatial Point-Pattern Analyses of Subarctic-Nesting Geese

    Science.gov (United States)

    Reiter, Matthew E.; Andersen, David E.

    2013-01-01

    Quantifying spatial patterns of bird nests and nest fate provides insights into processes influencing a species' distribution. At Cape Churchill, Manitoba, Canada, recent declines in breeding Eastern Prairie Population Canada geese (Branta canadensis interior) have coincided with increasing populations of nesting lesser snow geese (Chen caerulescens caerulescens) and Ross's geese (Chen rossii). We conducted a spatial analysis of point patterns using Canada goose nest locations and nest fate, and lesser snow goose nest locations at two study areas in northern Manitoba with different densities and temporal durations of sympatric nesting Canada and lesser snow geese. Specifically, we assessed (1) whether Canada geese exhibited territoriality and at what scale and nest density; and (2) whether spatial patterns of Canada goose nest fate were associated with the density of nesting lesser snow geese as predicted by the protective-association hypothesis. Between 2001 and 2007, our data suggest that Canada geese were territorial at the scale of nearest neighbors, but were aggregated when considering overall density of conspecifics at slightly broader spatial scales. The spatial distribution of nest fates indicated that lesser snow goose nest proximity and density likely influence Canada goose nest fate. Our analyses of spatial point patterns suggested that continued changes in the distribution and abundance of breeding lesser snow geese on the Hudson Bay Lowlands may have impacts on the reproductive performance of Canada geese, and subsequently the spatial distribution of Canada goose nests. PMID:24312520

  3. Specification of parameters for development of a spatial database for drought monitoring and famine early warning in the African Sahel

    Science.gov (United States)

    Rochon, Gilbert L.

    1989-01-01

    Parameters were described for a spatial database to facilitate drought monitoring and famine early warning in the African Sahel. The proposed system, referred to as the African Drought and Famine Information System (ADFIS), is ultimately recommended for implementation with the NASA/FEMA Spatial Analysis and Modeling System (SAMS), a GIS/Dynamic Modeling software package currently under development. SAMS is derived from FEMA's Integrated Emergency Management Information System (IEMIS) and the Pacific Northwest Laboratory's/Engineering Topographic Laboratory's Airland Battlefield Environment (ALBE) GIS. SAMS is primarily intended for disaster planning and resource management applications within developing countries. Sources of data for the system would include the Developing Economics Branch of the U.S. Dept. of Agriculture, the World Bank, Tulane University School of Public Health and Tropical Medicine's Famine Early Warning Systems (FEWS) Project, the USAID's Foreign Disaster Assistance Section, the World Resources Institute, the World Meteorological Institute, the USGS, the UNFAO, UNICEF, and the United Nations Disaster Relief Organization (UNDRO). Satellite imagery would include decadal AVHRR imagery and Normalized Difference Vegetation Index (NDVI) values from 1981 to the present for the African continent and selected Landsat scenes for the Sudan pilot study. The system is initially conceived for the MicroVAX 2/GPX, running VMS. To facilitate comparative analysis, a global time-series database (1950 to 1987) is included for a basic set of 125 socio-economic variables per country per year. A more detailed database for the Sahelian countries includes soil type, water resources, agricultural production, agricultural imports and exports, food aid, and consumption. A pilot dataset for the Sudan with over 2,500 variables from the World Bank's ANDREX system also includes epidemiological data on the incidence of kwashiorkor, marasmus, other nutritional deficiencies, and

  4. Mapping the impacts of thermoelectric power generation: a global, spatially explicit database

    Science.gov (United States)

    Raptis, Catherine; Pfister, Stephan

    2017-04-01

    Thermoelectric power generation is associated with environmental pressures resulting from emissions to air and water, as well as water consumption. The need to achieve global coverage in related studies has become pressing in view of climate change. At the same time, the ability to quantify impacts from power production on a high resolution remains pertinent, given their highly regionalized nature, particularly when it comes to water-related impacts. Efforts towards global coverage have increased in recent years, but most work on the impacts of global electricity production presents a coarse geographical differentiation. Over the past few years we have begun a concerted effort to create and make available a global georeferenced inventory of thermoelectric power plant operational characteristics and emissions, by modelling the relevant processes on the highest possible level: that of a generating unit. Our work extends and enhances a commercially available global power plant database, and so far includes:
    - Georeferencing the generating units and populating the gaps in their steam properties.
    - Identifying the cooling system for 92% of the global installed thermoelectric power capacity.
    - Using the completed steam property data, along with local environmental temperature data, to systematically solve the Rankine cycle for each generating unit, involving: i) distinguishing between simple, reheat, and cogenerative cycles, and accounting for particularities in nuclear power cycles; ii) accounting for the effect of different cooling systems (once-through, recirculating (wet tower), dry cooling) on the thermodynamic cycle.
    One of the direct outcomes of solving the Rankine cycle is the cycle efficiency, an indispensable parameter in any study related to power production, including the quantification of air emissions and water consumption. Another direct output, for those units employing once-through cooling, is the rate of heat rejection to water, which can lead to
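
    The cycle efficiency obtained from solving the Rankine cycle reduces to simple enthalpy differences once the four state points are fixed; the sketch below shows that arithmetic with purely illustrative enthalpy values, not data from the inventory described in this record.

```python
def rankine_efficiency(h_pump_out, h_turbine_in, h_turbine_out, h_condenser_out):
    """Thermal efficiency of a simple Rankine cycle from specific enthalpies
    in kJ/kg: (turbine work - pump work) / heat added in the boiler."""
    w_turbine = h_turbine_in - h_turbine_out
    w_pump = h_pump_out - h_condenser_out
    q_in = h_turbine_in - h_pump_out
    return (w_turbine - w_pump) / q_in

# Purely illustrative state points for a subcritical steam cycle (roughly 35 % efficiency)
print(rankine_efficiency(h_pump_out=200.0, h_turbine_in=3350.0,
                         h_turbine_out=2250.0, h_condenser_out=192.0))
```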

  5. Study on parallel and distributed management of RS data based on spatial database

    Science.gov (United States)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of earth-observing technology, remote sensing (RS) image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the large volumes of RS data stored at different nodes in a distributed environment, which places a heavy burden on the server. Second, there is no unified, standard and rational organization of multi-sensor RS data for storage and management, so much information is lost or omitted at storage time. To address these two problems, this paper puts forward a framework for a parallel and distributed RS image data management and storage system, built on a parallel background server and a distributed data management system. To this end, the paper studies the following key techniques and draws several conclusions. It proposes a solid index of "Pyramid, Block, Layer, Epoch" based on the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data across different resolutions, areas, bands and periods is achieved. For data storage, RS data are not divided into binary large objects stored in a conventional relational database system; instead, the data are reconstructed through the solid index mechanism, and a logical image database for the RS image data files is constructed. For the system architecture, the paper sets up a framework based on a parallel server composed of several commodity computers, in which the background process is divided into two parts: a common web process and a parallel process.
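
    The "Pyramid, Block, Layer, Epoch" solid index can be pictured as a composite tile key plus a mapping from pixel coordinates to blocks at a given pyramid level; the field names, block size and encoding below are assumptions for illustration, not the paper's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SolidIndexKey:
    """A 'Pyramid, Block, Layer, Epoch'-style key for an image tile; the
    field names and types are illustrative, not the paper's schema."""
    pyramid: int   # resolution level, 0 = full resolution
    block_x: int   # block column within that level
    block_y: int   # block row within that level
    layer: int     # spectral band
    epoch: str     # acquisition period, e.g. "2009-10"

def block_of(x_px, y_px, pyramid_level, block_size=512):
    """Map a full-resolution pixel coordinate to its block at a pyramid level."""
    scale = 2 ** pyramid_level
    return (x_px // scale) // block_size, (y_px // scale) // block_size

bx, by = block_of(20480, 8192, pyramid_level=2)
key = SolidIndexKey(pyramid=2, block_x=bx, block_y=by, layer=3, epoch="2009-10")
```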

  6. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Directory of Open Access Journals (Sweden)

    Taisuke Murata

    2016-10-01

    Full Text Available Abstract Background The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with a different detector design. Methods We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D) positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), the ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROI) for each reconstruction were quantified. Results The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50 % lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV in all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces dependence of the spatial resolution on the
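
    The FWHM measurement at the core of this comparison can be illustrated by interpolating the half-maximum crossings of a sampled one-dimensional point-source profile; this is a simplified stand-in for the NEMA procedure referenced above, with no claim to match it exactly.

```python
import numpy as np

def fwhm(positions, profile):
    """Full width at half maximum of a sampled 1D point-source profile,
    linearly interpolating the two half-maximum crossings. Assumes the
    profile drops below half maximum inside the sampled window."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    if left == 0 or right == len(profile) - 1:
        raise ValueError("peak not fully contained in the sampled window")
    x_left = np.interp(half, [profile[left - 1], profile[left]],
                       [positions[left - 1], positions[left]])
    x_right = np.interp(half, [profile[right + 1], profile[right]],
                        [positions[right + 1], positions[right]])
    return x_right - x_left
```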

  7. Spatial distribution patterns of plague hosts : point pattern analysis of the burrows of great gerbils in Kazakhstan

    NARCIS (Netherlands)

    Wilschut, Liesbeth I; Laudisoit, Anne; Hughes, Nelika K; Addink, Elisabeth A; de Jong, Steven M; Heesterbeek, Hans A P; Reijniers, Jonas; Eagle, Sally; Dubyanskiy, Vladimir M; Begon, Mike

    AIM: The spatial structure of a population can strongly influence the dynamics of infectious diseases, yet rarely is the underlying structure quantified. A case in point is plague, an infectious zoonotic disease caused by the bacterium Yersinia pestis. Plague dynamics within the Central Asian desert

  8. Linkage of the California Pesticide Use Reporting Database with spatial land use data for exposure assessment.

    Science.gov (United States)

    Nuckols, John R; Gunier, Robert B; Riggs, Philip; Miller, Ryan; Reynolds, Peggy; Ward, Mary H

    2007-05-01

    The State of California maintains a comprehensive Pesticide Use Reporting Database (CPUR). The California Department of Water Resources (CDWR) maps all crops in agricultural counties in California about once every 5 years. We integrated crop maps with CPUR to more accurately locate where pesticides are applied and evaluated the effects for exposure assessment. We mapped 577 residences and used the CPUR and CDWR data to compute two exposure metrics based on putative pesticide use within a 500-m buffer. For the CPUR metric, we assigned pesticide exposure to the residence proportionally for all square-mile Sections that intersected the buffer. For the CDWR metric, we linked CPUR crop-specific pesticide use to crops mapped within the buffer and assigned pesticide exposure. We compared the metrics for six pesticides: simazine, trifluralin (herbicides), dicofol, propargite (insecticides), methyl bromide, and metam sodium (fumigants). For all six pesticides we found good agreement (88-98%) as to whether the pesticide use was predicted. When we restricted the analysis to residences with reported pesticide use in Sections within 500 m, agreement was greatly reduced (35-58%). The CPUR metric estimates of pesticide use within 500 m were significantly higher than the CDWR metric for all six pesticides. Our findings may have important implications for exposure classification in epidemiologic studies of agricultural pesticide use using CPUR. There is a need to conduct environmental and biological measurements to ascertain which, if any, of these metrics best represent exposure.

  9. INHOMOGENEITY IN SPATIAL COX POINT PROCESSES – LOCATION DEPENDENT THINNING IS NOT THE ONLY OPTION

    Directory of Open Access Journals (Sweden)

    Michaela Prokešová

    2010-11-01

    Full Text Available In the literature on point processes, the by far most popular option for introducing inhomogeneity into a point process model is location dependent thinning (resulting in a second-order intensity-reweighted stationary point process). This produces a very tractable model and there are several fast estimation procedures available. Nevertheless, this model dilutes the interaction (or the geometrical structure) of the original homogeneous model in a special way. For Markov point processes, several alternative inhomogeneous models have been suggested and investigated in the literature. But this is not so for the Cox point processes, the canonical models for clustered point patterns. In this contribution we discuss several other options for defining inhomogeneous Cox point process models that result in point patterns with different types of geometric structure. We further investigate the possible parameter estimation procedures for such models.
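
    The location-dependent thinning construction that the paper takes as its point of departure can be sketched in a few lines: simulate a homogeneous Poisson process at a dominating rate and retain each point with probability proportional to a target intensity; everything below is a generic textbook construction, not one of the paper's alternative models.

```python
import numpy as np

def thinned_poisson(intensity, lmax, window=(0.0, 1.0, 0.0, 1.0), rng=None):
    """Inhomogeneous Poisson process on a rectangle by location-dependent
    thinning: simulate a homogeneous process at rate lmax and keep a point
    at (x, y) with probability intensity(x, y) / lmax (intensity <= lmax)."""
    rng = rng or np.random.default_rng()
    x0, x1, y0, y1 = window
    n = rng.poisson(lmax * (x1 - x0) * (y1 - y0))
    pts = np.column_stack([rng.uniform(x0, x1, n), rng.uniform(y0, y1, n)])
    keep = rng.uniform(0.0, lmax, n) < intensity(pts[:, 0], pts[:, 1])
    return pts[keep]

# e.g. intensity increasing from left to right over the unit square
pattern = thinned_poisson(lambda x, y: 200.0 * x, lmax=200.0)
```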

  10. Spatial database for intersections.

    Science.gov (United States)

    2015-08-01

    Deciding which intersections in the state of Kentucky warrant safety improvements requires a comprehensive inventory : with information on every intersection in the public roadway network. The Kentucky Transportation Cabinet (KYTC) : had previously c...

  11. Effective Generation and Update of a Building Map Database Through Automatic Building Change Detection from LiDAR Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2015-10-01

    Full Text Available Periodic building change detection is important for many applications, including disaster management. Building map databases need to be updated based on detected changes so as to ensure their currency and usefulness. This paper first presents a graphical user interface (GUI) developed to support the creation of a building database from building footprints automatically extracted from LiDAR (light detection and ranging) point cloud data. An automatic building change detection technique by which buildings are automatically extracted from newly-available LiDAR point cloud data and compared to those within an existing building database is then presented. Buildings identified as totally new or demolished are directly added to the change detection output. However, for part-building demolition or extension, a connected component analysis algorithm is applied, and for each connected building component, the area, width and height are estimated in order to ascertain if it can be considered as a demolished or new building-part. Using the developed GUI, a user can quickly examine each suggested change and indicate his/her decision to update the database, with a minimum number of mouse clicks. In experimental tests, the proposed change detection technique was found to produce almost no omission errors, and when compared to the number of reference building corners, it reduced the human interaction to 14% for initial building map generation and to 3% for map updating. Thus, the proposed approach can be exploited for enhanced automated building information updating within a topographic database.
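
    The connected component screening step for part-building changes can be sketched on a rasterized change mask, keeping only components whose area and width pass thresholds; the thresholds, cell size and width heuristic below are assumptions for illustration, not the values used by the authors.

```python
import numpy as np
from scipy import ndimage

def candidate_changed_parts(change_mask, cell_size=0.5, min_area=10.0, min_width=2.0):
    """Connected-component screening of a rasterized building-change mask:
    keep components whose area and bounding-box width pass thresholds,
    roughly in the spirit of the part-demolition/extension test above."""
    labels, n_components = ndimage.label(change_mask)
    kept = []
    for component in range(1, n_components + 1):
        rows, cols = np.nonzero(labels == component)
        area = rows.size * cell_size ** 2
        width = (min(np.ptp(rows), np.ptp(cols)) + 1) * cell_size
        if area >= min_area and width >= min_width:
            kept.append(component)
    return labels, kept
```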

  12. Design of a Network Model on a Non-Spatial Database Engine for Distribution-Sector Electrical Network Maneuvering with PL SQL

    Directory of Open Access Journals (Sweden)

    I Made Sukarsa

    2009-06-01

    Full Text Available Many GIS applications have now been developed on top of non-spatial DBMS (Database Management System) engines, so that they can support client-server data delivery and handle large volumes of data. One such application has been developed to manage electrical network data. In practice, however, DBMS engines are not yet equipped with the ability to perform network analyses such as network maneuvering, which is the basis for developing many other applications. A network model for electrical network maneuvering, with its various particularities, therefore needs to be developed. Through several stages of research, a network model that can handle network maneuvering has been developed. The model was built with integration into the existing system in mind, minimizing changes to existing applications. Implementing it in PL SQL (Programmable Language Structured Query Language) offers several advantages, including system performance. The model has been tested for outage simulation and for computing changes in the network loading structure, and it can be extended to power system analyses such as losses, load flow and so on, so that ultimately the GIS application will be able to substitute for and overcome the weaknesses of widely used power system analysis applications such as EDSA (Electrical Design System Analysis).

  13. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    Science.gov (United States)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens

    2010-05-01

    LiDAR, also referred to as laser scanning, has proved to be an important tool for topographic data acquisition. Terrestrial laser scanning allows for accurate (several millimeter) and high resolution (several centimeter) data acquisition at distances of up to some hundred meters. By contrast, airborne laser scanning allows for acquiring homogeneous data for large areas, albeit with lower accuracy (decimeter) and resolution (some ten points per square meter) compared to terrestrial laser scanning. Hence, terrestrial laser scanning is preferably used for precise data acquisition of limited areas such as landslides or steep structures, while airborne laser scanning is well suited for the acquisition of topographic data of huge areas or even countrywide. Laser scanners acquire more or less homogeneously distributed point clouds. These points represent natural objects like terrain and vegetation and artificial objects like buildings, streets or power lines. Typical products derived from such data are geometric models such as digital surface models representing all natural and artificial objects and digital terrain models representing the geomorphic topography only. As the LiDAR technology evolves, the amount of data produced increases almost exponentially even in smaller projects. This poses a considerable challenge for the end user of the data: the experimenter has to have enough knowledge, experience and computer capacity in order to manage the acquired dataset and to derive geomorphologically relevant information from the raw or intermediate data products. Additionally, all this information might need to be integrated with other data like orthophotos. In all these cases, in general, interactive interpretation is necessary to determine geomorphic structures from such models to achieve effective data reduction. There is little support for the automatic determination of characteristic features and their statistical evaluation. From the lessons learnt from automated

  14. Optimal estimation of the intensity function of a spatial point process

    DEFF Research Database (Denmark)

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in case of a Poisson process. We discuss...... the numerical solution of the Fredholm integral equation and note that a special case of the approximated solution is equivalent to a quasi-likelihood for binary spatial data. The practical performance of the optimal estimating function is evaluated in a simulation study and a data example....

  15. Size matters: point pattern analysis biases the estimation of spatial properties of stomata distribution.

    Science.gov (United States)

    Naulin, Paulette I; Valenzuela, Gerardo; Estay, Sergio A

    2017-03-01

    Stomata distribution is an example of biological patterning. Formal methods used to study stomata patterning are generally based on point-pattern analysis, which assumes that stomata are points and ignores the constraints imposed by size on the placement of neighbors. The inclusion of size in the analysis requires the use of a null model based on finite-size object geometry. In this study, we compare the results obtained by analyzing samples from several species using point and disc null models. The results show that depending on the null model used, there was a 20% reduction in the number of samples classified as uniform; these results suggest that stomata patterning is not as general as currently reported. Some samples changed drastically from being classified as uniform to being classified as clustered. In samples of Arabidopsis thaliana, only the disc model identified clustering at high densities of stomata. This reinforces the importance of selecting an appropriate null model to avoid incorrect inferences about underlying biological mechanisms. Based on the results gathered here, we encourage researchers to abandon point-pattern analysis when studying stomata patterning; more realistic conclusions can be drawn from finite-size object analysis. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  16. Spatial considerations of snow chemistry as a non-point contamination source in Alpine watersheds

    International Nuclear Information System (INIS)

    Elder, K.; Williams, M.; Dozier, J.

    1991-01-01

    Alpine watersheds act as temporary storage basins for large volumes of precipitation as snow. Monitoring these basins for the presence and effects of acid precipitation is important because these areas are often weakly buffered and sensitive to acidification. Study of these sensitive areas may provide early detection of trends resulting from anthropogenic atmospheric inputs. In an intensive study of an alpine watershed in the Sierra Nevada in 1987 and 1988, the authors carefully monitored snow distribution and chemistry through space and time. They found that the volume-weighted mean ionic concentrations within the snowpack did not vary greatly over the basin at peak accumulation. However, the distribution of total snow water equivalence (SWE) was highly variable spatially. Coefficients of variation (CV) for SWE lead to a corresponding high spatial variance in the chemical loading of their study basin. Their results show that to obtain accurate estimates of chemical loading they must measure the chemical and physical snow parameters at a resolution proportional to their individual variances. It is therefore necessary to combine many SWE measurements with fewer carefully obtained chemistry measurements. They used a classification method based on physical parameters to partition the basin into similar zones for estimation of SWE distribution. This technique can also be used for sample design

  17. UAV-based detection and spatial analyses of periglacial landforms on Demay Point (King George Island, South Shetland Islands, Antarctica)

    Science.gov (United States)

    Dąbski, Maciej; Zmarz, Anna; Pabjanek, Piotr; Korczak-Abshire, Małgorzata; Karsznia, Izabela; Chwedorzewska, Katarzyna J.

    2017-08-01

    High-resolution aerial images allow detailed analyses of periglacial landforms, which is of particular importance in light of climate change and resulting changes in active layer thickness. The aim of this study is to show possibilities of using UAV-based photography to perform spatial analysis of periglacial landforms on the Demay Point peninsula, King George Island, and hence to supplement previous geomorphological studies of the South Shetland Islands. Photogrammetric flights were performed using a PW-ZOOM fixed-winged unmanned aircraft vehicle. Digital elevation models (DEM) and maps of slope and contour lines were prepared in ESRI ArcGIS 10.3 with the Spatial Analyst extension, and three-dimensional visualizations in ESRI ArcScene 10.3 software. Careful interpretation of orthophoto and DEM, allowed us to vectorize polygons of landforms, such as (i) solifluction landforms (solifluction sheets, tongues, and lobes); (ii) scarps, taluses, and a protalus rampart; (iii) patterned ground (hummocks, sorted circles, stripes, nets and labyrinths, and nonsorted nets and stripes); (iv) coastal landforms (cliffs and beaches); (v) landslides and mud flows; and (vi) stone fields and bedrock outcrops. We conclude that geomorphological studies based on commonly accessible aerial and satellite images can underestimate the spatial extent of periglacial landforms and result in incomplete inventories. The PW-ZOOM UAV is well suited to gather detailed geomorphological data and can be used in spatial analysis of periglacial landforms in the Western Antarctic Peninsula region.

  18. Point-of-Care Healthcare Databases Are an Overall Asset to Clinicians, but Different Databases May Vary in Usefulness Based on Personal Preferences. A Review of: Chan, R. & Stieda, V. (2011). Evaluation of three point-of-care healthcare databases: BMJ Point-of-Care, Clin-eguide and Nursing Reference Centre. Health and Information Libraries Journal, 28(1), 50-58. doi: 10.1111/j.1471-1842.2010.00920.x

    Directory of Open Access Journals (Sweden)

    Carol D. Howe

    2011-01-01

    Full Text Available Objective – To evaluate the usefulness of three point-of-care healthcare databases (BMJ Point-of-Care, Clin-eguide, and Nursing Reference Centre) in clinical practice. Design – A descriptive study analyzing questionnaire results. Setting – Hospitals within Alberta, Canada's two largest health regions (at the time of this study), with a third health region submitting a small number of responses. Subjects – A total of 46 Alberta hospital personnel answered the questionnaire, including 19 clinicians, 7 administrators, 6 nurses, 1 librarian, 1 preceptor, and "some" project coordinators. Subjects were chosen using a non-probability sampling method. Methods – The researchers developed an online questionnaire consisting of 17 questions and posted it on the University of Calgary's Health Sciences Library and the Health Knowledge Network websites. The questions, in general, asked respondents how easy the databases were to search and use, whether the database content answered their clinical questions, and whether they would recommend the databases for future purchase. Most questions required a response for each of the three databases. The researchers collected quantitative data by using a Likert scale from 1 to 5, with 5 being the most positive answer and 1 being the most negative. They collected qualitative data by asking open-ended questions. Main Results – With regard to ease of searching, BMJ Point-of-Care (BMJ) received the greatest number of responses (71%) at level 5. A smaller number of respondents (56%) rated Nursing Reference Centre (NRC) at level 5. Clin-eguide received 59% of the responses at level 5, but it also received the greatest number of responses at the next highest level (level 4). Respondents rated all three databases similarly with regard to levels 1 and 2. Regarding how easy the resources were to learn, most respondents rated all three databases as easy to learn (BMJ, 77%; Clin-eguide, 72%; and NRC, 68%). Very few respondents

  19. [Spatial discharge characteristics and total load control of non-point source pollutants based on the catchment scale].

    Science.gov (United States)

    Wang, Xia-Hui; Lu, Jun; Zhang, Qing-Zhong; Wang, Bo; Yao, Rui-Hua; Zhang, Hui-Yuan; Huang, Feng

    2011-09-01

    Agricultural non-point source pollution is one of the major causes of water quality deterioration. Based on an analysis of the spatial discharge characteristics and intensity of the major pollutants from agricultural sources, establishing spatial management subzones for controlling agricultural non-point pollution and designing a plan for total load control of pollutants from each subzone is an important way to improve the efficiency of control measures. In this paper the Four Lake basin in Hubei Province is adopted as the case study region, and a systematic study of control countermeasures for agricultural non-point pollution at the catchment scale is carried out. The results show that in the Four Lake basin, the COD, total nitrogen, total phosphorus and ammonia nitrogen loads of the water environment are mainly caused by agricultural non-point pollution; these four kinds of non-point source pollutants account for 67.6%, 82.2%, 84.7% and 50.9%, respectively, of the total pollutant discharge in the basin. The analysis of the spatial discharge characteristics of non-point source pollutants in the Four Lake basin shows that the major contributing source regions are the four counties of Honghu, Jianli, Qianjiang and Shayang, where aquatic and livestock production are relatively developed. According to the spatial discharge characteristics of the pollutants and the evaluation of their discharge intensity, the Four Lake basin is divided into three agricultural non-point pollution management subzones: the Changhu upstream aquatic and livestock production pollution control subzone, the Four-lake trunk canal rural non-point source pollution control subzone, and the Honghu aquatic production pollution control subzone. Specific pollution control measures are put forward for each subzone. With a comprehensive consideration of the water quality amelioration and the

  20. Spatial reasoning with augmented points: Extending cardinal directions with local distances

    Directory of Open Access Journals (Sweden)

    Reinhard Moratz

    2012-12-01

    Full Text Available We present an approach for supplying existing qualitative direction calculi with a distance component to support fully fledged positional reasoning. The general underlying idea of augmenting points with local reference properties has already been applied in the OPRAm calculus. In this existing calculus, point objects are augmented with a local reference direction to obtain oriented points that are able to express relative direction using binary relations. We show how this approach can be extended to attach a granular distance concept to direction calculi such as the cardinal direction calculus or adjustable granularity calculi such as OPRAm or the Star calculus. We focus on the cardinal direction calculus and extend it to a multi-granular positional calculus called EPRAm. We provide a formal specification of EPRAm, including a composition table for EPRA2 automatically determined using real algebraic geometry. We also report on an experimental performance analysis of EPRA2 in the context of a topological map-learning task proposed for benchmarking qualitative calculi. Our results confirm that our approach of adding a relative distance component to existing calculi improves the performance in realistic tasks when using algebraic closure for consistency checking.

  1. Application of portable gas detector in point and scanning method to estimate spatial distribution of methane emission in landfill.

    Science.gov (United States)

    Lando, Asiyanthi Tabran; Nakayama, Hirofumi; Shimaoka, Takayuki

    2017-01-01

    Methane from landfills contributes to global warming and can pose an explosion hazard. To minimize these effects, emissions must be monitored. This study proposed the application of a portable gas detector (PGD) in point and scanning measurements to estimate the spatial distribution of methane emissions in landfills. The aims of this study were to identify the advantages and disadvantages of the point and scanning methods in measuring methane concentrations, determine the spatial distribution of methane emissions, examine the correlation between ambient methane concentration and methane flux, and estimate methane flux and emissions in landfills. The study was carried out in the Tamangapa landfill, Makassar city, Indonesia. Measurement areas were divided into a basic and an expanded area. In the point method, the PGD was held one meter above the landfill surface, whereas the scanning method used a PGD with a data logger mounted on a wire drawn between two poles. The point method was time efficient, needing only one person and eight minutes to measure a 400 m2 area, whereas the scanning method could capture many hot spot locations and needed 20 min. The results from the basic area showed that ambient methane concentration and flux had a significant correlation, which was used to estimate the spatial distribution of methane emissions in the expanded area with the Kriging method. The average estimated flux from the scanning method, 71.2 g m-2 d-1, was higher than the 38.3 g m-2 d-1 from the point method. Further, the scanning method could capture lower and higher values, which could be useful to evaluate and estimate the possible effects of uncontrolled emissions in a landfill. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A Spatial and Temporal Assessment of Non-Point Groundwater Pollution Sources, Tutuila Island, American Samoa

    Science.gov (United States)

    Shuler, C. K.; El-Kadi, A. I.; Dulaiova, H.; Glenn, C. R.; Fackrell, J.

    2015-12-01

    The quality of municipal groundwater supplies on Tutuila, the main island in American Samoa, is currently in question. A high vulnerability to contamination from surface activities has been recognized, and there is a strong need to clearly identify anthropogenic sources of pollution and quantify their influence on the aquifer. This study examines spatial relationships and time series measurements of nutrients and other tracers to identify predominant pollution sources and determine the water quality impacts of the island's diverse land uses. Elevated groundwater nitrate concentrations are correlated with areas of human development; however, the mixture of residential and agricultural land use in this unique village-based agrarian setting makes specific source identification difficult using traditional geospatial analysis. Spatial variation in anthropogenic impact was assessed by linking NO3- concentrations and δ15N(NO3) from an extensive groundwater survey to land-use types within well capture zones and groundwater flow paths developed with MODFLOW, a numerical groundwater model. Land-use types were obtained from high-resolution GIS data and compared to water quality results with multiple-regression analysis to quantify the impact that different land uses have on water quality. In addition, historical water quality data and new analyses of δD and δ18O in precipitation, groundwater, and mountain-front recharge waters were used to constrain the sources and mechanisms of contamination. Our analyses indicate that groundwater nutrient levels on Tutuila are controlled primarily by residential, not agricultural, activity. Also, a lack of temporal variation suggests that episodic pollution events are limited to individual water sources as opposed to the entire aquifer. These results are not only valuable for water quality management on Tutuila, but also provide insight into the sustainability of groundwater supplies on other islands with similar hydrogeology and land
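
    A stripped-down version of the multiple-regression step (relating land-use fractions within capture zones to groundwater nitrate) might look as follows; the land-use classes, sample size and numbers are hypothetical placeholders, not the Tutuila data.

        import numpy as np

        # Hypothetical land-use classes and synthetic capture-zone compositions;
        # the study's actual categories and data are not reproduced here.
        land_use = ["residential", "agriculture", "forest"]
        rng = np.random.default_rng(1)
        fractions = rng.dirichlet(alpha=[2, 1, 3], size=40)      # 40 hypothetical wells
        true_effect = np.array([4.0, 1.5, 0.2])                  # mg/L NO3-N per unit fraction
        no3 = fractions @ true_effect + rng.normal(0, 0.3, 40)   # synthetic observations

        # Ordinary least squares; no separate intercept because the fractions sum to one.
        coef, *_ = np.linalg.lstsq(fractions, no3, rcond=None)
        for name, b in zip(land_use, coef):
            print(f"{name:12s} {b:5.2f} mg/L per unit land-use fraction")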

  3. Spatial point pattern analysis of human settlements and geographical associations in eastern coastal China - a case study.

    Science.gov (United States)

    Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping

    2014-03-10

    Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley's K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to characteristics such as area, density and shape, should be taken into consideration in the future, and that land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be key to human settlement studies and planning.
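
    A minimal sketch of the Ripley's K / Monte Carlo procedure on a rectangular window (no edge correction, synthetic points) is shown below; the study's GIS-based workflow is naturally richer.

        import numpy as np

        def ripley_k(points, r_values, area):
            """Naive Ripley's K estimate (no edge correction) on a rectangular window."""
            n = len(points)
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)                      # exclude self-pairs
            lam = n / area                                   # point intensity
            return np.array([(d < r).sum() / (n * lam) for r in r_values])

        def csr_envelope(n, width, height, r_values, n_sim=99, rng=None):
            """Pointwise min/max envelope of K under complete spatial randomness."""
            rng = rng or np.random.default_rng(42)
            sims = np.array([ripley_k(rng.uniform((0, 0), (width, height), size=(n, 2)),
                                      r_values, width * height) for _ in range(n_sim)])
            return sims.min(axis=0), sims.max(axis=0)

        # Synthetic "settlement" points in a 10 x 10 window (units are arbitrary here).
        width, height = 10.0, 10.0
        rng = np.random.default_rng(0)
        settlements = rng.uniform((0, 0), (width, height), size=(120, 2))
        r = np.linspace(0.2, 2.5, 12)
        k_obs = ripley_k(settlements, r, width * height)
        k_lo, k_hi = csr_envelope(len(settlements), width, height, r)
        print("scales flagged as clustered:", r[k_obs > k_hi])   # empty for a CSR pattern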

  4. Spatial Point Pattern Analysis of Human Settlements and Geographical Associations in Eastern Coastal China — A Case Study

    Science.gov (United States)

    Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping

    2014-01-01

    Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley’s K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to characteristics such as area, density and shape, should be taken into consideration in the future, and that land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be key to human settlement studies and planning. PMID:24619117

  5. Spatial Point Pattern Analysis of Human Settlements and Geographical Associations in Eastern Coastal China — A Case Study

    Directory of Open Access Journals (Sweden)

    Zhonghao Zhang

    2014-03-01

    Full Text Available Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley’s K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to characteristics such as area, density and shape, should be taken into consideration in the future, and that land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be key to human settlement studies and planning.

  6. Tight-coupling of groundwater flow and transport modelling engines with spatial databases and GIS technology: a new approach integrating Feflow and ArcGIS

    Directory of Open Access Journals (Sweden)

    Ezio Crestaz

    2012-09-01

    Full Text Available Implementation of groundwater flow and transport numerical models is generally a challenging, time-consuming and financially demanding task, entrusted to specialized modelers and consulting firms. At a later stage, within clearly stated limits of applicability, these models are often expected to be made available to less specialized personnel to support the design and running of predictive simulations within more familiar environments than dedicated simulation systems. GIS systems coupled with spatial databases appear to be ideal candidates to address the problem above, due to their much wider diffusion and the availability of expertise. The current paper discusses the issue from a tight-coupling architecture perspective, aimed at the integration of spatial databases, GIS and numerical simulation engines, addressing both observed and computed data management, retrieval and spatio-temporal analysis issues. Observed data can be migrated to the central database repository and then used to set up transient simulation conditions in the background, at run time, while limiting additional complexity and integrity-failure risks such as data duplication during data transfer through proprietary file formats. Similarly, simulation scenarios can be set up in a familiar GIS system and stored to the spatial database for later reference. As the numerical engine is tightly coupled with the GIS, simulations can be run within the environment and the results themselves saved to the database. Further tasks, such as spatio-temporal analysis (i.e. for post-calibration auditing scopes), cartography production and geovisualization, can then be addressed using traditional GIS tools. Benefits of such an approach include more effective data management practices, integration and availability of modeling facilities in a familiar environment, and streamlining of spatial analysis processes and geovisualization requirements for the non-modelers community. Major drawbacks include limited 3D and time-dependent support in
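
    The "computed results back into the spatial database" side of such a coupling can be illustrated with plain SQL against a PostGIS-enabled database; the connection string, table and column names below are hypothetical, and the paper itself targets the ArcGIS/Feflow stack rather than the open-source one shown here.

        import psycopg2

        # Hypothetical connection and schema, to illustrate persisting simulated heads
        # as time-stamped point geometries in a spatial database (PostGIS shown here).
        conn = psycopg2.connect("dbname=model_db user=modeler")
        with conn, conn.cursor() as cur:
            cur.execute("""
                CREATE TABLE IF NOT EXISTS simulated_head (
                    node_id  integer,
                    sim_time timestamptz,
                    head_m   double precision,
                    geom     geometry(Point, 4326))""")
            cur.execute("""
                INSERT INTO simulated_head (node_id, sim_time, head_m, geom)
                VALUES (%s, %s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326))""",
                (42, "2012-09-01T00:00:00Z", 12.37, 11.25, 43.77))
            # Spatio-temporal analysis can then stay in SQL, e.g. the mean simulated
            # head within 500 m of a hypothetical observation well:
            cur.execute("""
                SELECT avg(head_m) FROM simulated_head
                WHERE ST_DWithin(geom::geography,
                                 ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography, 500)""",
                (11.25, 43.77))
            print(cur.fetchone())
        conn.close()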

  7. Local correlated sampling Monte Carlo calculations in the TFM neutronics approach for spatial and point kinetics applications

    Directory of Open Access Journals (Sweden)

    Laureau Axel

    2017-01-01

    Full Text Available These studies are performed in the general framework of transient coupled calculations with accurate neutron kinetics models. This kind of application requires modeling the influence of macroscopic cross-section evolution on the neutronics. Depending on the targeted accuracy, this feedback can be limited to the reactivity for point kinetics, or can take into account the redistribution of the power in the core for spatial kinetics. The local correlated sampling technique for Monte Carlo calculation presented in this paper has been developed for this purpose, i.e. estimating the influence on the neutron transport of a local variation of different parameters such as sodium density or the fuel Doppler effect. This method is associated with an innovative spatial kinetics model named Transient Fission Matrix, which condenses the time-dependent Monte Carlo neutronic response into Green functions. Finally, an accurate estimation of the feedback effects on these Green functions provides an on-the-fly prediction of the flux redistribution in the core, whatever the actual perturbation shape is during the transient. This approach is also used to estimate local feedback effects for point kinetics resolution.
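
    For reference, the point-kinetics end of the accuracy spectrum described above can be written down in a few lines; the one-delayed-group parameters and reactivity ramp below are illustrative and unrelated to the TFM method, which additionally captures the spatial flux redistribution that point kinetics ignores.

        from scipy.integrate import solve_ivp

        # Illustrative one-delayed-group kinetics parameters (not from the paper):
        beta, lam, Lambda = 0.0065, 0.08, 5.0e-5   # delayed fraction, decay const (1/s), generation time (s)

        def reactivity(t):
            """Assumed external reactivity: a 0.1 $ ramp over 2 s, then constant."""
            return 0.1 * beta * min(t / 2.0, 1.0)

        def point_kinetics(t, y):
            n, c = y                                # relative power, precursor concentration
            rho = reactivity(t)
            dn = (rho - beta) / Lambda * n + lam * c
            dc = beta / Lambda * n - lam * c
            return [dn, dc]

        y0 = [1.0, beta / (lam * Lambda)]           # critical steady-state initial condition
        sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, method="LSODA", max_step=0.01)
        print(f"relative power after 10 s: {sol.y[0, -1]:.3f}")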

  8. Assessing landscape and contaminant point-sources as spatial determinants of water quality in the Vermilion River System, Ontario, Canada.

    Science.gov (United States)

    Strangway, Carrie; Bowman, Michelle F; Kirkwood, Andrea E

    2017-10-01

    The Vermilion River and its major tributaries (VRMT) are located in the Vermilion watershed (4272 km²) in north-central Ontario, Canada. This watershed is not only dominated by natural land cover but also has a legacy of mining and other development activities. The VRMT receive various point (e.g., sewage effluent) and non-point (e.g., mining activity runoff) inputs, in addition to flow regulation features. Further development in the Vermilion watershed has been proposed, raising concerns about cumulative impacts on ecosystem health in the VRMT. Due to the lack of historical assessments of riverine health in the VRMT, a comprehensive suite of water quality parameters was collected monthly at 28 sites during the ice-free period of 2013 and 2014. Canadian water quality guidelines and objectives were not met for an assortment of water quality parameters, including nutrients and metals. This demonstrates that the VRMT is an impacted system with several pollution hotspots, particularly downstream of wastewater treatment facilities. Water quality throughout the river system appeared to be influenced by three distinct land-cover categories: forest, barren, and agriculture. Three spatial pathway models (geographical, topographical, and river network) were employed to assess the complex interactions between spatial pathways, stressors, and water quality condition. Topographical landscape analyses were performed at five different scales, and the strongest relationships between water quality and land use occurred at the catchment scale. Sites on the main stem of Junction Creek, a tributary impacted by industrial and urban development, had above-average concentrations for the majority of water quality parameters measured, including metals and nitrogen. The river network pathway (i.e., asymmetric eigenvector map (AEM)) and topographical feature (i.e., catchment land-use) models explained most of the variation in water quality (62.2%), indicating that they may be useful tools in

  9. Improving a spatial rainfall product using multiple-point geostatistical simulations and its effect on a national hydrological model.

    Science.gov (United States)

    Oriani, F.; Stisen, S.

    2016-12-01

    Rainfall amount is one of the most sensitive inputs to distributed hydrological models. Its spatial representation is of primary importance to correctly study the uncertainty of basin recharge and its propagation to the surface and underground circulation. We consider here the 10-km-grid rainfall product provided by the Danish Meteorological Institute as input to the National Water Resources Model of Denmark. Due to a drastic reduction in the rain gauge network in recent years (from approximately 500 stations in the period 1996-2006 to 250 in the period 2007-2014), the grid rainfall product, based on the interpolation of these data, is much less reliable. Consequently, the related hydrological model shows a significantly lower prediction power. To give a better estimation of spatial rainfall at the grid points far from ground measurements, we use the direct sampling technique (DS) [1], belonging to the family of multiple-point geostatistics. DS, already applied to rainfall and spatial variable estimation [2, 3], simulates a grid value by sampling a training data set where a similar data neighborhood occurs. In this way, complex statistical relations are preserved by generating spatial patterns similar to the ones found in the training data set. Using the reliable grid product from the period 1996-2006 as the training data set, we first test the technique by simulating part of this data set; then we apply the technique to the grid product of the period 2007-2014 and subsequently analyze the uncertainty propagation to the hydrological model. We show that DS can improve the reliability of the rainfall product by generating more realistic rainfall patterns, with a significant repercussion on the hydrological model. The reduction of rain gauge networks is a global phenomenon which has huge implications for hydrological model performance and the uncertainty assessment of water resources. Therefore, the presented methodology can potentially be used in many regions where
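
    The core of the direct sampling idea ("simulate a grid value by sampling a training data set where a similar data neighborhood occurs") can be sketched for a 1-D series as below; operational DS works on 2-D grids and includes refinements (random simulation path, acceptance thresholds, multivariate neighborhoods) that are omitted in this simplified, assumption-laden version.

        import numpy as np

        def direct_sampling_1d(train, sim, n_neigh=3, n_scan=200, rng=None):
            """Fill NaNs in `sim` by copying values from `train` at locations whose
            neighborhood best matches the already informed neighbors (simplified DS)."""
            rng = rng or np.random.default_rng(0)
            sim = sim.copy()
            for i in np.flatnonzero(np.isnan(sim)):
                informed = np.flatnonzero(~np.isnan(sim))
                near = informed[np.argsort(np.abs(informed - i))[:n_neigh]]
                lags, vals = near - i, sim[near]          # conditioning pattern geometry
                reach = int(np.max(np.abs(lags)))
                best, best_dist = np.nan, np.inf
                # Scan random training locations that can host the same geometry.
                for j in rng.integers(reach, len(train) - reach, size=n_scan):
                    dist = np.mean((train[j + lags] - vals) ** 2)
                    if dist < best_dist:
                        best, best_dist = train[j], dist
                sim[i] = best
            return sim

        # Synthetic example: a "reliable" period as training data and a sparse recent
        # series with gaps to fill (placeholder numbers, not the DMI grid product).
        rng = np.random.default_rng(3)
        train = rng.gamma(0.6, 4.0, 2000)                 # rainfall-like training series
        recent = rng.gamma(0.6, 4.0, 300)
        gaps = rng.random(300) < 0.4
        filled = direct_sampling_1d(train, np.where(gaps, np.nan, recent))
        print(recent[gaps].mean(), filled[gaps].mean())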

  10. Spatial and temporal characterization of SCIAMACHY limb pointing errors during the first three years of the mission

    Directory of Open Access Journals (Sweden)

    C. von Savigny

    2005-01-01

    Full Text Available Limb scattering retrievals of atmospheric minor constituent profiles require highly accurate knowledge of the tangent heights during the measurements. The limb scattering measurements of the Scanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY) on Envisat are affected by tangent height errors of up to 2 km. This contribution provides a summary of the temporal and spatial variation of the SCIAMACHY limb pointing errors during the first three years of the SCIAMACHY mission. The tangent height errors are retrieved from the limb measurements in the UV-B spectral range. A seasonal modulation of the monthly mean tangent height offsets is identified with amplitudes of 800 m (220 m) before (after) the improvement of the Envisat orbit propagator model in December 2003. Even after the December 2003 orbit model improvement a constant offset component of about 1 km is present. Furthermore, pointing discontinuities are identified that coincide with the daily updates of the on-board orbit propagator model. In order to reduce the errors in ozone profile retrievals caused by pointing errors to less than 5%, the tangent heights have to be known to within 250 m.

  11. Predicting wildfire occurrence distribution with spatial point process models and its uncertainty assessment: a case study in the Lake Tahoe Basin, USA

    Science.gov (United States)

    Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner

    2015-01-01

    Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...

  12. Producing Distribution Maps for a Spatially-Explicit Ecosystem Model Using Large Monitoring and Environmental Databases and a Combination of Interpolation and Extrapolation

    Directory of Open Access Journals (Sweden)

    Arnaud Grüss

    2018-01-01

    Full Text Available To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by

  13. Quantitative assessment of spatial sound distortion by the semi-ideal recording point of a hear-through device

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Christensen, Flemming; Hammershøi, Dorte

    2013-01-01

    A hear-through device combines a microphone and earphone in an earpiece so that when worn, one per ear, it can work as an acoustically transparent system allowing for simultaneous individual binaural recording and playback of the real sound field at the ears. Recognizing the blocked entrance...... to the ear canal as the ideal recording point – i.e. all directional properties of the incident sound field are recorded without distortion - it is critical for such device to be sufficiently small so that it can be completely inserted into the ear canal. This is not always feasible and the device may...... stretch out from the ideal position and thus distort the captured spatial information. Here we present measurements that quantify by how much the directional properties of the sound field are distorted by semi-ideal hear-through prototypes built by mounting miniature microphones on the outer part...

  14. CAE meteorological database for the PC CREAM program. Atmospheric dilution factor in different points of the CAE (Centro Atomico Ezeiza) and of the argentine nuclear power plants

    International Nuclear Information System (INIS)

    Amado, Valeria A.

    2007-01-01

    In the first part of this work, the EZEIZA.MET file, containing the meteorological database of the surroundings of the Ezeiza Atomic Center, is prepared and incorporated into the library of the PC CREAM program. This program was developed by the National Radiological Protection Board and the European Union. Information provided by the National Meteorological Service was used, corresponding to the Ezeiza Meteorological Station during the period 1996-2005. In the second part, a methodology to estimate the atmospheric dilution factor at a point using the PLUME module of PC CREAM is presented. The developed methodology was used to estimate the dilution factor at points close to the Ezeiza Atomic Center and the nuclear power plants Atucha I and Embalse. In the first case the file with the generated meteorological database is used, whereas for the nuclear power plants the already existing ATUCHALO.MET and EMBALSE.MET files are used. The dilution factors obtained are compared with those obtained in previous work. The proposed methodology is a useful tool to estimate dilution factors in a simple and systematic way, and at the same time allows the meteorological information used in the estimations to be updated. (author) [es

  15. Temporal-spatial distribution of non-point source pollution in a drinking water source reservoir watershed based on SWAT

    Directory of Open Access Journals (Sweden)

    M. Wang

    2015-05-01

    Full Text Available The conservation of drinking water source reservoirs has a close relationship with regional economic development and people's livelihoods. Research on the non-point pollution characteristics of their watersheds is crucial for reservoir security. The Tang Pu Reservoir watershed was selected as the study area. A non-point pollution model of the Tang Pu Reservoir was established based on the SWAT (Soil and Water Assessment Tool) model. The model was adjusted and used to analyse the temporal-spatial distribution patterns of total nitrogen (TN) and total phosphorus (TP). The results showed that the losses of TN and TP in the reservoir watershed were related to precipitation in the flood season, and the annual changes showed an "M" shape. It was found that the loss contributions of TN and TP accounted for 84.5% and 85.3% in high flow years, 70.3% and 69.7% in low flow years, and 62.9% and 63.3% in normal flow years, respectively. The TN and TP mainly arise from Wangtan town, Gulai town, and Wangyuan town, etc. In addition, it was found that the sources of TN and TP showed consistency in space.

  16. Spatial resolution and image qualities of Zr-89 on Siemens Biograph TruePoint PET/CT.

    Science.gov (United States)

    Lee, Young Sub; Kim, Jin Su; Kim, Jung Young; Kim, Byung Il; Lim, Sang Moo; Kim, Hee-Joung

    2015-02-01

    Zirconium-89 (t(1/2)=78.41 hours) is an ideal metallic radioisotope for immuno-positron emission tomography (PET), given that its physical half-life closely matches the biological half-life of monoclonal antibodies. In this study, the authors measured the spatial resolution and image quality of Zr-89 PET and compared the results against those obtained using F-18 PET, which is widely regarded as the gold standard for comparison of imaging characteristics. The spatial resolution and image qualities of Zr-89 were measured on the Siemens Biograph Truepoint TrueV PET/CT scanner, partly according to NEMA NU2-2007 standards. For spatial resolution measurement, the Zr-89 point source was located at the center of the axial field of view (FOV) and offset 1/4 axial FOV from the center. For image quality measurements, an NEMA IEC Phantom was used. The NEMA IEC Phantom consists of six hot spheres that were filled with Zr-89 solution. Spatial resolution and image quality (%contrast, %background variability [BV], and source to background ratio [SBR]) were assessed to compare the imaging characteristics of F-18 with those of Siemens Biograph Truepoint TrueV. The transverse and axial spatial resolutions at 1 cm were 4.5 and 4.7 mm for Zr-89, respectively. The %contrast of Zr-89 was 25.5% for the smallest 10 mm sized sphere and 89.8% for the largest 37 mm sized sphere, and for F-18, it was 32.5% for the smallest 10 mm sized sphere and 103.9% for the largest 37 mm sized sphere using the ordered subset expectation maximization (OSEM) reconstruction method. The %BV of F-18 PET was 6.4% for the smallest 10 mm sized sphere and 3.5% for the largest 37 mm sized sphere using the OSEM reconstruction. The SBR of Zr-89 was 1.8 for the smallest 10 mm sized sphere and 3.7 for the largest 37 mm sized sphere, and for F-18, it was 2.0 for the smallest 10 mm sized sphere and 4.1 for the largest 37 mm sized sphere using the OSEM reconstruction method. This study assessed Zr-89

  17. Spatial analysis and mapping of malaria risk in Malawi using point-referenced prevalence of infection data

    Directory of Open Access Journals (Sweden)

    Kazembe Lawrence N

    2006-09-01

    Full Text Available Abstract. Background: Current malaria control initiatives aim at reducing malaria burden by half by the year 2010. Effective control requires evidence-based utilisation of resources. Characterizing spatial patterns of risk, through maps, is an important tool to guide control programmes. To this end an analysis was carried out to predict and map malaria risk in Malawi using empirical data with the aim of identifying areas where the greatest effort should be focussed. Methods: Point-referenced prevalence of infection data for children aged 1–10 years were collected from published and grey literature and geo-referenced. Model-based geostatistical methods were applied to analyze and predict malaria risk in areas where data were not observed. Topographical and climatic covariates were added to the model for risk assessment and improved prediction. A Bayesian approach was used for model fitting and prediction. Results: Bivariate models showed a significant association of malaria risk with elevation, annual maximum temperature, rainfall and potential evapotranspiration (PET). However, in the prediction model the spatial distribution of malaria risk was associated with elevation, and marginally with maximum temperature and PET. The resulting map broadly agreed with expert opinion about the variation of risk in the country, and further showed marked variation even at the local level. High risk areas were in the low-lying lake shore regions, while low risk was along the highlands in the country. Conclusion: The map provided an initial description of the geographic variation of malaria risk in Malawi, and might help in the choice and design of interventions, which is crucial for reducing the burden of malaria in Malawi.

  18. Combining neural network models to predict spatial patterns of airborne pollutant accumulation in soils around an industrial point emission source.

    Science.gov (United States)

    Dimopoulos, Ioannis F; Tsiros, Ioannis X; Serelis, Konstantinos; Chronopoulou, Aikaterini

    2004-12-01

    Neural networks (NNs) have the ability to model a wide range of complex nonlinearities. A major disadvantage of NNs, however, is their instability, especially under conditions of sparse, noisy, and limited data sets. In this paper, different combining network methods are used to benefit from the existence of local minima and from the instabilities of NNs. A nonlinear k-fold cross-validation method is used to test the performance of the various networks and also to develop and select a set of networks that exhibits a low correlation of errors. The various NN models are applied to estimate the spatial patterns of atmospherically transported and deposited lead (Pb) in soils around an historical industrial air emission point source. It is shown that the resulting ensemble networks consistently give superior predictions compared with the individual networks because, for the ensemble networks, R2 values were found to be higher than 0.9 while, for the contributing individual networks, values for R2 ranged between 0.35 and 0.85. It is concluded that combining networks can be adopted as an important component in the application of artificial NN techniques in applied air quality studies.
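
    A schematic version of the combining strategy, using scikit-learn networks scored by k-fold cross-validation and a simple keep-the-best selection (the paper's error-correlation criterion is not reproduced), with synthetic deposition-like data:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for soil Pb around a point source: concentration decays
        # with distance and varies with direction (numbers are illustrative only).
        rng = np.random.default_rng(0)
        X = rng.uniform(-2000, 2000, size=(120, 2))                       # coordinates (m)
        r = np.hypot(X[:, 0], X[:, 1])
        y = 300 * np.exp(-r / 800) * (1 + 0.3 * np.sin(np.arctan2(X[:, 1], X[:, 0])))
        y = y + rng.normal(0, 5, len(X))                                  # noisy Pb (mg/kg)

        # Candidate networks of different sizes/initialisations, scored by k-fold CV.
        candidates = [make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(h,), max_iter=3000,
                                                 random_state=s))
                      for h in (5, 10, 20) for s in (0, 1, 2)]
        scores = [cross_val_score(m, X, y, cv=5, scoring="r2").mean() for m in candidates]
        ensemble = [candidates[i].fit(X, y) for i in np.argsort(scores)[-3:]]   # keep 3 best

        # Ensemble prediction = average of the member predictions at new locations.
        new_sites = np.array([[500.0, 0.0], [0.0, 1500.0]])
        print(np.mean([m.predict(new_sites) for m in ensemble], axis=0))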

  19. Scalable population estimates using spatial-stream-network (SSN) models, fish density surveys, and national geospatial database frameworks for streams

    Science.gov (United States)

    Daniel J. Isaak; Jay M. Ver Hoef; Erin E. Peterson; Dona L. Horan; David E. Nagel

    2017-01-01

    Population size estimates for stream fishes are important for conservation and management, but sampling costs limit the extent of most estimates to small portions of river networks that encompass 100s–10 000s of linear kilometres. However, the advent of large fish density data sets, spatial-stream-network (SSN) models that benefit from nonindependence among samples,...

  20. Patches structure succession based on spatial point pattern features in semi-arid ecosystems of the water-wind erosion crisscross region

    Directory of Open Access Journals (Sweden)

    Hong-Min Hao

    2017-10-01

    Full Text Available Spatial point-pattern analysis can give insights into the underlying processes of patch succession and restoration. It is unclear whether inter-shrub competition determines patch succession. In this paper, we assessed the spatial patterns along patch succession using spatial statistics such as univariate and bivariate O-ring statistics, in the water-wind erosion crisscross region of the semi-arid ecosystems of the Loess Plateau. Point pattern analysis showed that there was no significant difference among the three positions on the slope. The small and middle shrub patches were aggregately distributed at small spatial scales, while the large shrub patches were regularly distributed and dead shrub patches were randomly distributed. The small shrub patches were aggregated around the middle and large patches, respectively, at fine scales. Because competition-induced regular distributions or negative relationships only become obvious when analyzing the shift towards less aggregated patterns, a time component should always be included in spatial pattern-based inference of competition to capture its perceptible effect. Our results revealed that regular, clumped and random shrub patch patterns could all occur, depending on the size of the shrub patches, and that the shrub patches are distributed in different ways and present variant spatial point pattern features along patch size succession.

  1. Reliability of the two-point measurement of the spatial correlation length from Gaussian-shaped fluctuating signals in fusion-grade plasmas

    Science.gov (United States)

    Kim, Jaewook; Nam, Y. U.; Lampert, M.; Ghim, Y.-C.

    2016-10-01

    A statistical method for estimating the spatial correlation lengths of Gaussian-shaped fluctuating signals from two measurement points is examined to quantitatively evaluate its reliability (variance) and accuracy (bias error). The standard deviation of the correlation value is analytically derived for randomly distributed Gaussian-shaped fluctuations satisfying stationarity and homogeneity, allowing us to evaluate the goodness of the two-point measurement for estimating the spatial correlation length as a function of the fluctuation-to-noise ratio, the size of the averaging time window and the ratio of the distance between the two measurement points to the true correlation length. Analytic results are confirmed with numerically generated synthetic data and real experimental data obtained with the KSTAR beam emission spectroscopy diagnostic. Our results can be applied to Gaussian-shaped fluctuating signals where a correlation length must be measured with only two measurement points.
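
    A toy numerical counterpart of the two-point estimate is easy to set up: generate two noisy signals with a prescribed correlation, compute the sample correlation over a finite window, and invert an assumed Gaussian correlation shape for the length; the specific shape C(d) = exp(-d²/(2L²)) and all numbers below are illustrative assumptions, not the paper's derivation.

        import numpy as np

        def estimate_corr_length(sig_a, sig_b, separation):
            """Invert an assumed Gaussian correlation shape C(d) = exp(-d^2 / (2 L^2))
            for the correlation length L, given signals from two measurement points."""
            rho = np.corrcoef(sig_a, sig_b)[0, 1]
            if rho <= 0:
                return np.nan                        # the assumed shape no longer applies
            return separation / np.sqrt(-2.0 * np.log(rho))

        # Synthetic fluctuations: true length L, channel separation d, finite averaging
        # window and additive detector noise (all values are illustrative).
        L_true, d, n_samples, noise = 3.0, 2.0, 5000, 0.3
        rho_true = np.exp(-d**2 / (2 * L_true**2))
        rng = np.random.default_rng(7)
        common = rng.standard_normal(n_samples)
        indep = rng.standard_normal(n_samples)
        sig_a = common + noise * rng.standard_normal(n_samples)
        sig_b = rho_true * common + np.sqrt(1 - rho_true**2) * indep \
                + noise * rng.standard_normal(n_samples)

        # Noise lowers the measured correlation and biases the estimate low, which is
        # the kind of effect the paper quantifies analytically.
        print(f"true L = {L_true}, estimated L = {estimate_corr_length(sig_a, sig_b, d):.2f}")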

  2. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection

    Directory of Open Access Journals (Sweden)

    Toofanny Rudesh D

    2011-08-01

    Full Text Available Abstract Background Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding and the work required grows exponentially with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use this as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Results Our implementation of spatial indexing speeds up the calculation of contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., 1.2 and 10.3 times faster). For a 'full' simulation trajectory (51 ns) spatial indexing reduces the calculation run-time by between 31% and 81% (between 1.4 and 5.3 times faster). Compression resulted in reduced table sizes but resulted in no significant difference in the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page level compression on both the data and indexes. Conclusions The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbor discovery
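
    Independently of the database layer, the spatial-hashing idea itself (bin atoms into cells of the cutoff size so that contact candidates are only searched in neighbouring cells) can be sketched as follows; the coordinates and cutoff are synthetic stand-ins for a trajectory frame.

        import itertools
        from collections import defaultdict
        import numpy as np

        def contacts_cell_list(coords, cutoff):
            """Return atom index pairs closer than `cutoff`, using a cell list (spatial hash)."""
            cell = np.floor(coords / cutoff).astype(int)       # bin size = contact cutoff
            buckets = defaultdict(list)
            for idx, key in enumerate(map(tuple, cell)):
                buckets[key].append(idx)
            pairs = set()
            for key, members in buckets.items():
                for off in itertools.product((-1, 0, 1), repeat=3):
                    neigh = (key[0] + off[0], key[1] + off[1], key[2] + off[2])
                    for i in members:
                        for j in buckets.get(neigh, ()):
                            if i < j and np.linalg.norm(coords[i] - coords[j]) < cutoff:
                                pairs.add((i, j))              # only 27 nearby cells checked
            return pairs

        # Synthetic "frame": 2000 atom positions in a 60 A box, 5.4 A contact cutoff.
        rng = np.random.default_rng(0)
        frame = rng.uniform(0.0, 60.0, size=(2000, 3))
        print(len(contacts_cell_list(frame, cutoff=5.4)))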

  3. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate into the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
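
    The nested structure of such a test can be illustrated on a much simpler parametric model than a spatial point process; the sketch below tests an exponential model with a Kolmogorov-Smirnov statistic, re-estimating the parameter inside each outer simulation. It is a schematic of the double Monte Carlo idea only, not the authors' spatial implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def estimate(x):                         # maximum-likelihood exponential rate
            return 1.0 / x.mean()

        def ks_stat(x, rate):                    # Kolmogorov-Smirnov distance to Exp(rate)
            x = np.sort(x)
            cdf = 1.0 - np.exp(-rate * x)
            i = np.arange(1, len(x) + 1) / len(x)
            return np.max(np.maximum(np.abs(i - cdf), np.abs(i - 1.0 / len(x) - cdf)))

        def plug_in_pvalue(x, n_inner=99):
            """Single-level Monte Carlo p-value with the parameter plugged in."""
            rate = estimate(x)
            t_obs = ks_stat(x, rate)
            t_sim = [ks_stat(y, estimate(y))
                     for y in (rng.exponential(1.0 / rate, len(x)) for _ in range(n_inner))]
            return (1 + np.sum(np.array(t_sim) >= t_obs)) / (n_inner + 1)

        def nested_mc_pvalue(x, n_outer=99, n_inner=99):
            """Outer Monte Carlo loop that calibrates the plug-in p-value."""
            p_obs = plug_in_pvalue(x, n_inner)
            rate = estimate(x)
            p_out = [plug_in_pvalue(rng.exponential(1.0 / rate, len(x)), n_inner)
                     for _ in range(n_outer)]
            return (1 + np.sum(np.array(p_out) <= p_obs)) / (n_outer + 1)

        data = rng.gamma(shape=2.0, scale=1.0, size=60)    # not exponential: should reject
        print(nested_mc_pvalue(data))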

  4. Spatial distribution of clinical computer systems in primary care in England in 2016 and implications for primary care electronic medical record databases: a cross-sectional population study.

    Science.gov (United States)

    Kontopantelis, Evangelos; Stevens, Richard John; Helms, Peter J; Edwards, Duncan; Doran, Tim; Ashcroft, Darren M

    2018-02-28

    UK primary care databases (PCDs) are used by researchers worldwide to inform clinical practice. These databases have been primarily tied to single clinical computer systems, but little is known about the adoption of these systems by primary care practices or their geographical representativeness. We explore the spatial distribution of clinical computing systems and discuss the implications for the longevity and regional representativeness of these resources. Cross-sectional study. English primary care clinical computer systems. 7526 general practices in August 2016. Spatial mapping of family practices in England in 2016 by clinical computer system at two geographical levels, the lower Clinical Commissioning Group (CCG, 209 units) and the higher National Health Service regions (14 units). Data for practices included numbers of doctors, nurses and patients, and area deprivation. Of 7526 practices, Egton Medical Information Systems (EMIS) was used in 4199 (56%), SystmOne in 2552 (34%) and Vision in 636 (9%). Great regional variability was observed for all systems, with EMIS having a stronger presence in the West of England, London and the South; SystmOne in the East and some regions in the South; and Vision in London, the South, Greater Manchester and Birmingham. PCDs based on single clinical computer systems are geographically clustered in England. For example, Clinical Practice Research Datalink and The Health Improvement Network, the most popular primary care databases in terms of research outputs, are based on the Vision clinical computer system, used by <10% of practices and heavily concentrated in three major conurbations and the South. Researchers need to be aware of the analytical challenges posed by clustering, and barriers to accessing alternative PCDs need to be removed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Combined point and distributed techniques for multidimensional estimation of spatial groundwater-stream water exchange in a heterogeneous sand bed-stream.

    Science.gov (United States)

    Gaona Garcia, J.; Lewandowski, J.; Bellin, A.

    2017-12-01

    Groundwater-stream water interactions in rivers determine water balances, as well as chemical and biological processes in the streambed at different spatial and temporal scales. Because gaining, neutral and losing conditions are difficult to identify and quantify, it is necessary to combine techniques with complementary capabilities and scale ranges. We applied this concept at a study site on the River Schlaube, East Brandenburg, Germany, a sand-bed stream with intense sediment heterogeneity and complex environmental conditions. In our approach, point techniques such as temperature profiles of the streambed together with vertical hydraulic gradients provide data for the estimation of fluxes between groundwater and surface water with the numerical model 1DTempPro. Among the distributed techniques, fiber-optic distributed temperature sensing identifies the spatial patterns of neutral, down- and up-welling areas by analysis of the changes in the thermal patterns at the streambed interface under certain flow conditions. The study finally links point and surface temperatures to provide a method for upscaling of fluxes. Point techniques provide point flux estimates with essential depth detail to infer streambed structures, while the results hardly represent the spatial distribution of fluxes caused by the heterogeneity of streambed properties. Fiber optics proved capable of providing spatial thermal patterns with enough resolution to observe distinct hyporheic thermal footprints at multiple scales. Relating the thermal footprint patterns and their temporal behavior to the flux results from the point techniques enabled the use of methods for spatial flux estimates. The lack of detailed information on the spatial distribution of the physical drivers restricts the spatial flux estimation to the application of the T-proxy method, whose highly uncertain results mainly provide coarse spatial flux estimates. The study concludes that the upscaling of groundwater-stream water interactions using

  6. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  7. Score, pseudo-score and residual diagnostics for goodness-of-fit of spatial point process models

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Rubak, Ege H.; Møller, Jesper

    theoretical support to the established practice of using functional summary statistics such as Ripley’s K-function, when testing for complete spatial randomness; and they provide new tools such as the compensator of the K-function for testing other fitted models. The results also support localisation methods...

  8. Spatial optimization of operationally relevant large fire confine and point protection strategies: Model development and test cases

    Science.gov (United States)

    Yu Wei; Matthew P. Thompson; Jessica R. Haas; Gregory K. Dillon; Christopher D. O’Connor

    2018-01-01

    This study introduces a large fire containment strategy that builds upon recent advances in spatial fire planning, notably the concept of potential wildland fire operation delineations (PODs). Multiple PODs can be clustered together to form a “box” that is referred to as the “response POD” (or rPOD). Fire lines would be built along the boundary of an rPOD to contain a...

  9. Spatially resolved synchrotron-induced X-ray fluorescence analyses of metal point drawings and their mysterious inscriptions

    International Nuclear Information System (INIS)

    Reiche, Ina; Radtke, Martin; Berger, Achim; Goerner, Wolf; Ketelsen, Thomas; Merchel, Silke; Riederer, Josef; Riesemeier, Heinrich; Roth, Michael

    2004-01-01

    Synchrotron-induced X-ray fluorescence (Sy-XRF) analysis was used to study the chemical composition of precious Renaissance silverpoint drawings. Drawings by famous artists such as Albrecht Duerer (1471-1528) and Jan van Eyck (approximately 1395-1441) must be investigated non-destructively. Moreover, extremely sensitive synchrotron- or accelerator-based techniques are needed since only small quantities of silver are deposited on the paper. New criteria for attributing these works to a particular artist could be established based on the analysis of the chemical composition of the metal points used. We illustrate how analysis can give new art historical information by means of two case studies. Two particular drawings, one of Albrecht Duerer, showing a profile portrait of his closest friend, 'Willibald Pirckheimer' (1503), and a second one attributed to Jan van Eyck, showing a 'Portrait of an elderly man', often named 'Niccolo Albergati', are the object of intense art historical controversy. Both drawings show inscriptions next to the figures. Analyses by Sy-XRF could reveal the same kind of silverpoint for the Pirckheimer portrait and its mysterious Greek inscription, contrary to the drawing by Van Eyck where at least three different metal points were applied. Two different types of silver marks were found in this portrait. Silver containing gold marks were detected in the inscriptions and over-subscriptions. This is the first evidence of the use of gold points for metal point drawings in the Middle Ages

  10. Quantitative assessment of spatial sound distortion by the semi-ideal recording point of a hear-through device

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Christensen, Flemming; Hammershøi, Dorte

    2013-01-01

    A hear-through device combines a microphone and earphone in an earpiece so that when worn, one per ear, it can work as an acoustically transparent system allowing for simultaneous individual binaural recording and playback of the real sound field at the ears. Recognizing the blocked entrance...... to the ear canal as the ideal recording point—i.e., all directional properties of the incident sound field are recorded without distortion—it is critical for such device to be sufficiently small so that it can be completely inserted into the ear canal. This is not always feasible and the device may stretch...... out from the ideal position and thus distort the captured spatial information. Here we present measurements that quantify by how much the directional properties of the sound field are distorted by semi-ideal hear-through prototypes built by mounting miniature microphones on the outer part of selected...

  11. Modeling spatial variability of sand-lenses in clay till settings using transition probability and multiple-point geostatistics

    DEFF Research Database (Denmark)

    Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik

    2010-01-01

    of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities......The construction of detailed geological models for heterogeneous settings such as clay till is important to describe transport processes, particularly with regard to potential contamination pathways. In low-permeability clay matrices transport is controlled by diffusion, but fractures and sand......-lenses facilitate local advective flow. In glacial settings these geological features occur at diverse extent, geometry, degree of deformation, and spatial distribution. The high level of heterogeneity requires extensive data collection, respectively detailed geological mapping. However, when characterising...

  12. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study.

    Science.gov (United States)

    Fitzpatrick, Benjamin R; Lamb, David W; Mengersen, Kerrie

    2016-01-01

    Modern soil mapping is characterised by the need to interpolate point referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regressions models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment.
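
    The LAR/LASSO selection step can be sketched with scikit-learn; the synthetic covariates below stand in for terrain and remote-sensing layers and do not reproduce the soil-carbon data.

        import numpy as np
        from sklearn.linear_model import LassoLarsCV

        # Synthetic setting: 60 point observations, 800 candidate covariate terms, only a
        # handful of which actually influence the response (soil carbon stand-in).
        rng = np.random.default_rng(0)
        n_obs, n_cov = 60, 800
        X = rng.standard_normal((n_obs, n_cov))
        true_coef = np.zeros(n_cov)
        true_coef[[3, 17, 250, 601]] = [2.0, -1.5, 1.0, 0.8]
        y = X @ true_coef + rng.normal(0, 0.5, n_obs)

        # LASSO fitted by the LARS algorithm, with the penalty chosen by cross-validation.
        model = LassoLarsCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_)
        print("selected covariate indices:", selected)
        print("their coefficients:", np.round(model.coef_[selected], 2))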

  13. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study

    Science.gov (United States)

    Lamb, David W.; Mengersen, Kerrie

    2016-01-01

    Modern soil mapping is characterised by the need to interpolate point referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regressions models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135

  14. Spatial Point Data Analysis of Geolocated Tweets in the First Day of Eid Al-Fitr 2017 in Java Island

    Science.gov (United States)

    Wibowo, T. W.

    2017-12-01

    Eid Al-Fitr is a worldwide Muslim feast day, which in Indonesia is generally accompanied by the tradition of going home (mudik). Demographic patterns generally shift at the time of the holiday, as some urban residents travel to their hometowns. The impact of this shift is a quite massive mobility of the population, generally accompanied by traffic congestion. The presence of location sensors on smartphone devices opens the opportunity to map the movement of the population in real time or near-real time, especially now that social media applications have been integrated with the capability to include location information. One of the popular social media applications in Indonesia is Twitter, which provides microblogging facilities to its users. This study aims to analyze the pattern of Geolocated Tweets data uploaded by Twitter users on the first day of Eid Al-Fitr (1 Syawal 1438H). Geolocated Tweets data mining was done by using the Streaming API (Application Programming Interface) and the Python programming language. There are 13,224 Geolocated Tweets points obtained at the location of the study. Various point data analysis techniques were applied to the collected data, such as density analysis, pattern analysis, and proximity analysis. In general, active Twitter users are dominated by residents in major cities, such as Jakarta, Bandung, Surabaya, Yogyakarta, Surakarta and Semarang. The results of the analysis can be used to determine whether the Geolocated Tweets data mined by the Streaming API method can be used to represent the movement of the population during mudik.
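
    One of the simplest proximity analyses mentioned above, an average nearest-neighbour (Clark-Evans) comparison against a random pattern, can be sketched as follows; the projected coordinates in metres, the rectangular study window and the synthetic clusters are all assumptions rather than the actual tweet data.

        import numpy as np
        from scipy.spatial import cKDTree

        def clark_evans_ratio(points, area):
            """Observed mean nearest-neighbour distance divided by its expectation under
            a random (CSR) pattern of the same intensity; R < 1 suggests clustering."""
            d, _ = cKDTree(points).query(points, k=2)     # k=2: nearest neighbour besides self
            observed = d[:, 1].mean()
            expected = 0.5 / np.sqrt(len(points) / area)
            return observed / expected

        # Synthetic stand-in for geolocated tweet points: dense "city" clusters plus a
        # scattered background in a 500 km x 200 km projected window (metres).
        rng = np.random.default_rng(0)
        centres = rng.uniform((0, 0), (500_000, 200_000), size=(6, 2))
        cities = np.vstack([c + rng.normal(0, 5_000, size=(400, 2)) for c in centres])
        background = rng.uniform((0, 0), (500_000, 200_000), size=(800, 2))
        tweets = np.vstack([cities, background])
        print(f"Clark-Evans R = {clark_evans_ratio(tweets, 500_000 * 200_000):.2f}")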

  15. The Hydrograph Analyst, an Arcview GIS Extension That Integrates Point, Spatial, and Temporal Data Provides A Graphical User Interface for Hydrograph Analysis

    International Nuclear Information System (INIS)

    Jones, M.L.; O'Brien, G.M.; Jones, M.L.

    2000-01-01

    The Hydrograph Analyst (HA) is an ArcView GIS 3.2 extension developed by the authors to analyze hydrographs from a network of ground-water wells and springs in a regional ground-water flow model. ArcView GIS integrates geographic, hydrologic, and descriptive information and provides the base functionality needed for hydrograph analysis. The HA extends ArcView's base functionality by automating data integration procedures and by adding capabilities to visualize and analyze hydrologic data. Data integration procedures were automated by adding functionality to the View document's Document Graphical User Interface (DocGUI). A menu allows the user to query a relational database and select sites which are displayed as a point theme in a View document. An "Identify One to Many" tool is provided within the View DocGUI to retrieve all hydrologic information for a selected site and display it in a simple and concise tabular format. For example, the display could contain various records from many tables storing data for one site. Another HA menu allows the user to generate a hydrograph for sites selected from the point theme. Hydrographs generated by the HA are added as hydrograph documents and accessed by the user with the Hydrograph DocGUI, which contains tools and buttons for hydrograph analysis. The Hydrograph DocGUI has a "Select By Polygon" tool used for isolating particular points on the hydrograph inside a user-drawn polygon or the user could isolate the same points by constructing a logical expression with the ArcView GIS "Query Builder" dialog that is also accessible in the Hydrograph DocGUI. Other buttons can be selected to alter the query applied to the active hydrograph. The selected points on the active hydrograph can be attributed (or flagged) individually or as a group using the "Flag" tool found on the Hydrograph DocGUI. The "Flag" tool activates a dialog box that prompts the user to select an attribute and "methods" or "conditions" that qualify

  16. High-spatial-resolution electron density measurement by Langmuir probe for multi-point observations using tiny spacecraft

    Science.gov (United States)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Trondsen, E.; Clausen, L. B. N.; Miloch, W. J.; Moen, J. I.

    2017-11-01

    A method for evaluating electron density using a single fixed-bias Langmuir probe is presented. The technique allows for electron density measurements with high spatio-temporal resolution, which can be effectively carried out by tiny spacecraft for multi-point observations in the ionosphere. The results are compared with the multi-needle Langmuir probe system, a scientific instrument developed at the University of Oslo comprising four fixed-bias cylindrical probes that allow small-scale plasma density structures to be characterized in the ionosphere. The technique proposed in this paper can comply with the requirements of future small-sized spacecraft, where cost-effectiveness, the limited space available on the craft, low power consumption and the capacity for data links need to be addressed. The first experimental results in both the plasma laboratory and space confirm the efficiency of the new approach. Moreover, detailed analyses of two challenging issues in deploying a DC Langmuir probe on a tiny spacecraft, namely the limited conductive area of the spacecraft and probe surface contamination, are presented in the paper. It is demonstrated that the limited conductive area, depending on the application, can either be of no concern for the experiment or can be resolved by mitigation methods. Surface contamination has a small impact on the performance of the developed probe.

  17. Geometric Computations On Indecisive Points

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Phillips, Jeff; Loffler, Maarten

    2011-01-01

    We study computing with indecisive point sets. Such points have spatial uncertainty where the true location is one of a finite number of possible locations. This data arises from probing distributions a few times or when the location is one of a few locations from a known database. In particular......, we study computing distributions of geometric functions such as the radius of the smallest enclosing ball and the diameter. Surprisingly, we can compute the distribution of the radius of the smallest enclosing ball exactly in polynomial time, but computing the same distribution for the diameter is #P...
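
    As a concrete (if exponential-time) illustration of the problem setting, the brute-force Python sketch below enumerates every realization of a small indecisive point set, assuming each point takes one of its candidate locations with equal probability, and tabulates the exact distribution of the diameter. It is only meant to make the object of study tangible, not to reproduce the polynomial-time algorithms of the paper.

        from collections import Counter
        from itertools import combinations, product
        from math import dist

        def diameter(points):
            """Largest pairwise Euclidean distance of a realized point set."""
            return max(dist(p, q) for p, q in combinations(points, 2))

        def diameter_distribution(indecisive_points):
            """indecisive_points: one list of candidate (x, y) locations per point."""
            counts = Counter()
            realizations = list(product(*indecisive_points))
            for realization in realizations:
                counts[round(diameter(realization), 6)] += 1
            total = len(realizations)
            return {d: c / total for d, c in sorted(counts.items())}

        if __name__ == "__main__":
            pts = [[(0, 0), (1, 0)], [(2, 0), (2, 1)], [(0, 2), (1, 2), (2, 2)]]
            for d, prob in diameter_distribution(pts).items():
                print(f"diameter {d:.3f} occurs with probability {prob:.3f}")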

  18. Spatial and temporal variations in non-point source losses of nitrogen and phosphorus in a small agricultural catchment in the Three Gorges Region.

    Science.gov (United States)

    Chen, Chenglong; Gao, Ming; Xie, Deti; Ni, Jiupai

    2016-04-01

    Losses of agricultural pollutants from small catchments are a major issue for water quality in the Three Gorges Region. Solutions are urgently needed. However, before pollutant losses can be controlled, information about spatial and temporal variations in pollutant losses is needed. The study was carried out in the Wangjiagou catchment, a small agricultural catchment in Fuling District, Chongqing, and the data about non-point source losses of nitrogen and phosphorus was collected here. Water samples were collected daily by an automatic water sampler at the outlets of two subcatchments from 2012 to 2014. Also, samples of surface runoff from 28 sampling sites distributed through the subcatchments were collected during 12 rainfall events in 2014. A range of water quality variables were analyzed for all samples and were used to demonstrate the variation in non-point losses of nitrogen and phosphorus over a range of temporal and spatial scales and in different types of rainfall in the catchment. Results showed that there was a significant linear correlation between the mass concentrations of total nitrogen (TN) and nitrate (NO3-N) in surface runoff and that the relationship was maintained with changes in time. Concentrations of TN and NO3-N peaked after fertilizer was applied to crops in spring and autumn; concentrations decreased rapidly after the peak values in spring but declined slowly in autumn. N and P concentrations fluctuated more and showed a greater degree of dispersion during the spring crop cultivation period than those in autumn. Concentrations of TN and NO3-N in surface runoff were significantly and positively correlated with the proportion of the area that was planted with corn and mustard tubers, but were negatively correlated with the proportion of the area taken up with rice and mulberry plantations. The average concentrations of TN and NO3-N in surface runoff reached the highest level from the sampling points at the bottom of the land used for corn

  19. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs.

  20. Scopus database: a review

    OpenAIRE

    Burnham, Judy F

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs.

  1. Evaluating spatial interaction of soil property with non‐point source pollution at watershed scale: The phosphorus indicator in Northeast China

    International Nuclear Information System (INIS)

    Ouyang, Wei; Huang, Haobo; Hao, Fanghua; Shan, Yushu; Guo, Bobo

    2012-01-01

    To better understand the spatial dynamics of non-point source (NPS) phosphorus loading in relation to soil properties at the watershed scale, integrating modeling and soil chemistry is crucial to ensure that the indicator functions properly and expresses the spatial interaction at two depths. Developments in distributed modeling have greatly enriched the availability of geospatial data analysis and the ability to assess the NPS pollution loading response to soil properties over larger areas. Soil samples collected on a 1.5 km grid at two depths were analyzed for eight parameters, which provided detailed spatial and vertical soil data under four main types of land use. The impacts of land-use conversion and agricultural practice on soil properties were first identified. Except for the slightly higher total potassium (TK) and chromium (Cr), the other six parameters had larger contents in the 20–40 cm layer than in the top 20 cm layer. The Soil and Water Assessment Tool was employed to simulate the loading of NPS phosphorus. Overlaying the results with the land-use distribution, it was found that the NPS phosphorus mainly comes from the subbasins dominated by upland and paddy rice. The linear correlations of the eight soil parameters at two depths with NPS phosphorus loading in the upland and paddy rice subbasins were compared, respectively. The correlations of available phosphorus (AP), total phosphorus (TP), total nitrogen (TN) and TK varied between the two depths, and can also be used to assess the loading. Soil with lower soil organic carbon (SOC) presented a significantly higher risk for NPS phosphorus loading, especially in agricultural areas. Principal Component Analysis showed that TP and zinc (Zn) in the topsoil and copper (Cu) and Cr in the subsurface can work as indicators. The analysis suggested that the application of soil property indicators is useful for assessing NPS phosphorus loss, which is promising for water safety in agricultural areas. -- Highlights: ► Spatial dynamics of NPS phosphorus pollution with soil
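
    The indicator screening step mentioned above can be illustrated with a short scikit-learn sketch: standardize the eight soil parameters and inspect the loadings of the leading principal components to see which parameters dominate. The random array stands in for the gridded soil-sampling data, so the printed loadings are meaningless; only the workflow is shown.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        PARAMS = ["AP", "TP", "TN", "TK", "SOC", "Zn", "Cu", "Cr"]

        # Placeholder for the real gridded samples (rows) x soil parameters (columns).
        X = np.random.default_rng(0).normal(size=(120, len(PARAMS)))

        pca = PCA(n_components=2)
        pca.fit(StandardScaler().fit_transform(X))

        for i, component in enumerate(pca.components_, start=1):
            top = sorted(zip(PARAMS, component), key=lambda t: abs(t[1]), reverse=True)[:3]
            share = pca.explained_variance_ratio_[i - 1]
            print(f"PC{i} ({share:.0%} of variance), strongest loadings: {top}")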

  2. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on existing database software in efficiency and analysis capability, with greater flexibility and better documentation. It offers flexibility in the type of data that can be stored, and retrieval is efficient across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  3. Prism adaptation aftereffects in stroke patients with spatial neglect: Pathological effects on subjective straight ahead but not visual open-loop pointing

    Science.gov (United States)

    Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Papps, Ben; Husain, Masud; Driver, Jon

    2008-01-01

    Prism adaptation to rightward optical shifts during visually guided pointing is considered a promising intervention in right-hemisphere stroke patients with left spatial neglect. Conventionally, prism adaptation is assessed via aftereffects, on subjective straight ahead (SSA) pointing with eyes closed; or by visual open-loop pointing (VOL), i.e. pointing to a visual target without seeing the hand. Previous data suggest indirectly that prism aftereffects in neglect patients may be larger (pathologically so) when assessed by SSA than by VOL. But these measures have never been directly compared within the same patients after identical prism exposure. Accordingly we implemented both measures here within the same group of 13 neglect patients and 13 controls. Prism aftereffects were much larger for SSA than VOL in neglect patients, falling outside the normative range only for SSA. This may arise because the SSA task can itself involve aspects of neglect that may be ameliorated by the prism intervention, hence showing abnormal changes after prisms. The extent of SSA change after prisms varied between patients, and correlated with improvements on a standard cancellation measure for neglect. The lesions of patients who did versus did not show neglect improvement immediately after prisms provide an initial indication that lack of improvement may potentially relate to cortical damage in right intraparietal sulcus and white matter damage in inferior parietal lobe and middle frontal gyrus. Future studies of possible rehabilitative impact from prisms upon neglect may need to consider carefully how to measure prism adaptation per se, separately from any impact of such adaptation upon manifestations of neglect. PMID:18083203

  4. Model for Semantically Rich Point Cloud Data

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced smart point cloud model developed here brings intelligence to point clouds via 3 connected meta-models, linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python with a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

  5. MODEL FOR SEMANTICALLY RICH POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    F. Poux

    2017-10-01

    Full Text Available This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced smart point cloud model developed here brings intelligence to point clouds via 3 connected meta-models, linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python with a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
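
    The kind of basic hybrid query the prototype supports can be sketched as follows. The paper only states that the prototype combines Python with a PostgreSQL database, so the table and column names, the use of psycopg2, and the PostGIS functions below are assumptions made for illustration.

        import psycopg2

        # Semantic filter (class label) combined with a 3D spatial filter (radius
        # around a query point); schema and PostGIS usage are assumed, not from the paper.
        HYBRID_QUERY = """
            SELECT p.id, p.class_label, ST_X(p.geom), ST_Y(p.geom), ST_Z(p.geom)
            FROM point_cloud AS p
            WHERE p.class_label = %s
              AND ST_3DDWithin(p.geom, ST_MakePoint(%s, %s, %s), %s)
        """

        def hybrid_query(conn, label, x, y, z, radius):
            """Return points of one semantic class lying within `radius` of (x, y, z)."""
            with conn.cursor() as cur:
                cur.execute(HYBRID_QUERY, (label, x, y, z, radius))
                return cur.fetchall()

        if __name__ == "__main__":
            conn = psycopg2.connect("dbname=pointclouds user=demo")   # hypothetical DSN
            rows = hybrid_query(conn, "chair", 10.0, 5.0, 1.0, 2.5)
            print(f"{len(rows)} points of class 'chair' near the query location")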

  6. A dose point kernel database using GATE Monte Carlo simulation toolkit for nuclear medicine applications: comparison with other Monte Carlo codes.

    Science.gov (United States)

    Papadimitroulas, Panagiotis; Loudos, George; Nikiforidis, George C; Kagadis, George C

    2012-08-01

    GATE is a Monte Carlo simulation toolkit based on the Geant4 package, widely used for many medical physics applications, including SPECT and PET image simulation and more recently CT image simulation and patient dosimetry. The purpose of the current study was to calculate dose point kernels (DPKs) using GATE, compare them against reference data, and finally produce a complete dataset of the total DPKs for the most commonly used radionuclides in nuclear medicine. Patient-specific absorbed dose calculations can be carried out using Monte Carlo simulations. The latest version of GATE extends its applications to Radiotherapy and Dosimetry. Comparison of the proposed method for the generation of DPKs was performed for (a) monoenergetic electron sources, with energies ranging from 10 keV to 10 MeV, (b) beta emitting isotopes, e.g., (177)Lu, (90)Y, and (32)P, and (c) gamma emitting isotopes, e.g., (111)In, (131)I, (125)I, and (99m)Tc. Point isotropic sources were simulated at the center of a sphere phantom, and the absorbed dose was stored in concentric spherical shells around the source. Evaluation was performed with already published studies for different Monte Carlo codes namely MCNP, EGS, FLUKA, ETRAN, GEPTS, and PENELOPE. A complete dataset of total DPKs was generated for water (equivalent to soft tissue), bone, and lung. This dataset takes into account all the major components of radiation interactions for the selected isotopes, including the absorbed dose from emitted electrons, photons, and all secondary particles generated from the electromagnetic interactions. GATE comparison provided reliable results in all cases (monoenergetic electrons, beta emitting isotopes, and photon emitting isotopes). The observed differences between GATE and other codes are less than 10% and comparable to the discrepancies observed among other packages. The produced DPKs are in very good agreement with the already published data, which allowed us to produce a unique DPKs dataset using
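
    The scoring geometry described above (a point isotropic source at the centre of a sphere, with absorbed dose stored in concentric spherical shells) can be post-processed as in the following sketch. The list of energy deposits and the shell width are illustrative; GATE performs this scoring internally, so the code only mirrors the bookkeeping.

        import numpy as np

        def dose_point_kernel(deposits, shell_width_cm, n_shells, density_g_cm3=1.0):
            """
            deposits: iterable of (x, y, z, energy_MeV) around a source at the origin.
            Returns absorbed energy per unit mass (MeV/g) in each spherical shell.
            """
            edges = np.arange(n_shells + 1) * shell_width_cm
            energy = np.zeros(n_shells)
            for x, y, z, e in deposits:
                r = np.sqrt(x * x + y * y + z * z)
                shell = np.searchsorted(edges, r, side="right") - 1
                if 0 <= shell < n_shells:
                    energy[shell] += e
            shell_volumes_cm3 = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
            return energy / (shell_volumes_cm3 * density_g_cm3)

        if __name__ == "__main__":
            fake_deposits = [(0.1, 0.0, 0.0, 0.5), (0.0, 0.25, 0.0, 0.2)]  # made-up events
            print(dose_point_kernel(fake_deposits, shell_width_cm=0.05, n_shells=10))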

  7. SU-E-T-310: Targeting Safety Improvements Through Analysis of Near-Miss Error Detection Points in An Incident Learning Database

    Energy Technology Data Exchange (ETDEWEB)

    Novak, A; Nyflot, M; Sponseller, P; Howard, J; Logan, W; Holland, L; Jordan, L; Carlson, J; Ermoian, R; Kane, G; Ford, E; Zeng, J [University of Washington, Seattle, WA (United States)

    2014-06-01

    Purpose: Radiation treatment planning involves a complex workflow that can make safety improvement efforts challenging. This study utilizes an incident reporting system to identify detection points of near-miss errors, in order to guide our departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected or their patterns. Methods: 1377 incidents were analyzed from a departmental near-miss error reporting system from 3/2012–10/2013. All incidents were prospectively reviewed weekly by a multi-disciplinary team, and assigned a near-miss severity score ranging from 0–4 reflecting potential harm (no harm to critical). A 98-step consensus workflow was used to determine origination and detection points of near-miss errors, categorized into 7 major steps (patient assessment/orders, simulation, contouring/treatment planning, pre-treatment plan checks, therapist/on-treatment review, post-treatment checks, and equipment issues). Categories were compared using ANOVA. Results: In the 7-step workflow, 23% of near-miss errors were detected within the same step in the workflow, while an additional 37% were detected by the next step in the workflow, and 23% were detected two steps downstream. Errors detected further from origination were more severe (p<.001; Figure 1). The most common source of near-miss errors was treatment planning/contouring, with 476 near misses (35%). Of those 476, only 72 (15%) were found before leaving treatment planning, 213 (45%) were found at physics plan checks, and 191 (40%) were caught at the therapist pre-treatment chart review or on portal imaging. Errors that passed through physics plan checks and were detected by therapists were more severe than other errors originating in contouring/treatment planning (1.81 vs 1.33, p<0.001). Conclusion: Errors caught by radiation treatment therapists tend to be more severe than errors caught earlier in the workflow, highlighting the importance of safety
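
    The central comparison (severity as a function of how far downstream an error is caught) can be reproduced in outline with scipy, as in the hedged sketch below. The severity scores are invented placeholders; only the grouping-plus-ANOVA pattern reflects the analysis described in the abstract.

        from scipy.stats import f_oneway

        # Placeholder near-miss severity scores (0-4), grouped by the number of
        # workflow steps between where the error originated and where it was caught.
        severity_by_steps_downstream = {
            0: [0, 1, 1, 0, 2, 1],     # caught within the originating step
            1: [1, 1, 2, 2, 1, 2],     # caught at the next step
            2: [2, 3, 2, 3, 4, 3],     # caught two steps downstream
        }

        groups = list(severity_by_steps_downstream.values())
        f_stat, p_value = f_oneway(*groups)
        print(f"one-way ANOVA across detection distances: F = {f_stat:.2f}, p = {p_value:.4f}")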

  8. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  9. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This Excel spreadsheet is the result of merging, at the port level, several of the in-house fisheries databases in combination with other demographic databases such...

  10. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  11. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  12. Is the spatial distribution of brain lesions associated with closed-head injury predictive of subsequent development of attention-deficit/hyperactivity disorder? Analysis with brain-image database

    Science.gov (United States)

    Herskovits, E. H.; Megalooikonomou, V.; Davatzikos, C.; Chen, A.; Bryan, R. N.; Gerring, J. P.

    1999-01-01

    PURPOSE: To determine whether there is an association between the spatial distribution of lesions detected at magnetic resonance (MR) imaging of the brain in children after closed-head injury and the development of secondary attention-deficit/hyperactivity disorder (ADHD). MATERIALS AND METHODS: Data obtained from 76 children without prior history of ADHD were analyzed. MR images were obtained 3 months after closed-head injury. After manual delineation of lesions, images were registered to the Talairach coordinate system. For each subject, registered images and secondary ADHD status were integrated into a brain-image database, which contains depiction (visualization) and statistical analysis software. Using this database, we assessed visually the spatial distributions of lesions and performed statistical analysis of image and clinical variables. RESULTS: Of the 76 children, 15 developed secondary ADHD. Depiction of the data suggested that children who developed secondary ADHD had more lesions in the right putamen than children who did not develop secondary ADHD; this impression was confirmed statistically. After Bonferroni correction, we could not demonstrate significant differences between secondary ADHD status and lesion burdens for the right caudate nucleus or the right globus pallidus. CONCLUSION: Closed-head injury-induced lesions in the right putamen in children are associated with subsequent development of secondary ADHD. Depiction software is useful in guiding statistical analysis of image data.

  13. Spatial Data Management

    CERN Document Server

    Mamoulis, Nikos

    2011-01-01

    Spatial database management deals with the storage, indexing, and querying of data with spatial features, such as location and geometric extent. Many applications require the efficient management of spatial data, including Geographic Information Systems, Computer Aided Design, and Location Based Services. The goal of this book is to provide the reader with an overview of spatial data management technology, with an emphasis on indexing and search techniques. It first introduces spatial data models and queries and discusses the main issues of extending a database system to support spatial data.

  14. Advanced techniques for the storage and use of very large, heterogeneous spatial databases. The representation of geographic knowledge: Toward a universal framework. [relations (mathematics)

    Science.gov (United States)

    Peuquet, Donna J.

    1987-01-01

    A new approach to building geographic data models that is based on the fundamental characteristics of the data is presented. An overall theoretical framework for representing geographic data is proposed. An example of utilizing this framework in a Geographic Information System (GIS) context by combining artificial intelligence techniques with recent developments in spatial data processing techniques is given. Elements of data representation discussed include hierarchical structure, separation of locational and conceptual views, and the ability to store knowledge at variable levels of completeness and precision.

  15. Address Points, Address points were attributed according to NENA standards and field verified between the dates of June 2008 thru August 2008. The address points were then matched to the Verizon Telco database with a 99% hit rate in October of 2008., Published in 2006, 1:1200 (1in=100ft) scale, Eastern Shore Regional GIS Cooperative.

    Data.gov (United States)

    NSGIC Regional | GIS Inventory — Address Points dataset current as of 2006. Address points were attributed according to NENA standards and field verified between the dates of June 2008 thru August...

  16. Database Replication

    Directory of Open Access Journals (Sweden)

    Marius Cristian MAZILU

    2010-12-01

    Full Text Available For someone who has worked in an environment in which the same database is used for data entry and reporting, or perhaps managed a single database server that was utilized by too many users, the advantages brought by data replication are clear. The main purpose of this paper is to emphasize those advantages, as well as to present the different types of database replication and the cases in which their use is recommended.

  17. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  18. Maintaining Multimedia Data in a Geospatial Database

    Science.gov (United States)

    2012-09-01

    A different look at PostgreSQL and MySQL as spatial databases was offered. Given their results, as each database produced result sets from zero to 100,000, it was ... excelled given multiple conditions.

  19. Red Sesbania Distribution - points [ds80

    Data.gov (United States)

    California Natural Resource Agency — This layer contains point data for the red sesbania (Sesbania punicea) database. The database represents historic and current observations by various individuals of...

  20. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  1. Potash: a global overview of evaporate-related potash resources, including spatial databases of deposits, occurrences, and permissive tracts: Chapter S in Global mineral resource assessment

    Science.gov (United States)

    Orris, Greta J.; Cocker, Mark D.; Dunlap, Pamela; Wynn, Jeff C.; Spanski, Gregory T.; Briggs, Deborah A.; Gass, Leila; Bliss, James D.; Bolm, Karen S.; Yang, Chao; Lipin, Bruce R.; Ludington, Stephen; Miller, Robert J.; Słowakiewicz, Mirosław

    2014-01-01

    Potash is mined worldwide to provide potassium, an essential nutrient for food crops. Evaporite-hosted potash deposits are the largest source of salts that contain potassium in water-soluble form, including potassium chloride, potassium-magnesium chloride, potassium sulfate, and potassium nitrate. Thick sections of evaporitic salt that form laterally continuous strata in sedimentary evaporite basins are the most common host for stratabound and halokinetic potash-bearing salt deposits. Potash-bearing basins may host tens of millions to more than 100 billion metric tons of potassium oxide (K2O). Examples of these deposits include those in the Elk Point Basin in Canada, the Pripyat Basin in Belarus, the Solikamsk Basin in Russia, and the Zechstein Basin in Germany.

  2. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  3. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  4. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  5. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  6. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  7. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  8. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    Science.gov (United States)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP) using the Correlation Decay Distance (CDD) function, with the aim of evaluating, at the sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at the monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the archives of AEMET (the Spanish National Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the annual cycle from dominating the CDD estimation. For each station and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated against all neighbouring temperature series, and the relation between r2 and distance was modelled according to equation (1): Log(r2_ij) = b * d_ij (1), where Log(r2_ij) is the common variance between the target series (i) and the neighbouring series (j), d_ij the distance between them, and b the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using the Ordinary Kriging with a
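
    Equation (1) can be estimated per station with a few lines of NumPy, as in the sketch below. It assumes monthly anomaly series are already prepared and distances precomputed, and it treats the model as a regression through the origin (since r2 = 1 at zero distance), which is one plausible reading of the abstract.

        import numpy as np

        def squared_correlation(anom_i, anom_j):
            """Common variance r2: squared Pearson correlation of two anomaly series."""
            r = np.corrcoef(anom_i, anom_j)[0, 1]
            return r * r

        def cdd_slope(target_anom, neighbour_anoms, distances_km,
                      radius_km=50.0, min_stations=5):
            """Slope b of Log(r2_ij) = b * d_ij over neighbours within the radius."""
            pairs = [(d, squared_correlation(target_anom, a))
                     for a, d in zip(neighbour_anoms, distances_km) if d <= radius_km]
            if len(pairs) < min_stations:
                return None                      # not enough neighbours: no estimate
            d = np.array([p[0] for p in pairs])
            log_r2 = np.log(np.array([p[1] for p in pairs]))
            # least-squares slope of a line constrained through the origin
            return float(np.sum(d * log_r2) / np.sum(d * d))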

  9. Airports and Airfields - Volusia County Airports (Points)

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — The Public Use Airports database is a geographic point database of aircraft landing facilities in the United States and U.S. Territories. This database has been...

  10. Design and implementation of typical target image database system

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2010-01-01

    It is necessary to provide essential background data and thematic data in a timely manner for image processing and its applications; in practice, an application is a procedure that integrates and analyzes different kinds of data. In this paper, the authors describe an image database system that classifies, stores, manages and analyzes databases of different types, such as image databases, vector databases, spatial databases and spatial target characteristics databases, together with its design and structure. (authors)

  11. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models, beyond the traditional relational database, are being created to support enormous data volumes. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as to the central issues in this area.

  12. A new world lakes database for global hydrological modelling

    Science.gov (United States)

    Pimentel, Rafael; Hasan, Abdulghani; Isberg, Kristina; Arheimer, Berit

    2017-04-01

    Lakes are crucial systems in global hydrology; they constitute approximately 65% of the total amount of surface water in the world. Recent advances in remote sensing technology have provided higher spatiotemporal resolution for global water body information. Among these products are the ESA global map of water bodies, a stationary map at 150 m spatial resolution (Lamarche et al., 2015), and the new high-resolution mapping of global surface water and its long-term changes, a 32-year product with a 30 m spatial resolution (Pekel et al., 2016). Nevertheless, these databases identify all water bodies; they do not distinguish between lakes, rivers, wetlands and seas. Some global databases with isolated lake information are available, e.g. the GLWD (Global Lakes and Wetlands Database) (Lehner and Döll, 2004); however, the location of some of the lakes is shifted in relation to the topography, and their extents have also changed since the creation of the database. This work presents a new world lake database based on the ESA global map of water bodies and relying on the lakes in the GLWD. Lakes from the ESA global map of water bodies were identified using a flood fill algorithm, which is initialized at the centroid of the lakes defined in the GLWD. Some manual checks were done to split lakes that are physically connected but identified as different lakes in the GLWD database. In this way the associated information provided in the GLWD is maintained. Moreover, the locations of the outlets of all of them were included in the new database; the high-resolution upstream area information provided by the Global Width Database for Large Rivers (GWD-LR) was used for that. These additional point locations constitute very useful information for watershed delineation in global hydrological modelling. The methodology was validated using in situ information from Swedish lakes and then extended over the world. 13 500 lakes greater than 0.1 km2 were identified.
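
    The lake-extraction step can be pictured with the small flood-fill sketch below: a 4-connected fill over a binary water mask (1 = water in the ESA map), seeded at the raster cell that contains a GLWD lake centroid. The array-based, single-lake version is only illustrative; the actual processing runs on the global 150 m product.

        from collections import deque
        import numpy as np

        def flood_fill_lake(water_mask, seed_row, seed_col):
            """Boolean mask of the connected water body containing the seed cell."""
            rows, cols = water_mask.shape
            lake = np.zeros(water_mask.shape, dtype=bool)
            if not water_mask[seed_row, seed_col]:
                return lake                   # centroid fell on a dry cell: nothing to fill
            queue = deque([(seed_row, seed_col)])
            lake[seed_row, seed_col] = True
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and water_mask[rr, cc] and not lake[rr, cc]:
                        lake[rr, cc] = True
                        queue.append((rr, cc))
            return lake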

  13. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    Science.gov (United States)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. The resulting point clouds are rasterized into digital surface models
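
    The flight-planning estimates the students make before the survey boil down to a few standard photogrammetric relations, sketched below. The camera parameters in the example are assumptions (roughly a small 1-inch-sensor camera), not values from the course.

        def gsd_m(altitude_m, focal_mm, sensor_width_mm, image_width_px):
            """Ground sample distance in metres per pixel at nadir."""
            return (sensor_width_mm / 1000.0) * altitude_m / ((focal_mm / 1000.0) * image_width_px)

        def footprint_width_m(altitude_m, focal_mm, sensor_width_mm):
            """Across-track ground footprint of a single image."""
            return (sensor_width_mm / 1000.0) * altitude_m / (focal_mm / 1000.0)

        def line_spacing_m(altitude_m, focal_mm, sensor_width_mm, side_overlap=0.7):
            """Distance between adjacent flight lines for the requested side overlap."""
            return footprint_width_m(altitude_m, focal_mm, sensor_width_mm) * (1.0 - side_overlap)

        if __name__ == "__main__":
            alt = 80.0                     # flight altitude above ground, m (assumed)
            f, sw, px = 8.8, 13.2, 5472    # focal length, sensor width, image width (assumed)
            print(f"GSD          : {100 * gsd_m(alt, f, sw, px):.1f} cm/px")
            print(f"Line spacing : {line_spacing_m(alt, f, sw):.1f} m at 70% side overlap")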

  14. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
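
    A toy version of the kind of meta-query such repositories make cheap is shown below: "mean accuracy per algorithm, restricted to larger datasets", run against an in-memory SQLite database. The two-table schema and the handful of rows are invented for illustration; real experiment databases describe runs in far more detail.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE datasets (id INTEGER PRIMARY KEY, name TEXT, n_rows INTEGER);
            CREATE TABLE runs (id INTEGER PRIMARY KEY, algorithm TEXT,
                               dataset_id INTEGER REFERENCES datasets(id),
                               accuracy REAL);
            INSERT INTO datasets VALUES (1, 'covertype', 581012), (2, 'iris', 150);
            INSERT INTO runs VALUES (1, 'random_forest', 1, 0.94), (2, 'svm', 1, 0.89),
                                    (3, 'random_forest', 2, 0.95);
        """)

        # Mean accuracy per algorithm, considering only datasets with > 10,000 rows.
        for algo, mean_acc in conn.execute("""
                SELECT r.algorithm, AVG(r.accuracy)
                FROM runs r JOIN datasets d ON d.id = r.dataset_id
                WHERE d.n_rows > 10000
                GROUP BY r.algorithm"""):
            print(algo, round(mean_acc, 3))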

  15. Solubility Database

    Science.gov (United States)

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union for Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  16. Users as essential contributors to spatial cyberinfrastructures

    Science.gov (United States)

    Poore, B.S.

    2011-01-01

    Current accounts of spatial cyberinfrastructure development tend to overemphasize technologies to the neglect of critical social and cultural issues on which adoption depends. Spatial cyberinfrastructures will have a higher chance of success if users of many types, including nonprofessionals, are made central to the development process. Recent studies in the history of infrastructures reveal key turning points and issues that should be considered in the development of spatial cyberinfrastructure projects. These studies highlight the importance of adopting qualitative research methods to learn how users work with data and digital tools, and how user communities form. The author's empirical research on data sharing networks in the Pacific Northwest salmon crisis at the turn of the 21st century demonstrates that ordinary citizens can contribute critical local knowledge to global databases and should be considered in the design and construction of spatial cyberinfrastructures.

  17. Developing of the database of meteorological and radiation fields for Moscow region (urban reanalysis) for 1981-2014 period with high spatial and temporal resolution. Strategy and first results.

    Science.gov (United States)

    Konstantinov, Pavel; Varentsov, Mikhail; Platonov, Vladimir; Samsonov, Timofey; Zhdanova, Ekaterina; Chubarova, Natalia

    2017-04-01

    The main goal of this investigation is to develop a kind of "urban reanalysis": a database of meteorological and radiation fields over the Moscow megalopolis for the period 1981-2014 with high spatial resolution. The main meteorological fields for the Moscow region are reproduced with the COSMO_CLM regional model (including urban parameters) at a horizontal resolution of 1x1 km; the time resolution of the output fields is 1 hour. For the radiation fields it is quite useful to calculate the SVF (Sky View Factor) to obtain the losses of UV radiation in complex urban conditions. For raster-based SVF analysis, the shadow-casting algorithm proposed by Richens (1997) is commonly used (see Ratti and Richens 2004 and Gal et al. 2008, for example): the SVF image is obtained by combining shadow images computed from different directions. An alternative is a raster-based SVF calculation similar to the vector approach, using a digital elevation model of the urban relief. The output radiation field includes UV radiation at a horizontal resolution of 1x1 km. This study was financially supported by the Russian Foundation for Basic Research within the framework of scientific project no. 15-35-21129 mol_a_ved and project no. 15-35-70006 mol_a_mos. References: 1. Gal, T., Lindberg, F., and Unger, J., 2008. Computing continuous sky view factors using 3D urban raster and vector databases: comparison and application to urban climate. Theoretical and Applied Climatology, 95 (1-2), 111-123. 2. Richens, P., 1997. Image processing for urban scale environmental modelling. In: J.D. Spitler and J.L.M. Hensen, eds. International IBPSA Conference Building Simulation, Prague. 3. Ratti, C. and Richens, P., 2004. Raster analysis of urban form. Environment and Planning B: Planning and Design, 31 (2), 297-309.
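
    A stripped-down raster SVF calculation for a single cell of a digital surface model is sketched below: scan a set of azimuths, find the maximum horizon angle along each ray, and combine them with the common SVF ≈ 1 − mean(sin² of the horizon angle) approximation. It illustrates the general idea only, not the shadow-casting algorithm of Richens (1997).

        import numpy as np

        def sky_view_factor(dsm, row, col, cell_size_m, n_directions=16, max_dist_m=100.0):
            """Approximate SVF at one cell of a regular-grid digital surface model."""
            z0 = dsm[row, col]
            sin_sq = []
            for k in range(n_directions):
                azimuth = 2.0 * np.pi * k / n_directions
                horizon = 0.0
                dist = cell_size_m
                while dist <= max_dist_m:
                    r = int(round(row - dist * np.cos(azimuth) / cell_size_m))
                    c = int(round(col + dist * np.sin(azimuth) / cell_size_m))
                    if not (0 <= r < dsm.shape[0] and 0 <= c < dsm.shape[1]):
                        break
                    horizon = max(horizon, np.arctan2(dsm[r, c] - z0, dist))
                    dist += cell_size_m
                sin_sq.append(np.sin(horizon) ** 2)
            return 1.0 - float(np.mean(sin_sq))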

  18. Spatial distribution of nearshore fish in the vicinity of two thermal generating stations, Nanticoke and Douglas Point, on the Great Lakes

    International Nuclear Information System (INIS)

    Minns, C.K.; Kelso, J.R.M.; Hyatt, W.

    1978-01-01

    At Nanticoke, Lake Erie, 1974, mean fish density varied considerably, range 162-14 204 per 10 000 m3, as estimated by digital acoustic fish enumeration. At Douglas Point, Lake Huron, 1975, mean density varied less, range 108-671 per 10 000 m3. At both sites fish densities were generally greatest in the shallowest (3-5 m) depths. At Nanticoke, where the nearshore has low relief, there were no distinguishable communities. At Douglas Point, where depth increases rapidly offshore, there was evidence of benthic and pelagic communities. There was no evidence of altered fish distribution in relation to temperature. At Nanticoke there was no vertical variation in temperature and no vertical response was to be expected. At Douglas Point there was thermal stratification present in the summer and there was no apparent response. The influence of incident radiation was uncertain because of the effects of diurnal migrations. At both locations fish were clustered horizontally to varying degrees in the spring and fall, while in the summer fish were distributed more evenly. Densest clusters were usually in the vicinity of the turbulent discharge at both locations. The lack of temperature response and the similarity of Nanticoke with situations at nearby streams on Lake Erie suggest that the fish are responding to currents and perhaps topography. (author)

  19. LiDAR The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained with LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processing. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  20. The design of the layout of faceted multi-channel electro-optical spatial coordinates measuring instrument for point-like bright objects

    Science.gov (United States)

    Repin, Vladislav A.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2017-06-01

    For many applied problems it is necessary to obtain information about the situation over a wide angular field in order to measure various parameters of objects: their spatial coordinates, instantaneous velocities, and so on. In this case, an interesting bionic approach can be used: a mosaic (or discrete, i.e. faceted) angular field. Such an electro-optical system structurally imitates the visual apparatus of insects: many photodetectors, like the ommatidia (elements of the facet eye structure), are located on a non-planar surface. Such devices can be used in photogrammetry and aerial photography systems (if the space is sufficient), in the transport sector as vehicle orientation devices, as monitoring systems in unmanned aerial vehicles, in endoscopy for obtaining comprehensive information on the state of various cavities, and in intelligent robotic systems. This manuscript discusses the advantages and disadvantages of multi-channel electro-optical systems with a mosaic angular field, presents possible options for their use, and describes some of the design procedures performed when developing a layout of a coordinate-measuring device.

  1. Soil erosion and sediment delivery in a mountain catchment under land use change: using point fallout 137Cs for calibrating a spatially distributed numerical model

    Science.gov (United States)

    Alatorre, L. C.; Beguería, S.; Lana-Renault, N.; Navas, A.; García-Ruiz, J. M.

    2011-12-01

    Soil erosion and sediment yield are strongly affected by land use/land cover (LULC). Spatially distributed erosion models are useful tools for comparing erosion resulting from current LULC with a number of alternative scenarios, and are of great interest for assessing the expected effect of LULC changes. In this study the soil erosion and sediment delivery model WATEM/SEDEM was applied to a small experimental catchment in the Central Spanish Pyrenees. Model calibration was carried out based on a dataset of soil redistribution rates derived from 137Cs inventories along three representative transects, allowing differences in the main model parameters to be captured per land use. Model calibration showed good convergence to a global optimum in the parameter space. Validation of the model results against seven years of recorded sediment yield at the catchment outlet was satisfactory. Two LULC scenarios were then modeled, reproducing the land use at the beginning of the twentieth century and a hypothetical future scenario, and the simulation results were compared with the current LULC situation. The results show a reduction of about one order of magnitude in gross erosion (3180 to 350 Mg yr-1) and sediment delivery (11.2 to 1.2 Mg yr-1 ha-1) during the last decades as a result of the abandonment of traditional land uses (mostly agriculture) and subsequent vegetation re-colonization. The simulation also allowed differences in the sediment sources and sinks within the catchment to be assessed.

  2. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  3. Spatial Quantification of Non-Point Source Pollution in a Meso-Scale Catchment for an Assessment of Buffer Zones Efficiency

    Directory of Open Access Journals (Sweden)

    Mikołaj Piniewski

    2015-04-01

    Full Text Available The objective of this paper was to spatially quantify diffuse pollution sources and estimate the potential efficiency of applying riparian buffer zones as a conservation practice for mitigating chemical pollutant losses. This study was conducted using a semi-distributed Soil and Water Assessment Tool (SWAT) model that underwent extensive calibration and validation in the Sulejów Reservoir catchment (SRC), which occupies 4900 km2 in central Poland. The model was calibrated and validated against daily discharges (10 gauges) and NO3-N and TP loads (7 gauges). Overall, the model generally performed well during the calibration period but not during the validation period for simulating discharge and loading of NO3-N and TP. Diffuse agricultural sources appeared to be the main contributors to the elevated NO3-N and TP loads in the streams. The existing, default representation of buffer zones in SWAT uses a VFS sub-model that only affects the contaminants present in surface runoff. The results of an extensive monitoring program carried out in 2011–2013 in the SRC suggest that buffer zones are highly efficient for reducing NO3-N and TP concentrations in shallow groundwater. On average, reductions of 56% and 76% were observed, respectively. An improved simulation of buffer zones in SWAT was achieved through empirical upscaling of the measurement results. The mean values of the sub-basin level reductions are 0.16 kg NO3/ha (5.9%) and 0.03 kg TP/ha (19.4%). The buffer zones simulated using this approach contributed 24% for NO3-N and 54% for TP to the total achieved mean reduction at the sub-basin level. This result suggests that additional measures are needed to achieve acceptable water quality status in all water bodies of the SRC, despite the fact that the buffer zones have a high potential for reducing contaminant emissions.

  4. Testing Local Independence between Two Point Processes

    DEFF Research Database (Denmark)

    Allard, Denis; Brix, Anders; Chadæuf, Joël

    2001-01-01

    Independence test, Inhomogeneous point processes, Local test, Monte Carlo, Nonstationary, Rotations, Spatial pattern, Tiger bush

  5. Database construction for vision aided navigation in planetary landing

    Science.gov (United States)

    Yu, Meng; Cui, Hutao; Li, Shuang; Tian, Yang

    2017-11-01

    In this paper, a novel database construction method for passive-image-based navigation systems in the context of planetary precise pin-point landing (PPL) is presented. The key concept is selecting qualified visual features to construct the visual database by examining their contribution to the navigation system. We first define a metric named feature exploitability to evaluate a visual feature's distinctiveness and its spatial distribution in the imagery. A greedy selection method is then employed to construct the database by selecting features with high feature-exploitability scores. Next, a hierarchical feature retrieval method is proposed to adapt to image-scale variation during landing and to improve the efficiency of feature retrieval. To evaluate the proposed approach, Monte Carlo simulations and an experimental test are conducted; the results show the advantage of the feature-exploitability-driven database construction method over other construction methods and the necessity of proper database construction in a vision-aided navigation system for PPL missions.
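
    The greedy construction step can be written down in a few lines, as below. The exploitability scores are simply treated as an input mapping here; how they are computed (combining a feature's distinctiveness with its spatial distribution in the imagery) is the substance of the paper and is not reproduced.

        def build_landmark_database(features, exploitability, budget):
            """
            features       : list of feature identifiers (any hashable objects)
            exploitability : dict mapping feature -> exploitability score
            budget         : maximum number of features to store on board
            """
            ranked = sorted(features, key=lambda f: exploitability[f], reverse=True)
            return ranked[:budget]          # greedy: keep only the top-scoring features

        if __name__ == "__main__":
            feats = ["crater_rim_12", "boulder_03", "ridge_7", "crater_rim_44"]   # made-up names
            scores = {"crater_rim_12": 0.91, "boulder_03": 0.42,
                      "ridge_7": 0.77, "crater_rim_44": 0.64}
            print(build_landmark_database(feats, scores, budget=2))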

  6. Court Buildings, LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including the locations and building footprints of schools, churches, government buildings, law enforcement and emergency response offices, pha, Published in 2011, 1:12000 (1in=1000ft) scale, LSU Louisiana Geographic Information Center (LAGIC).

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Court Buildings dataset current as of 2011. LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including...

  7. Grocery Stores, LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including the locations and building footprints of schools, churches, government buildings, law enforcement and emergency response offices, pha, Published in 2011, 1:12000 (1in=1000ft) scale, LSU Louisiana Geographic Information Center (LAGIC).

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Grocery Stores dataset current as of 2011. LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including...

  8. Fire Stations, LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including the locations and building footprints of schools, churches, government buildings, law enforcement and emergency response offices, pha, Published in 2011, 1:12000 (1in=1000ft) scale, LSU Louisiana Geographic Information Center (LAGIC).

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Fire Stations dataset current as of 2011. LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including...

  9. Libraries, LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including the locations and building footprints of schools, churches, government buildings, law enforcement and emergency response offices, pha, Published in 2011, 1:12000 (1in=1000ft) scale, LSU Louisiana Geographic Information Center (LAGIC).

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Libraries dataset current as of 2011. LAGIC is consulting with local parish GIS departments to create spatially accurate point and polygons data sets including the...

  10. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  11. Artificial Radionuclides Database in the Pacific Ocean: HAM Database

    Directory of Open Access Journals (Sweden)

    Michio Aoyama

    2004-01-01

    Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements extending from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. The concentration data for 90Sr, 137Cs, and 239,240Pu cover the period 1957–1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous: more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of the data is from the South Pacific. The HAM database will allow these radionuclides to be used as significant chemical tracers for oceanographic studies as well as for the assessment of the environmental effects of anthropogenic radionuclides over these five decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.

  12. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    Currently there is an enormous amount of various geoscience databases. Unfortunately, the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of the architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they can hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted into a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Lattitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised while entering. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
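
    The upload format quoted above (station name, latitude, longitude, station type, parameter type, parameter value, date) can be parsed with a few lines of Python. This is a hedged sketch based only on the field list in the abstract; the semicolon delimiter, the absence of a header row and the file name are assumptions.

        import csv
        from datetime import datetime

        FIELDS = ["station", "latitude", "longitude", "station_type",
                  "parameter_type", "parameter_value", "date"]

        def parse_row(row):
            """Convert one upload row into typed values, raising on bad input."""
            record = dict(zip(FIELDS, row))
            record["latitude"] = float(record["latitude"])     # dd.dddddd
            record["longitude"] = float(record["longitude"])   # ddd.dddddd
            record["parameter_value"] = float(record["parameter_value"])
            record["date"] = datetime.strptime(record["date"], "%Y-%m-%d").date()
            return record

        with open("upload.csv", newline="") as f:               # hypothetical file
            for row in csv.reader(f, delimiter=";"):
                print(parse_row(row))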

  13. Spatial measurement errors in the field of spatial epidemiology

    OpenAIRE

    Zhang, Zhijie; Manjourides, Justin; Cohen, Ted; Hu, Yi; Jiang, Qingwu

    2016-01-01

    Background: Spatial epidemiology has been aided by advances in geographic information systems, remote sensing, global positioning systems and the development of new statistical methodologies specifically designed for such data. Given the growing popularity of these studies, we sought to review and analyze the types of spatial measurement errors commonly encountered during spatial epidemiological analysis of spatial data. Methods: Google Scholar, Medline, and Scopus databases were searched usi...

  14. Spatial Culture

    DEFF Research Database (Denmark)

    Reeh, Henrik

    2012-01-01

    , the notion of aesthetics (taken in the original signification of aisthesis: sensory perception) helped to map the relations between city, human experience, and various forms of art and culture. Delving into our simultaneously optical and tactile reception of space (a dialectics pointed out by Walter...... Benjamin), studies in urbanity and aesthetics may highlight multisensory everyday practices that pass unnoticed in the current era of visual domination. A humanistic approach to urban and spatial cultures should also learn from German sociologist and philosopher Georg Simmel’s hypothesis of a modern need......: Memory”, and ”Staging and Interpretation: Places”....

  15. Allegheny County Cell Tower Points

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset portrays cell tower locations as points in Allegheny County. The dataset is based on outbuilding codes in the Property Assessment Parcel Database used...

  16. Calcareous Fens - Source Feature Points

    Data.gov (United States)

    Minnesota Department of Natural Resources — Pursuant to the provisions of Minnesota Statutes, section 103G.223, this database contains points that represent calcareous fens as defined in Minnesota Rules, part...

  17. Holocene Sea-Level Database For The Caribbean Region

    Science.gov (United States)

    Khan, N. S.; Horton, B.; Engelhart, S. E.; Peltier, W. R.; Scatena, F. N.; Vane, C. H.; Liu, S.

    2013-12-01

    Holocene relative sea-level (RSL) records from far-field locations are important for understanding the driving mechanisms controlling the nature and timing of the mid-late Holocene reduction in global meltwaters and providing background rates of late Holocene RSL change with which to compare the magnitude of 20th century RSL rise. The Caribbean region has traditionally been considered far-field (i.e., with negligible glacio-isostatic adjustment (GIA) influence), although recent investigations indicate otherwise. Here, we consider the spatial variability in glacio-isostatic, tectonic and local contributions on RSL records from the circum-Caribbean region to infer a Holocene eustatic sea-level signal. We have constructed a database of quality-controlled, spatially comprehensive, Holocene RSL observations for the circum-Caribbean region. The database contains over 500 index points, which locate the position of RSL in time and space. The database incorporates sea-level observations from a latitudinal range of 5°N to 25°N and longitudinal range of 55°W to 90°W. We include sea-level observations from 11 ka BP to present, although the majority of the index points in the database are younger than 8 ka BP. The database is sub-divided into 13 regions based on the distance from the former Laurentide Ice Sheet and regional tectonic setting. The index points were primarily derived from mangrove peat deposits, which in the Caribbean form in the upper half of the tidal range, and corals (predominantly Acropora palmata), the growth of which is constrained to the upper 5 m of water depth. The index points are classified on the basis of their susceptibility to compaction (e.g., intercalated, basal). The influence of temporal changes in tidal range on index points is also considered. The sea-level reconstructions demonstrate that RSL did not exceed the present height (0 m) during the Holocene in the majority of locations, except at sites in Suriname/Guayana and possibly Trinidad

  18. SMART POINT CLOUD: DEFINITION AND REMAINING CHALLENGES

    Directory of Open Access Journals (Sweden)

    F. Poux

    2016-10-01

    Full Text Available Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain base reflexion. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  19. Smart Point Cloud: Definition and Remaining Challenges

    Science.gov (United States)

    Poux, F.; Hallot, P.; Neuville, R.; Billen, R.

    2016-10-01

    Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain base reflexion. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  20. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    Science.gov (United States)

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, and building such a database requires a lot of time and effort. As the indoor environment becomes larger, the required labor increases. To provide better indoor positioning services and, at the same time, to reduce the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and in this paper this signal propagation characteristic is applied to an interpolated database built with the Kriging algorithm. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. This function, which captures the spatial structure of the environment, can be used to create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability the error of the RSS database extended with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to the system without Kriging.
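
    The central step, interpolating the surveyed reference-point RSS values onto a denser grid of virtual points, can be sketched with Gaussian-process regression, which with an RBF kernel is mathematically equivalent to simple Kriging. The coordinates, RSS values and kernel length scale below are illustrative assumptions, not values from the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Surveyed reference points: (x, y) positions and the RSS (dBm) of one AP.
        rp_xy = np.array([[0, 0], [0, 5], [5, 0], [5, 5], [2.5, 2.5]], dtype=float)
        rss_dbm = np.array([-40.0, -55.0, -52.0, -63.0, -48.0])

        # GP regression with an RBF kernel acts as a Kriging-style interpolator;
        # the white-noise term absorbs measurement noise in the survey.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(1.0),
                                      normalize_y=True)
        gp.fit(rp_xy, rss_dbm)

        # Densify the fingerprint database on a regular grid of virtual points.
        gx, gy = np.meshgrid(np.linspace(0, 5, 11), np.linspace(0, 5, 11))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        rss_interpolated = gp.predict(grid)   # extended RSS database for this AP
        print(rss_interpolated.shape)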

  1. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point is a "60 Seconds of Safety" video post on the CPSC OnSafety blog about furniture, appliance and television tip-over hazards. ... Watch the video in Adobe ...

  2. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko

    2015-10-28

    The metagenomic data obtained from marine environments is highly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of the entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data and increases data complexity. Moreover, when the metagenomic approach is used for monitoring temporal changes of marine environments at multiple locations in the seawater, metagenomics data will accumulate at an enormous speed. Because this kind of situation has started to become a reality at many marine research institutions and stations all over the world, it is obvious that data management and analysis will be confronted with the so-called Big Data issues, such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenomes that are currently publicly available, noting that there is no database devoted exclusively to marine metagenomes and that the number of metagenome databases that include marine metagenome data is six, unexpectedly still small. We also extend our explanation to what we call reference databases, which will be useful for constructing a marine metagenome database as well as for complementing it with important information. Finally, we point out a number of challenges to be overcome in constructing the marine metagenome database.

  3. Spatial attention systems in spatial neglect.

    Science.gov (United States)

    Karnath, Hans-Otto

    2015-08-01

    It has been established that processes relating to 'spatial attention' are implemented at the cortical level by goal-directed (top-down) and stimulus-driven (bottom-up) networks. Spatial neglect in brain-damaged individuals has been interpreted as a distinguished exemplar of a disturbance of these processes. The present paper elaborates on this assumption. The functioning of the two attentional networks seems to dissociate in spatial neglect; behavioral studies of patients' orienting and exploration behavior point to a disturbed stimulus-driven but preserved goal-directed attention system. When a target suddenly appears somewhere in space, neglect patients demonstrate disturbed detection and orienting if it is located in the contralesional direction. In contrast, if neglect patients explore a scene with voluntary, top-down controlled shifts of spatial attention, they perform movements oriented in all spatial directions without any direction-specific disturbances. The paper thus argues that it is not the top-down control of spatial attention itself, but rather a body-related matrix on top of which this process is executed, that seems affected. In that sense, the traditional role of spatial neglect as a stroke model for 'spatial attention' requires adjustment. Beyond its insights into the human stimulus-driven attentional system, the disorder most notably provides insight into how our brain encodes topographical information and organizes spatially oriented action - including the top-down control of spatial attention - in relation to body position. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. The National Land Cover Database

    Science.gov (United States)

    Homer, Collin H.; Fry, Joyce A.; Barnes, Christopher A.

    2012-01-01

    The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.

  5. Spatial agglomeration dynamics

    OpenAIRE

    Danny Quah

    2002-01-01

    This paper develops a model of economic growth and activity locating endogenously on a 3-dimensional featureless global geography. The same economic forces simultaneously influence growth, convergence, and spatial agglomeration and clustering. Economic activity is not concentrated on discrete isolated points but instead forms a dynamically fluctuating, smooth spatial distribution. Spatial inequality is a Cass-Koopmans saddlepath, and the global distribution of economic activity converges towards ...

  6. Protein-Protein Interaction Databases

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Jensen, Lars Juhl

    2015-01-01

    of research are explored. Here we present an overview of the most widely used protein-protein interaction databases and the methods they employ to gather, combine, and predict interactions. We also point out the trade-off between comprehensiveness and accuracy and the main pitfall scientists have to be aware...

  7. Databases and their application

    NARCIS (Netherlands)

    Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The

  8. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, a "60 Seconds of Safety" video post on the CPSC OnSafety blog. ... 24 hours a day. For young children whose home is a playground, it’s the best way to ...

  9. Fixed Points

    Indian Academy of Sciences (India)

    Fixed Points - From Russia with Love - A Primer of Fixed Point Theory. A K Vijaykumar. Book Review, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 101-102.

  10. Dietary Supplement Ingredient Database

    Science.gov (United States)

    ... and US Department of Agriculture Dietary Supplement Ingredient Database (DSID). ... values can be saved to build a small database or add to an existing database for national, ...

  11. NoSQL Databases

    OpenAIRE

    PANYKO, Tomáš

    2013-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  12. Collecting Taxes Database

    Data.gov (United States)

    US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...

  13. USAID Anticorruption Projects Database

    Data.gov (United States)

    US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...

  14. Impact of Patient Factors on Recurrence Risk and Time Dependency of Oxaliplatin Benefit in Patients With Colon Cancer: Analysis From Modern-Era Adjuvant Studies in the Adjuvant Colon Cancer End Points (ACCENT) Database

    Science.gov (United States)

    Renfro, Lindsay A.; Allegra, Carmen J.; André, Thierry; de Gramont, Aimery; Schmoll, Hans-Joachim; Haller, Daniel G.; Alberts, Steven R.; Yothers, Greg; Sargent, Daniel J.

    2016-01-01

    Purpose: Fluorouracil plus leucovorin (FU + LV) adjuvant chemotherapy reduced the risk of recurrence and death across all time points in a pooled analysis of 20,898 patients with colon cancer from 18 randomized studies. The impact of oxaliplatin added to FU + LV on the time course of recurrence and survival remains unknown. Patients and Methods: A total of 12,233 patients enrolled to the randomized trials C-07, C-08, N0147, MOSAIC (Adjuvant Treatment of Colon Cancer), and XELOXA (Adjuvant XELOX) were pooled to examine the impact of oxaliplatin and tumor-specific factors on the time course of recurrence and death. For each end point, continuous-time risk was modeled over 6 years post treatment in all oxaliplatin-treated patients and patients concurrently randomized to FU + LV with or without oxaliplatin; the latter analyses supported time-dependent treatment comparisons. Results: Addition of oxaliplatin significantly reduced the risk of recurrence within the first 14 months post treatment for patients with stage II disease and within the first 4 years for patients with stage III disease. Oxaliplatin also significantly reduced risk of death from 2 to 6 years post treatment for patients with stage III disease, with no differences in timing of outcomes between treatment groups (ie, oxaliplatin did not simply postpone recurrence or death compared with FU + LV alone). Patients with stage II disease receiving oxaliplatin did not exhibit a significant reduction in risk of death in the first 6 years post treatment. Recurrence risk peaked near 14 months for both treatments, and risk of recurrence and death increased with increased tumor and nodal burden. Conclusions: These analyses support the addition of oxaliplatin to fluoropyrimidine-based adjuvant therapy in patients with stage III disease and underscore the need for adequate surveillance of patients with colon cancer during the first 3 years after adjuvant therapy. PMID:26811529

  15. Applying and extending Oracle Spatial

    CERN Document Server

    Simon Gerard Greener, Siva Ravada

    2013-01-01

    This book is an advanced practical guide to applying and extending Oracle Spatial. This book is for existing users of Oracle and Oracle Spatial who have, at a minimum, basic operational experience of using Oracle or an equivalent database. Advanced skills are not required.

  16. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, by CPSC Blogger, September 22, 2009. ... see news reports about horrible accidents involving young children and furniture, appliance and tv tip-overs. The ...

  17. SPATIAL DATA INTEGRATION USING ONTOLOGY-BASED APPROACH

    Directory of Open Access Journals (Sweden)

    S. Hasani

    2015-12-01

    Full Text Available In today's world, the need for spatial data has become so crucial for various organizations that many of them have begun to produce spatial data themselves. In some circumstances, the need to obtain real-time integrated data requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the challenges in this situation is the high degree of heterogeneity between the data of different organizations. To solve this issue, we introduce an ontology-based method that provides sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled and, subsequently, the ontology of each database is created. In the second step, the corresponding ontology is inserted into the database and the relationship of each ontology class is stored in a newly created column in the database tables. The last step consists of a platform based on a service-oriented architecture, which allows the integration of data by using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy, while the data remain unchanged, thus preserving the advantage of the legacy applications.
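
    The first step, exposing relational records as ontology instances, can be sketched with the rdflib library in Python. The namespace, table rows and property names below are hypothetical placeholders rather than the schema used in the paper.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        # Hypothetical namespace and rows standing in for one organization's table.
        EX = Namespace("http://example.org/spatial#")
        rows = [(1, "Fire Station A", 49.2, 18.7), (2, "Hospital B", 49.3, 18.8)]

        g = Graph()
        g.bind("ex", EX)
        for fid, name, lat, lon in rows:
            subject = URIRef(f"http://example.org/feature/{fid}")
            g.add((subject, RDF.type, EX.Facility))      # map row -> ontology class
            g.add((subject, EX.name, Literal(name)))
            g.add((subject, EX.latitude, Literal(lat)))
            g.add((subject, EX.longitude, Literal(lon)))

        print(g.serialize(format="turtle"))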

  18. Selection of nuclear power information database management system

    International Nuclear Information System (INIS)

    Zhang Shuxin; Wu Jianlei

    1996-01-01

    Given the present state of database technology, an important task in building the Chinese nuclear power information database (NPIDB) for the nuclear industry efficiently and from a high starting point is to select a proper database management system (DBMS), which is pivotal to building the database successfully. Therefore, this article explains how to build a practical information database about nuclear power, the functions of different database management systems, the reasons for selecting a relational database management system (RDBMS), the principles of selecting an RDBMS, the recommendation of the ORACLE management system as the software with which to build the database, and so on

  19. Basic database performance tuning - developer's perspective

    CERN Document Server

    Kwiatek, Michal

    2008-01-01

    This lecture discusses selected database performance issues from the developer's point of view: connection overhead, bind variables and SQL injection, making the most of the optimizer with up-to-date statistics, and reading execution plans. Prior knowledge of SQL is expected.
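
    To illustrate the bind-variable point, the sketch below contrasts a parameterized query with string concatenation. Python's built-in sqlite3 module is used here as a stand-in for an Oracle client; with Oracle the idea is the same, only the placeholder syntax (e.g. named binds) differs.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

        user_input = "alice' OR '1'='1"   # a typical injection attempt

        # Bind variable: the value travels separately from the SQL text, so the
        # statement is parsed once and the input cannot change its structure.
        safe = conn.execute("SELECT * FROM users WHERE name = ?",
                            (user_input,)).fetchall()

        # String concatenation: the input is spliced into the SQL text itself,
        # which both defeats statement caching and enables SQL injection.
        unsafe_sql = "SELECT * FROM users WHERE name = '" + user_input + "'"
        unsafe = conn.execute(unsafe_sql).fetchall()

        print(safe)    # []        no user literally named like the injection string
        print(unsafe)  # all rows  the predicate was rewritten by the input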

  20. Overview of selected molecular biological databases

    Energy Technology Data Exchange (ETDEWEB)

    Rayl, K.D.; Gaasterland, T.

    1994-11-01

    This paper presents an overview of the purpose, content, and design of a subset of the currently available biological databases, with an emphasis on protein databases. Databases included in this summary are 3D-ALI, Berlin RNA databank, Blocks, DSSP, EMBL Nucleotide Database, EMP, ENZYME, FSSP, GDB, GenBank, HSSP, LiMB, PDB, PIR, PKCDD, ProSite, and SWISS-PROT. The goal is to provide a starting point for researchers who wish to take advantage of the myriad available databases. Rather than providing a complete explanation of each database, we present its content and form by explaining the details of typical entries. Pointers to more complete "user guides" are included, along with general information on where to search for a new database.

  1. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. And the reserved documents database is developed to manage documents and reports since project accomplishment

  2. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD database provides a schematic overview of the KALIMER design structure. And the reserved documents database is developed to manage documents and reports since project accomplishment.

  3. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  4. Oracle database systems administration

    OpenAIRE

    Šilhavý, Dominik

    2017-01-01

    This master's thesis, Oracle database systems administration, describes problems that occur in databases and how to solve them, which is important for database administrators. It helps them deliver solutions faster, without the need to look for or figure out solutions on their own. The thesis describes database backup and recovery methods, which are closely related to solving such problems. The main goal is to provide guidance and recommendations regarding database troubles and how to solve them. It ...

  5. Web geoprocessing services on GML with a fast XML database ...

    African Journals Online (AJOL)

    Nowadays quite a lot of Spatial Database Infrastructures (SDI) exist that help the Geographic Information Systems (GIS) user community get access to distributed spatial data through web technology. However, the users sometimes first have to process the available spatial data to obtain the needed information.

  6. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RED Database Description. Database name: RED; alternative name: Rice Expression Database. Creator: ...enome Research Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: The Rice Expression Database (RED) is a database that aggregates the gene expr...icroarray Project and other research groups. Features and manner of utilization of database: ...

  7. Material-Point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2007-01-01

    The aim of this paper is to test different kinds of spatial interpolation for the material-point method.

  8. The Amma-Sat Database

    Science.gov (United States)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    a regular grid with a spatial resolution compatible with the spatial variability of the geophysical parameter. Data are stored in NetCDF files to facilitate their use. Satellite products can be selected using several spatial and temporal criteria and ordered through a web interface developed in PHP-MySQL. More common means of access are also available such as direct FTP or NFS access for identified users. A Live Access Server allows quick visualization of the data. A meta-data catalogue based on the Directory Interchange Format manages the documentation of each satellite product. The database is currently under development, but some products are already available. The database will be complete by the end of 2005.
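
    Since the products are distributed as NetCDF files on a regular grid, they can be read with the netCDF4 Python package, as in the minimal sketch below. The file name and variable names are hypothetical; the actual AMMA-SAT products define their own grids and parameter names.

        import numpy as np
        from netCDF4 import Dataset

        # Hypothetical product file and variable names, for illustration only.
        with Dataset("amma_sat_product.nc") as nc:
            lats = nc.variables["latitude"][:]
            lons = nc.variables["longitude"][:]
            field = nc.variables["sea_surface_temperature"][:]  # (time, lat, lon)

            # Report the grid step and the time-mean of the geophysical parameter.
            print("grid step:", float(np.diff(lats).mean()), "deg")
            print("time-mean field:", float(field.mean()))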

  9. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMOS Database Description. Database name: RMOS. Alternative nam...arch Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice; Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: The Ric...e Microarray Opening Site is a database of comprehensive information for Rice Mic...es and manner of utilization of database: You can refer to the information of the

  10. Kentucky geotechnical database.

    Science.gov (United States)

    2005-03-01

    Development of a comprehensive, dynamic geotechnical database is described. Computer software selected to program the client/server application in a windows environment, components and structure of the geotechnical database, and primary factors cons...

  11. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1991-11-01

    The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability

  12. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  13. Physiological Information Database (PID)

    Science.gov (United States)

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  14. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is Mysql. It is manually updated by E3 staff as...

  15. Database Urban Europe

    NARCIS (Netherlands)

    Sleutjes, B.; de Valk, H.A.G.

    2016-01-01

    Database Urban Europe: ResSegr database on segregation in The Netherlands. Collaborative research on residential segregation in Europe 2014–2016 funded by JPI Urban Europe (Joint Programming Initiative Urban Europe).

  16. Interactive Multi-Instrument Database of Solar Flares

    Science.gov (United States)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  17. GeoSpark SQL: An Effective Framework Enabling Spatial Queries on Spark

    Directory of Open Access Journals (Sweden)

    Zhou Huang

    2017-09-01

    Full Text Available In the era of big data, Internet-based geospatial information services such as various LBS apps are deployed everywhere, followed by an increasing number of queries against the massive spatial data. As a result, the traditional relational spatial databases (e.g., PostgreSQL with PostGIS and Oracle Spatial) cannot adapt well to the needs of large-scale spatial query processing. Spark is an emerging outstanding distributed computing framework in the Hadoop ecosystem. This paper aims to address the increasingly large-scale spatial query-processing requirement in the era of big data, and proposes an effective framework, GeoSpark SQL, which enables spatial queries on Spark. On the one hand, GeoSpark SQL provides a convenient SQL interface; on the other hand, GeoSpark SQL achieves both efficient storage management and high-performance parallel computing through integrating Hive and Spark. In this study, the following key issues are discussed and addressed: (1) storage management methods under the GeoSpark SQL framework, (2) the spatial operator implementation approach in the Spark environment, and (3) spatial query optimization methods under Spark. Experimental evaluation is also performed and the results show that GeoSpark SQL is able to achieve real-time query processing. It should be noted that Spark is not a panacea. It is observed that the traditional spatial database PostGIS/PostgreSQL performs better than GeoSpark SQL in some query scenarios, especially for spatial queries with high selectivity, such as the point query and the window query. In general, GeoSpark SQL performs better when dealing with compute-intensive spatial queries such as the kNN query and the spatial join query.
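
    The kind of window (bounding-box) query discussed above can be expressed in plain Spark SQL, as in the sketch below. This is not the paper's implementation: GeoSpark SQL additionally registers spatial types, storage structures and operators, so the predicate could instead use spatial functions such as ST_Contains; here the table, columns and coordinates are invented for illustration.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("window-query-sketch").getOrCreate()

        # Hypothetical point table: id plus planar x, y coordinates.
        points = spark.createDataFrame(
            [(1, 1.0, 1.0), (2, 3.5, 2.0), (3, 9.0, 9.5)], ["id", "x", "y"])
        points.createOrReplaceTempView("points")

        # Window query: return all points inside the box [0, 4] x [0, 4].
        result = spark.sql("""
            SELECT id, x, y
            FROM points
            WHERE x BETWEEN 0.0 AND 4.0
              AND y BETWEEN 0.0 AND 4.0
        """)
        result.show()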

  18. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure (OS parameters and packages, kernel versions, database parameters and patches, or even schema changes) can potentially harm production services. This presentation shows how automatic and regular testing of Oracle databases can be achieved in such an agile environment.

  19. Querying Uncertain Data in Geospatial Object-relational Databases Using SQL and Fuzzy Sets

    Science.gov (United States)

    Ďuračiová, R.

    2013-12-01

    This paper deals with uncertainty modeling in spatial object-relational databases by the use of Structured Query Language (SQL). The fundamental principles of uncertainty modeling by fuzzy sets are applied in the area of geographic information systems (GIS) and spatial databases. A spatial database system includes spatial data types and implements the spatial extension of SQL. Implementing the principles of fuzzy logic in spatial databases brings an opportunity for the efficient processing of uncertain data, which is important especially when using various data sources (e.g., multi-criteria decision making (MCDM) on the basis of heterogeneous spatial data resources). The modeling and data processing of uncertainties are presented in relation to the applicable International Organization for Standardization (ISO) standards (standards of the series 19100 Geographic information) and the relevant specifications of the Open Geospatial Consortium (OGC). The fuzzy spatial query approach is applied and tested on a case study with a fundamental database for GIS in Slovakia.
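
    A minimal sketch of the fuzzy-set idea, assuming a trapezoidal membership function for the vague predicate "near" and hypothetical candidate features, is given below; the actual paper works directly in SQL over a spatial database rather than in Python.

        import numpy as np

        def near_membership(distance_m, full_within=200.0, zero_beyond=1000.0):
            """Trapezoidal membership for 'near': 1 inside 200 m, 0 beyond
            1000 m, linear in between (illustrative thresholds)."""
            return float(np.clip((zero_beyond - distance_m) /
                                 (zero_beyond - full_within), 0.0, 1.0))

        # Candidate features returned by a crisp spatial query, with their
        # distances to a point of interest (hypothetical numbers).
        candidates = {"school_A": 150.0, "school_B": 480.0, "school_C": 1400.0}

        # Rank by membership degree instead of applying a hard distance cut-off.
        ranked = sorted(((near_membership(d), name) for name, d in candidates.items()),
                        reverse=True)
        for degree, name in ranked:
            print(f"{name}: mu_near = {degree:.2f}")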

  20. Spatial Sense.

    Science.gov (United States)

    Del Grande, John

    1990-01-01

    Describes seven spatial abilities related to mathematics including eye-motor coordination, figure-ground perception, perceptual constancy, position-in-space perception, perception of spatial relationships, visual discrimination, and visual memory. Discusses the relationship of the spatial abilities to the study of geometry. Lists 19 references.…

  1. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  2. Nuclear power economic database

    International Nuclear Information System (INIS)

    Ding Xiaoming; Li Lin; Zhao Shiping

    1996-01-01

    The nuclear power economic database (NPEDB), based on ORACLE V6.0, consists of three parts, i.e., the economic database of nuclear power stations, the economic database of the nuclear fuel cycle, and the economic database of nuclear power planning and nuclear environment. The economic database of nuclear power stations includes data on general economics, technology, capital costs and benefits, etc. The economic database of the nuclear fuel cycle includes data on technology and nuclear fuel prices. The economic database of nuclear power planning and nuclear environment includes data on energy history, forecasts, energy balance, electric power and energy facilities

  3. Protein sequence databases.

    Science.gov (United States)

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H

    2004-02-01

    A variety of protein sequence databases exist, ranging from simple sequence repositories, which store data with little or no manual intervention in the creation of the records, to expertly curated universal databases that cover all species and in which the original sequence data are enhanced by the manual addition of further information in each sequence record. As the focus of researchers moves from the genome to the proteins encoded by it, these databases will play an even more important role as central comprehensive resources of protein information. Several of the leading protein sequence databases are discussed here, with special emphasis on the databases now provided by the Universal Protein Knowledgebase (UniProt) consortium.

  4. Visualization of multidimensional database

    Science.gov (United States)

    Lee, Chung

    2008-01-01

    The concept of multidimensional databases has been extensively researched and widely used in actual database applications. It plays an important role in contemporary information technology, but due to the complexity of its inner structure, the database design is a complicated process and users have a hard time fully understanding and using the database. An effective visualization tool for higher-dimensional information systems helps database designers and users alike. Most visualization techniques focus on displaying dimensional data using spreadsheets and charts. This may be sufficient for databases having three or fewer dimensions, but for higher dimensions various combinations of projection operations are needed, and a full grasp of the total database architecture is very difficult. This study reviews existing visualization techniques for multidimensional databases and then proposes an alternate approach to visualize a database of any dimension by adopting the tool proposed by Kiviat for software engineering processes. In this diagramming method, each dimension is represented by one branch of concentric spikes. This paper documents a C++ based visualization tool with extensive use of the OpenGL graphics library and GUI functions. Detailed examples of actual databases demonstrate the feasibility and effectiveness of visualizing multidimensional databases.
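
    The Kiviat-style layout, one radial spike per dimension, can be reproduced quickly with matplotlib in Python, although the paper's own tool is written in C++ with OpenGL. The dimension names and values below are made up for illustration.

        import numpy as np
        import matplotlib.pyplot as plt

        # Hypothetical dimensions and normalized measures of one record in a 6-D database.
        labels = ["sales", "cost", "stock", "staff", "region", "season"]
        values = np.array([0.8, 0.3, 0.6, 0.4, 0.9, 0.5])

        # One spike per dimension, evenly spaced around the circle (Kiviat style).
        angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
        closed_values = np.concatenate([values, values[:1]])   # close the polygon
        closed_angles = np.concatenate([angles, angles[:1]])

        fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
        ax.plot(closed_angles, closed_values)
        ax.fill(closed_angles, closed_values, alpha=0.25)
        ax.set_xticks(angles)
        ax.set_xticklabels(labels)
        plt.show()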

  5. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RPD Database Description. Database name: RPD; alternative name: Rice Proteome Database. Creator: ...titute of Crop Science, National Agriculture and Food Research Organization, Setsuko Komatsu. Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: Rice Proteome Database contains information on protei...AGE) reference maps. Features and manner of utilization of database: Proteins extracted from organs and subce

  6. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ASTRA Database Description. Database name: ASTRA. Alternative n...tics Journal Search: Contact address. Database classification: Nucleotide Sequence Databases - Gene structure, ...3702; Oryza sativa (Taxonomy ID: 4530). Database description: The database represents classified p...mes. Features and manner of utilization of database: This database enables to sear...ch and represent alternative splicing/transcriptional initiation genes and their patterns (ex: cassette) base

  7. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database Description. Database name: Trypanosomes Database. Creator: ...stitute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, JAPAN. Database classification: Protein sequence databases. Organism: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). Database description: The Trypanosomes database is a database providing the comprehensive information of proteins that is effective t

  8. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database Description. Database name: Arabidopsis Phenome Database; alternative name: -. DOI: 10.18908/lsdba.nbdc01509-000. Creator: H... BioResource Center, Hiroshi Masuya. Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Database description: The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database

  9. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates both to the absolute position in a specific coordinate system and to the relationships with neighbouring features. With the growth of spatially based technologies, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will distort the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI of legacy datasets. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for the positional accuracy improvement of legacy spatial datasets.
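
    For reference, the parametric least-squares adjustment underlying such a procedure has the standard textbook form below; this is not necessarily the exact angular parameterization used in the study.

        \hat{x} = \left(A^{\mathsf{T}} P A\right)^{-1} A^{\mathsf{T}} P\, l,
        \qquad \hat{v} = A\hat{x} - l,
        \qquad \hat{l} = l + \hat{v}

    where $A$ is the design matrix of the linearized (angular) observation equations, $P$ is the weight matrix of the observations, $l$ is the misclosure vector, $\hat{x}$ contains the coordinate corrections applied to the legacy points, and $\hat{v}$ are the residuals.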

  10. Hazard Analysis Database Report

    CERN Document Server

    Grams, W H

    2000-01-01

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from t...

  11. National Database of Geriatrics

    DEFF Research Database (Denmark)

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    AIM OF DATABASE: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. STUDY POPULATION: The database population consists of patients who were admitted to a geriatric hospital unit....... Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14-15,000 admissions per year, and the database completeness has been stable at 90% during the past......, percentage of discharges with a rehabilitation plan, and the proportion of cases in which an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. DESCRIPTIVE DATA: Descriptive patient-related data include

  12. AMDD: Antimicrobial Drug Database

    OpenAIRE

    Danishuddin, Mohd; Kaushal, Lalima; Hassan Baig, Mohd; Khan, Asad U.

    2012-01-01

    Drug resistance is one of the major concerns for antimicrobial chemotherapy against any particular target. Knowledge of the primary structure of antimicrobial agents and their activities is essential for rational drug design. Thus, we developed a comprehensive database, the Antimicrobial Drug Database (AMDD), of known synthetic antibacterial and antifungal compounds that were extracted from the available literature and from other chemical databases, e.g., PubChem, PubChem BioAssay and ZINC. The ...

  13. Molecular Biology Database List.

    Science.gov (United States)

    Burks, C

    1999-01-01

    Molecular Biology Database List (MBDL) includes brief descriptions and pointers to Web sites for the various databases described in this issue as well as other Web sites presenting data sets relevant to molecular biology. This information is compiled into a list (http://www.oup.co.uk/nar/Volume_27/Issue_01/summary/gkc105_gml.html) which includes links both to source Web sites and to on-line versions of articles describing the databases. PMID:9847130

  14. The Application of an Anatomical Database for Fetal Congenital Heart Disease

    Directory of Open Access Journals (Sweden)

    Li Yang

    2015-01-01

    Conclusions: The database of fetal CHD successfully reproduced the anatomic structures and spatial relationship of different kinds of fetal CHD. This database can be widely used in anatomy and FECG teaching and training.

  15. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  16. LOWELL OBSERVATORY COMETARY DATABASE

    Data.gov (United States)

    National Aeronautics and Space Administration — The database presented here is comprised entirely of observations made utilizing conventional photoelectric photometers and narrowband filters isolating 5 emission...

  17. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  18. The Relational Database Dictionary

    CERN Document Server

    Date, C. J.

    2006-01-01

    Avoid misunderstandings that can affect the design, programming, and use of database systems. Whether you're using Oracle, DB2, SQL Server, MySQL, or PostgreSQL, The Relational Database Dictionary will prevent confusion about the precise meaning of database-related terms (e.g., attribute, 3NF, one-to-many correspondence, predicate, repeating group, join dependency), helping to ensure the success of your database projects. Carefully reviewed for clarity, accuracy, and completeness, this authoritative and comprehensive quick-reference contains more than 600 terms, many with examples, covering i

  19. Key health indicators database.

    Science.gov (United States)

    Menic, J L

    1990-01-01

    A new database developed by the Canadian Centre for Health Information (CCHI) contains 40 key health indicators and lets users select a range of disaggregations, categories and variables. The database can be accessed through CANSIM, Statistics Canada's electronic database and retrieval system, or through a package for personal computers. This package includes the database on diskettes, as well as software for retrieving and manipulating data and for producing graphics. A data dictionary, a user's guide and tables and graphs that highlight aspects of each indicator are also included.

  20. Intermodal Passenger Connectivity Database -

    Data.gov (United States)

    Department of Transportation — The Intermodal Passenger Connectivity Database (IPCD) is a nationwide data table of passenger transportation terminals, with data on the availability of connections...

  1. IVR EFP Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Exempted Fishery projects with IVR reporting requirements.

  2. Residency Allocation Database

    Data.gov (United States)

    Department of Veterans Affairs — The Residency Allocation Database is used to determine allocation of funds for residency programs offered by Veterans Affairs Medical Centers (VAMCs). Information...

  3. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  4. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  5. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    ... These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.

  6. Database Publication Practices

    DEFF Research Database (Denmark)

    Bernstein, P.A.; DeWitt, D.; Heuer, A.

    2005-01-01

    There has been a growing interest in improving the publication processes for database research papers. This panel reports on recent changes in those processes and presents an initial cut at historical data for the VLDB Journal and ACM Transactions on Database Systems.

  7. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  8. [Validation of interaction databases in psychopharmacotherapy].

    Science.gov (United States)

    Hahn, M; Roll, S C

    2018-03-01

    Drug-drug interaction databases are an important tool to increase drug safety in polypharmacy. Several drug interaction databases are available, but it is unclear which one shows the best results and therefore increases safety for the users of the databases and for patients. So far, there has been no validation of German drug interaction databases. The aim was to validate German drug interaction databases regarding the number of hits, mechanisms of drug interaction, references, clinical advice, and severity of the interaction. A total of 36 drug interactions published in the last 3-5 years were checked in 5 different databases. Besides the number of hits, it was also documented whether the mechanism was correct, clinical advice was given, primary literature was cited, and a severity level of the drug-drug interaction was given. All databases showed weaknesses regarding the hit rate of the tested drug interactions, with a maximum of 67.7% hits. The highest score in this validation was achieved by MediQ with 104 out of 180 points. PsiacOnline achieved 83 points, arznei-telegramm® 58, ifap index® 54, and the ABDA-database 49 points. Based on this validation, MediQ seems to be the most suitable database for the field of psychopharmacotherapy. MediQ achieved the best results in this comparison, but this database also needs improvement with respect to the hit rate so that users can rely on the results and thereby increase drug therapy safety.
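
    As a rough illustration of how such a points-based comparison can be tallied, the sketch below scores hypothetical per-interaction results for one database. The criterion names and one-point weights are assumptions made for illustration, not the exact scoring scheme used in the study.

    ```python
    # Illustrative scoring sketch; criteria and weights are assumed, not the study's exact scheme.
    CRITERIA_POINTS = {
        "hit": 1,             # interaction is listed at all
        "mechanism": 1,       # mechanism described correctly
        "clinical_advice": 1, # clinical advice given
        "reference": 1,       # primary literature cited
        "severity": 1,        # severity level given
    }

    def score_database(results):
        """Sum points over all checked interactions for a single database.

        `results` is a list of dicts, one per interaction, mapping criterion
        name to True/False.
        """
        total = 0
        for interaction in results:
            for criterion, points in CRITERIA_POINTS.items():
                if interaction.get(criterion, False):
                    total += points
        return total

    # Example: two interactions checked in one hypothetical database.
    example = [
        {"hit": True, "mechanism": True, "clinical_advice": False, "reference": True, "severity": True},
        {"hit": False},  # interaction not found: no further points possible
    ]
    print(score_database(example))  # -> 4
    ```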

  9. A New Global Open Source Marine Hydrocarbon Emission Site Database

    Science.gov (United States)

    Onyia, E., Jr.; Wood, W. T.; Barnard, A.; Dada, T.; Qazzaz, M.; Lee, T. R.; Herrera, E.; Sager, W.

    2017-12-01

    Hydrocarbon emission sites (e.g. seeps) discharge large volumes of fluids and gases into the oceans that are not only important for biogeochemical budgets, but also support abundant chemosynthetic communities. Documenting the locations of modern emissions is a first step towards understanding and monitoring how they affect the global state of the seafloor and oceans. Currently, no global open source (i.e. non-proprietary) detailed maps of emission sites are available. As a solution, we have created a database that is housed within an Excel spreadsheet and uses the latest versions of Earthpoint and Google Earth for position coordinate conversions and data mapping, respectively. To date, approximately 1,000 data points have been collected from referenceable sources across the globe, and we are continually expanding the dataset. Due to the variety of spatial extents encountered, we used two different methods to identify each site: 1) point (x, y, z) locations for individual sites; and 2) delineation of areas where sites are clustered. Certain well-known areas, such as the Gulf of Mexico and the Mediterranean Sea, have a greater abundance of information, whereas significantly less information is available in other regions due to the absence of emission sites, lack of data, or because the existing data are proprietary. Although the geographical extent of the data is currently restricted to regions where the most data is publicly available, as the database matures we expect to have more complete coverage of the world's oceans. This database is an information resource that consolidates and organizes the existing literature on hydrocarbons released into the marine environment, thereby providing a comprehensive reference for future work. We expect that the availability of seafloor hydrocarbon emission maps will benefit scientific understanding of hydrocarbon-rich areas as well as potentially aiding hydrocarbon exploration and environmental impact assessments.

  10. Representations built from a true geographic database

    DEFF Research Database (Denmark)

    Bodum, Lars

    2005-01-01

    ... within the creation of Virtual Environments, what will be the next challenge within Urban simulation and modelling to overcome? It will certainly not be to create the models as real as possible or refine details in the texturing. The challenge will be to do a proper object-orientation and thereby secure a representation based on geographic and geospatial principles. The system GRIFINOR, developed at 3DGI, Aalborg University, DK, is capable of creating this object-orientation and furthermore does this on top of a true Geographic database. A true Geographic database can be characterized as a database that can cover the whole world in 3d and with a spatial reference given by geographic coordinates. Built on top of this is a customised viewer, based on the Xith(Java) scenegraph. The viewer reads the objects directly from the database and solves the question about Level-Of-Detail on buildings, orientation in relation...

  11. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
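
    As an illustration of the box-filtering step used when evaluating subgrid stresses, the sketch below applies a top-hat filter to a synthetic velocity field and forms tau_ij = filter(u_i u_j) - filter(u_i) filter(u_j). It operates on a local NumPy array standing in for a small cut-out of the data; it does not reproduce the database's actual Web-service call signatures.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def box_filter(field, width):
        """Top-hat (box) filter of a periodic 3-D field; `width` is in grid points."""
        return uniform_filter(field, size=width, mode="wrap")

    def subgrid_stress(u, v, w, width):
        """Return the six independent components of the subgrid-scale stress tensor."""
        comps = {}
        vel = {"u": u, "v": v, "w": w}
        for a in "uvw":
            for b in "uvw":
                if a <= b:  # symmetric tensor: keep uu, uv, uw, vv, vw, ww
                    comps[a + b] = (box_filter(vel[a] * vel[b], width)
                                    - box_filter(vel[a], width) * box_filter(vel[b], width))
        return comps

    # Small synthetic field standing in for a cut-out of the 1024^3 DNS data.
    n = 32
    rng = np.random.default_rng(0)
    u, v, w = (rng.standard_normal((n, n, n)) for _ in range(3))
    tau = subgrid_stress(u, v, w, width=4)
    print(sorted(tau.keys()))  # ['uu', 'uv', 'uw', 'vv', 'vw', 'ww']
    ```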

  12. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RMG Database Description. General information of database: Database name: RMG. Contact: National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Nucleotide Sequence Databases. Organism: Oryza sativa Japonica Group (Taxonomy ID: 39947). Database description: This database contains information on the rice mitochondrial genome. ... Features and manner of utilization of database: The mitochondrial genome information can be used ...

  13. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available JSNP Database Description. General information of database: Database name: JSNP. Creator/affiliation: Japan Science and Technology Agency. Database classification: Human Genes and Diseases - General polymorphism databases. Organism: Homo sapiens (Taxonomy ID: 9606). Database description: A database of about 197,000 polymorphisms in Japanese population ... Features and manner of utilization of database: Allele frequencies in the Japanese population are also available. License: ...

  14. Visualization of spatial plans in GIS environment

    Directory of Open Access Journals (Sweden)

    Bakić Olgica

    2011-01-01

    Full Text Available This paper deals with some issues in the domain of visualization of planning solutions, with reference to presenting the needed contents on thematic and referral maps. A map is a text written in cartographic language, an unavoidable tool for presenting the plan and planning solutions. The starting point for making thematic maps are the basic postulates of traditional mapping, combined with the capacities of modern IT solutions. In that sense, the authors offer suggestions for improving the development of the maps which accompany the plan, by using new techniques based on Geographic Information Systems (GIS). The issue is considered in the context of the development of planning practice, through the formation and management of a unique spatial database as a prerequisite for the further implementation, updating and presentation of plans on the intranet and Internet. The experiences of the Spatial Plan of the Special Purpose Area of the National Park 'Đerdap' are used as a case study. Since the development of the National Park opens a number of conflicting issues of sustainability, and since the defined solutions can be realized on these principles and criteria only with the hard work of all actors in the area, the complexity of conflicts and planning requirements is reflected in the contents of the cartographic solutions (referral maps). The paper points out the importance of the visual appearance of cartographic representations and comments on the changes in mapping from analog to digital.

  15. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database (Arabidopsis Phenome Database): 2017/02/27: Arabidopsis Phenome Database English archive site is opened. Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  16. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database (SKIP Stemcell Database): 2017/03/13: SKIP Stemcell Database English archive site is opened. 2013/03/29: SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  17. Spatial cluster modelling

    CERN Document Server

    Lawson, Andrew B

    2002-01-01

    Research has generated a number of advances in methods for spatial cluster modelling in recent years, particularly in the area of Bayesian cluster modelling. Along with these advances has come an explosion of interest in the potential applications of this work, especially in epidemiology and genome research. In one integrated volume, this book reviews the state-of-the-art in spatial clustering and spatial cluster modelling, bringing together research and applications previously scattered throughout the literature. It begins with an overview of the field, then presents a series of chapters that illuminate the nature and purpose of cluster modelling within different application areas, including astrophysics, epidemiology, ecology, and imaging. The focus then shifts to methods, with discussions on point and object process modelling, perfect sampling of cluster processes, partitioning in space and space-time, spatial and spatio-temporal process modelling, nonparametric methods for clustering, and spatio-temporal ...

  18. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of an access application developed for the purpose.

  19. IAEA Radiation Events Database (RADEV)

    International Nuclear Information System (INIS)

    Wheatley, J.; Ortiz-Lopez, P.

    2001-01-01

    Whilst the use of ionizing radiation continues to bring benefits to many people throughout the world there is increasing concern at the number of reported accidents involving radiation. Such accidents have had an impact on the lives of patients, workers and members of the public, the consequences of which have ranged from trivial health effects to fatalities. In order to reduce the number of accidents and to mitigate their consequences it is, therefore, necessary to raise awareness of the causes of accidents and to note the lessons that can be learned. The IAEA's database on unusual radiation events (RADEV) is intended to provide a world-wide focal point for such information. (author)

  20. Interconnecting heterogeneous database management systems

    Science.gov (United States)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  1. Spatial measurement errors in the field of spatial epidemiology.

    Science.gov (United States)

    Zhang, Zhijie; Manjourides, Justin; Cohen, Ted; Hu, Yi; Jiang, Qingwu

    2016-07-01

    Spatial epidemiology has been aided by advances in geographic information systems, remote sensing, global positioning systems and the development of new statistical methodologies specifically designed for such data. Given the growing popularity of these studies, we sought to review and analyze the types of spatial measurement errors commonly encountered during spatial epidemiological analysis of spatial data. Google Scholar, Medline, and Scopus databases were searched using a broad set of terms for papers indexed by a term indicating location (space or geography or location or position) and measurement error (measurement error or measurement inaccuracy or misclassification or uncertainty); we reviewed all papers appearing before December 20, 2014. These papers and their citations were reviewed to identify their relevance to our review. We were able to define and classify spatial measurement errors into four groups: (1) pure spatial location measurement errors, including both non-instrumental errors (multiple addresses, geocoding errors, outcome aggregations, and covariate aggregation) and instrumental errors; (2) location-based outcome measurement errors (purely outcome measurement errors and missing outcome measurements); (3) location-based covariate measurement errors (address proxies); and (4) covariate-outcome spatially misaligned measurement errors. We propose how these four classes of errors can be unified within an integrated theoretical model, and possible solutions are discussed. Spatial measurement errors are a ubiquitous threat to the validity of spatial epidemiological studies. We propose a systematic framework for understanding the various mechanisms which generate spatial measurement errors and present practical examples of such errors.

  2. Presentation : Spatial Analysis and GEOmatics

    Directory of Open Access Journals (Sweden)

    Didier Josselin

    2006-10-01

    Full Text Available SAGEO is the annual International Conference on Spatial Analysis and GEOmatics. It aims to: present recent and high-quality research in the field of Geomatics and Spatial Analysis; bring together researchers from various disciplines; and provide an exchange platform on research and development in Geomatics, on a national and international level, for public or private bodies. SAGEO is a good opportunity for two complementary research networks to meet and to discuss their points of view on spatial...

  3. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run a simulation on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi...
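
    A minimal sketch of the incidence idea described above is given below: cells are glued to vertices, and simple upward-incidence and neighbourhood queries are answered from that relation. It is an illustrative in-memory structure, not the ImG-Complexes model or a database implementation.

    ```python
    from collections import defaultdict

    class Mesh:
        """Tiny incidence-style mesh store: vertices, cells, and their incidences."""

        def __init__(self):
            self.coords = {}                       # vertex id -> (x, y, z)
            self.cells = {}                        # cell id -> tuple of vertex ids
            self.vertex_cells = defaultdict(set)   # vertex id -> incident cell ids

        def add_vertex(self, vid, xyz):
            self.coords[vid] = xyz

        def add_cell(self, cid, vertex_ids):
            self.cells[cid] = tuple(vertex_ids)
            for vid in vertex_ids:
                self.vertex_cells[vid].add(cid)

        def cells_incident_to(self, vid):
            """Cells glued to a given vertex (an 'upward' incidence query)."""
            return self.vertex_cells[vid]

        def neighbours(self, cid):
            """Cells sharing at least one vertex with `cid`."""
            out = set()
            for vid in self.cells[cid]:
                out |= self.vertex_cells[vid]
            out.discard(cid)
            return out

    # Usage: two triangles sharing an edge.
    m = Mesh()
    for vid, xyz in enumerate([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]):
        m.add_vertex(vid, xyz)
    m.add_cell("t0", (0, 1, 2))
    m.add_cell("t1", (1, 3, 2))
    print(m.cells_incident_to(1))   # {'t0', 't1'}
    print(m.neighbours("t0"))       # {'t1'}
    ```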

  4. HIV Structural Database

    Science.gov (United States)

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  5. Structural Ceramics Database

    Science.gov (United States)

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  6. The international spinach database

    NARCIS (Netherlands)

    Treuren, van R.; Menting, F.B.J.

    2007-01-01

    The database concentrates on passport data of spinach germplasm collections worldwide. All available passport data of accessions included in the International Spinach Database are downloadable as a zipped Excel file. This zip file also contains the decoding tables, except for the FAO institutes

  7. Directory of IAEA databases

    International Nuclear Information System (INIS)

    1992-12-01

    This second edition of the Directory of IAEA Databases has been prepared within the Division of Scientific and Technical Information (NESI). Its main objective is to describe the computerized information sources available to staff members. This directory contains all databases produced at the IAEA, including databases stored on the mainframe, LANs and PCs. All IAEA Division Directors have been requested to register the existence of their databases with NESI. For the second edition, database owners were requested to review the existing entries for their databases and answer four additional questions. The four additional questions concerned the type of database (e.g. Bibliographic, Text, Statistical, etc.), the category of database (e.g. Administrative, Nuclear Data, etc.), the available documentation and the type of media used for distribution. In the individual entries on the following pages, the answers to the first two questions (type and category) are always listed, but the answers to the second two questions (documentation and media) are only listed when information has been made available

  8. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  9. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue inspired database with documentation, network (individual and institutional profiles) and current news, paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996

  10. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active...

  11. Consumer Product Category Database

    Science.gov (United States)

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  12. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, and such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...

  13. Architectural Implications for Spatial Object Association Algorithms*

    Science.gov (United States)

    Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste

    2013-01-01

    Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244
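
    The sketch below shows the core of a crossmatch in its simplest form: match objects of one catalog to objects of another that lie within a fixed radius, here using a k-d tree on plane coordinates. It is an illustrative stand-in with made-up data, not the survey-specific algorithms evaluated in the paper (real sky crossmatch works on spherical coordinates).

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def crossmatch(points_a, points_b, radius):
        """Return {index in A: list of indices in B within `radius`}."""
        tree = cKDTree(points_b)                       # index catalog B once
        matches = tree.query_ball_point(points_a, r=radius)
        return {i: hits for i, hits in enumerate(matches) if hits}

    # Synthetic catalogs: B contains slightly perturbed copies of the first 500 A objects.
    rng = np.random.default_rng(1)
    cat_a = rng.uniform(0, 100, size=(1000, 2))
    cat_b = cat_a[:500] + rng.normal(scale=0.01, size=(500, 2))
    print(len(crossmatch(cat_a, cat_b, radius=0.05)))  # roughly 500 matched objects
    ```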

  14. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...

  15. Spatial Operations

    Directory of Open Access Journals (Sweden)

    Anda VELICANU

    2010-09-01

    Full Text Available This paper contains a brief description of the most important operations that can be performed on spatial data, such as spatial queries; create, update, insert and delete operations; conversions; operations on the map; or analysis on grid cells. Each operation has a graphical example and some of them have code examples in Oracle and PostgreSQL.
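
    The paper's own examples are written for Oracle and PostgreSQL; as a language-neutral illustration of the same kinds of operations (create a buffer, test containment, intersect geometries), the sketch below uses the Python Shapely library instead. The parcel and point coordinates are invented for the example.

    ```python
    from shapely.geometry import Point, Polygon

    parcel = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])   # a rectangular parcel
    well = Point(1, 1)

    buffer_zone = well.buffer(1.5)                        # create: a zone around a point
    print(well.within(parcel))                            # query: is the point inside? -> True
    overlap = parcel.intersection(buffer_zone)            # analysis: overlap of zone and parcel
    print(round(overlap.area, 2))                         # area of the overlapping region
    print(parcel.contains(Point(10, 10)))                 # query: point far outside -> False
    ```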

  16. Spatializing Time

    DEFF Research Database (Denmark)

    Thomsen, Bodil Marie Stavning

    2011-01-01

    The article analyses some of artist Søren Lose's photographic installations in which time, history and narration are reflected in the creation of allegoric, spatial relations.

  17. Spatial Analysis of the National Evaluation of Scholastic Achievement (ENLACE in Schools of the Municipality of Juarez, Chihuahua

    Directory of Open Access Journals (Sweden)

    Luis Ernesto Cervera Gómez

    2008-05-01

    Full Text Available This research focused on analyzing the results of the first National Assessment of Academic Achievement for Scholar Centers (ENLACE, acronym in Spanish) applied during the year 2006 in the Municipality of Juarez (State of Chihuahua, Mexico). In order to conduct the spatial analysis, a geographic information system (GIS) was used to build a georeferenced database in which all variables were connected to a point representing a school. Results of the examinations, expressed as deficient, elemental, good and excellent, were spatially distributed over the urban area of Ciudad Juárez. Apparently there is a high spatial correlation between ENLACE's results and the socioeconomic level of the population. Results ranging from good to excellent were spatially located in the more developed sectors of the city, while poor results, ranging from insufficient to elemental, were located in places with greater infrastructure deficits and lower socioeconomic levels.

  18. Toward a CFD-grade database addressing LWR containment phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Paladino, Domenico, E-mail: domenico.paladino@psi.ch [Laboratory for Thermal-Hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Andreani, Michele; Zboray, Robert; Dreier, Joerg [Laboratory for Thermal-Hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland)

    2012-12-15

    Highlights: • The SETH-2 PANDA tests have supplied CFD-grade data on plumes and jets at large scale. • The PANDA tests have contributed to the understanding of phenomena with high safety relevance for LWRs. • The related analytical activities increased confidence in the use of various computational tools for safety analysis. - Abstract: The large-scale, multi-compartment PANDA facility (located at PSI in Switzerland) is one of the state-of-the-art facilities which is continuously upgraded to progressively match the requirements of CFD-grade experiments. Within the OECD/SETH projects, the PANDA facility has been used for the creation of an experimental database on basic containment phenomena, e.g. gas mixing, transport, stratification, condensation. In the PANDA tests, these phenomena are driven by large-scale plumes or jets. The paper presents a selection of the SETH PANDA experimental results. Examples of analytical activities performed at PSI using the GOTHIC, CFX-4 and CFX-5 codes are used to illustrate how the spatial and temporal resolutions of the measurement grid in PANDA tests are adequate for CFD code (and advanced containment code) assessment and validation purposes.

  19. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available DGBY Database Description. General information of database: Database name: DGBY. Contact: TEL: +81-29-838-8066. Database classification: Microarray Data and other Gene Expression Databases. Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: ... (so-called phenomics). We uploaded these data on this website, which is designated DGBY (Database for Gene expression and function of Baker's yeast). Features and manner of utilization of database: This database ...

  20. Detecting Spatial Patterns of Natural Hazards from the Wikipedia Knowledge Base

    Science.gov (United States)

    Fan, J.; Stewart, K.

    2015-07-01

    The Wikipedia database is a data source of immense richness and variety. Included in this database are thousands of geotagged articles, including, for example, almost real-time updates on current and historic natural hazards. This includes user-contributed information about the location of natural hazards, the extent of the disasters, and many details relating to response, impact, and recovery. In this research, a computational framework is proposed to detect spatial patterns of natural hazards from the Wikipedia database by combining topic modeling methods with spatial analysis techniques. The computation is performed on the Neon Cluster, a high-performance computing cluster at the University of Iowa. This work uses wildfires as the exemplar hazard, but the framework is easily generalizable to other types of hazards, such as hurricanes or flooding. Latent Dirichlet Allocation (LDA) modeling is first employed to train on the entire English Wikipedia dump, transforming the database dump into a 500-dimension topic model. Over 230,000 geotagged articles are then extracted from the Wikipedia database, spatially covering the contiguous United States. The geotagged articles are converted into the LDA topic space based on the topic model, with each article being represented as a weighted multidimensional topic vector. By treating each article's topic vector as an observed point in geographic space, a probability surface is calculated for each of the topics. In this work, Wikipedia articles about wildfires are extracted from the Wikipedia database, forming a wildfire corpus and creating a basis for the topic vector analysis. The spatial distribution of wildfire outbreaks in the US is estimated by calculating the weighted sum of the topic probability surfaces using a map algebra approach, and mapped using GIS. To provide an evaluation of the approach, the estimation is compared to wildfire hazard potential maps created by the USDA Forest Service.
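
    A scaled-down sketch of the pipeline is shown below: a tiny made-up corpus is topic-modeled, each geotagged article becomes a topic vector, and a weighted Gaussian-kernel surface is built for one topic. The corpus, coordinates, grid, and kernel bandwidth are all invented for illustration; the actual study trained a 500-topic model on the full Wikipedia dump and used map algebra in a GIS.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy corpus of "geotagged articles" and their (hypothetical) locations.
    docs = [
        "wildfire burned forest acres evacuation",
        "wildfire smoke drought fire season",
        "city museum opened new exhibition",
        "river flood damaged homes rainfall",
    ]
    lons = np.array([-120.5, -118.2, -87.6, -95.4])
    lats = np.array([38.6, 34.1, 41.9, 29.8])

    # Train a small LDA model and represent each article as a topic vector.
    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    doc_topics = lda.fit_transform(X)

    # Topic dominating the first (wildfire) article; its weights drive the surface.
    topic_of_interest = int(np.argmax(doc_topics[0]))
    weights = doc_topics[:, topic_of_interest]

    # Weighted Gaussian-kernel surface on a coarse lon/lat grid.
    grid_lon, grid_lat = np.meshgrid(np.linspace(-125, -80, 50), np.linspace(25, 50, 30))
    sigma = 3.0
    surface = np.zeros_like(grid_lon)
    for w, lon, lat in zip(weights, lons, lats):
        surface += w * np.exp(-((grid_lon - lon) ** 2 + (grid_lat - lat) ** 2) / (2 * sigma ** 2))

    print(surface.shape, round(float(surface.max()), 3))
    ```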

  1. Geologic database for digital geology of California, Nevada, and Utah: an application of the North American Data Model

    Science.gov (United States)

    Bedford, David R.; Ludington, Steve; Nutt, Constance M.; Stone, Paul A.; Miller, David M.; Miller, Robert J.; Wagner, David L.; Saucedo, George J.

    2003-01-01

    The USGS is creating an integrated national database for digital state geologic maps that includes stratigraphic, age, and lithologic information. The majority of the conterminous 48 states have digital geologic base maps available, often at scales of 1:500,000. This product is a prototype, and is intended to demonstrate the types of derivative maps that will be possible with the national integrated database. This database permits the creation of a number of types of maps via simple or sophisticated queries, maps that may be useful in a number of areas, including mineral-resource assessment, environmental assessment, and regional tectonic evolution. This database is distributed with three main parts: a Microsoft Access 2000 database containing geologic map attribute data, an Arc/Info (Environmental Systems Research Institute, Redlands, California) Export format file containing points representing designation of stratigraphic regions for the Geologic Map of Utah, and an ArcView 3.2 (Environmental Systems Research Institute, Redlands, California) project containing scripts and dialogs for performing a series of generalization and mineral resource queries. IMPORTANT NOTE: Spatial data for the respective state geologic maps are not distributed with this report. The digital state geologic maps for the states involved in this report are separate products, and two of them are produced by individual state agencies, which may be legally and/or financially responsible for these data. However, the spatial datasets for maps discussed in this report are available to the public. Questions regarding the distribution, sale, and use of individual state geologic maps should be sent to the respective state agency. We do provide suggestions for obtaining and formatting the spatial data to make it compatible with data in this report. See section ‘Obtaining and Formatting Spatial Data’ in the PDF version of the report.

  2. Function Point Analysis Depot

    Science.gov (United States)

    Muniz, R.; Martinez, El; Szafran, J.; Dalton, A.

    2011-01-01

    The Function Point Analysis (FPA) Depot is a web application originally designed by one of the NE-C3 branch's engineers, Jamie Szafran, and created specifically for the Software Development team of the Launch Control Systems (LCS) project. The application evaluates the work of each developer in order to obtain a realistic estimate of the hours to be assigned to a specific development task. The Architect Team had made design change requests for the Depot to change the schema of the application's information; once that information was changed in the database, it also needed to be changed in the graphical user interface (GUI), written in Ruby on Rails (RoR), and in the web service/server side, written in Java, to match the database changes. These changes were made by two interns from NE-C: Ricardo Muniz from NE-C3, who made all the schema changes for the GUI in RoR, and Edwin Martinez, from NE-C2, who made all the changes on the Java side.

  3. Hazard Analysis Database Report

    Energy Technology Data Exchange (ETDEWEB)

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  4. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  5. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program which builds atom-bond connection tables from nomenclature has been developed. Chemical substances are input with their nomenclature and various trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data are from laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  6. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    ... and reporting purposes. This paper presents the LandIT database, which is a result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems. The LandIT database is in principle based on the ISOBUS standard; however, the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database, based on a real-life farming case study.

  7. Soil organic carbon stocks in Alaska estimated with spatial and pedon data

    Science.gov (United States)

    Bliss, Norman B.; Maursetter, J.

    2010-01-01

    Temperatures in high-latitude ecosystems are increasing faster than the average rate of global warming, which may lead to a positive feedback for climate change by increasing the respiration rates of soil organic C. If a positive feedback is confirmed, soil C will represent a source of greenhouse gases that is not currently considered in international protocols to regulate C emissions. We present new estimates of the stocks of soil organic C in Alaska, calculated by linking spatial and field data developed by the USDA NRCS. The spatial data are from the State Soil Geographic database (STATSGO), and the field and laboratory data are from the National Soil Characterization Database, also known as the pedon database. The new estimates range from 32 to 53 Pg of soil organic C for Alaska, formed by linking the spatial and field data using the attributes of Soil Taxonomy. For modelers, we recommend an estimation method based on taxonomic subgroups with interpolation for missing areas, which yields an estimate of 48 Pg. This is a substantial increase over a magnitude of 13 Pg estimated from only the STATSGO data as originally distributed in 1994, but the increase reflects different estimation methods and is not a measure of the change in C on the landscape. Pedon samples were collected between 1952 and 2002, so the results do not represent a single point in time. The linked databases provide an improved basis for modeling the impacts of climate change on net ecosystem exchange.
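
    The linkage idea, stripped to its essentials, is sketched below: pedon carbon densities are averaged by taxonomic subgroup and joined to the mapped area of each subgroup, and area-weighted stocks are summed. Column names and numbers are invented for illustration; the real STATSGO/pedon linkage involves depth integration and handling of missing subgroups, among other steps.

    ```python
    import pandas as pd

    # Hypothetical pedon measurements: organic C density per pedon, by taxonomic subgroup.
    pedons = pd.DataFrame({
        "subgroup": ["Typic Historthels", "Typic Historthels", "Typic Haplocryods"],
        "carbon_kg_m2": [60.0, 72.0, 18.0],
    })

    # Hypothetical spatial data: total mapped area carrying each subgroup.
    map_units = pd.DataFrame({
        "subgroup": ["Typic Historthels", "Typic Haplocryods"],
        "area_km2": [150_000.0, 90_000.0],
    })

    # Average density per subgroup, join to areas, then form area-weighted stocks.
    density = pedons.groupby("subgroup", as_index=False)["carbon_kg_m2"].mean()
    linked = map_units.merge(density, on="subgroup", how="left")

    # kg/m2 * km2 * 1e6 m2/km2 = kg; divide by 1e12 to report petagrams (Pg).
    linked["stock_Pg"] = linked["carbon_kg_m2"] * linked["area_km2"] * 1e6 / 1e12
    print(linked[["subgroup", "stock_Pg"]])
    print("total:", round(linked["stock_Pg"].sum(), 2), "Pg")
    ```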

  8. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SAHG Database Description. General information of database: Database name: SAHG. Contact: Chie Motono, Tel: +81-3-3599-8067. Database classification: Structure Databases - Protein structure; Human and other Vertebrate Genomes - Human ORFs; Protein sequence databases - Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database description: ... 42,577 domain-structure models in ~24,900 unique human protein sequences from the RefSeq database. Features and manner of utilization of database: ...

  9. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PSCDB Database Description. General information of database: Database name: PSCDB. Contact: National Institute of Advanced Industrial Science and Technology (AIST), Takayuki Amemiya. Database classification: Structure Databases - Protein structure. Database description: The purpose of this database is to represent the relationship between p... License: CC BY-SA. Reference(s): Article title: PSCDB: a database for protein structural change upon ligand binding. Author name(s): T. A...

  10. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available PLACE Database Description. General information of database: Database name: PLACE. Alternative name: A Database... Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Database description: PLACE is a database of motifs found in plant cis-acting regulatory DNA elements, base... that have been identified in these motifs in other genes or in other plant species in later publications. The database ...

  11. Functionally Graded Materials Database

    Science.gov (United States)

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

    The Functionally Graded Materials Database (hereinafter referred to as the FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database includes 1,703 research information entries with 2,429 researcher data, 509 institution data and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively. The English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. This paper explains the FGMs Database, describing the research information data, the sitemap and how to use it. From the access analysis, user access results and users' interests are discussed.

  12. Marine Jurisdictions Database

    National Research Council Canada - National Science Library

    Goldsmith, Roger

    1998-01-01

    The purpose of this project was to take the data gathered for the Maritime Claims chart and create a Maritime Jurisdictions digital database suitable for use with oceanographic mission planning objectives...

  13. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  14. Children's Culture Database (CCD)

    DEFF Research Database (Denmark)

    Wanting, Birgit

    a Dialogue inspired database with documentation, network (individual and institutional profiles) and current news , paper presented at the research seminar: Electronic access to fiction, Copenhagen, November 11-13, 1996...

  15. The Danish Melanoma Database

    DEFF Research Database (Denmark)

    Hölmich, Lisbet Rosenkrantz; Klausen, Siri; Spaun, Eva

    2016-01-01

    AIM OF DATABASE: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. STUDY POPULATION: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive ... -node-metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a), results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence ..., nature, and treatment hereof is registered. In case of death, the cause and date are included. Currently, all data are entered manually; however, data catchment from the existing registries is planned to be included shortly. DESCRIPTIVE DATA: The DMD is an old research database, but new as a clinical...

  16. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures per year. The variables are collected along the course of treatment of the patient, from the referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynecological findings, type of operation, complications if relevant, implants used if relevant, and 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database...

  17. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures for gynecological cancer. STUDY POPULATION: DGCD was initiated January 1, 2005, and includes all patients treated at Danish hospitals for cancer of the ovaries, peritoneum, fallopian tubes, cervix, vulva, vagina, and uterus, including rare histological types. MAIN VARIABLES: DGCD data are organized within separate ... is the registration of oncological treatment data, which is incomplete for a large number of patients. CONCLUSION: The very complete collection of available data from several registries forms one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation...

  18. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  19. Atomicity for XML Databases

    Science.gov (United States)

    Biswas, Debmalya; Jiwane, Ashwin; Genest, Blaise

    With more and more data stored into XML databases, there is a need to provide the same level of failure resilience and robustness that users have come to expect from relational database systems. In this work, we discuss strategies to provide the transactional aspect of atomicity to XML databases. The main contribution of this paper is to propose a novel approach for performing updates-in-place on XML databases, with the undo statements stored in the same high level language as the update statements. Finally, we give experimental results to study the performance/storage trade-off of the updates-in-place strategy (based on our undo proposal) against the deferred updates strategy to providing atomicity.
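
    As a minimal illustration of the undo idea (not the paper's actual system), the sketch below performs an in-place update on an XML tree while recording a compensating action; on failure, the undo log is replayed in reverse to restore the original state. The document structure and element names are invented for the example.

    ```python
    import xml.etree.ElementTree as ET

    doc = ET.fromstring('<account id="a1"><balance>100</balance></account>')

    def update_text(elem, new_text, undo_log):
        """Update an element's text in place, recording a compensating action."""
        old = elem.text
        undo_log.append(lambda: setattr(elem, "text", old))  # undo statement
        elem.text = new_text

    def rollback(undo_log):
        """Replay the undo log in reverse order to restore the original state."""
        for undo in reversed(undo_log):
            undo()

    undo_log = []
    balance = doc.find("balance")
    try:
        update_text(balance, "40", undo_log)        # update-in-place
        raise RuntimeError("simulated failure before commit")
    except RuntimeError:
        rollback(undo_log)                          # atomicity: restore old state

    print(ET.tostring(doc, encoding="unicode"))     # balance is back to 100
    ```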

  20. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  1. Ganymede Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 150 major impact craters on Ganymede and is updated semi-regularly based on continuing analysis...

  2. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  3. Toxicity Reference Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested...

  4. Records Management Database

    Data.gov (United States)

    US Agency for International Development — The Records Management Database is tool created in Microsoft Access specifically for USAID use. It contains metadata in order to access and retrieve the information...

  5. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  6. Database for West Africa

    African Journals Online (AJOL)

    NCRS USDA (English; morphology and analytical). ISIS ISRIC (English). ... problems. The compilation of the database cannot be carried out without adequate funding. It also needs strong and firm management. It is important that all participants ...

  7. Venus Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 900 or so impact craters on the surface of Venus by diameter, latitude, and name.

  8. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  9. Drycleaner Database - Region 7

    Data.gov (United States)

    U.S. Environmental Protection Agency — THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB) which tracks all Region7 drycleaners who notify...

  10. National Assessment Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The National Assessment Database stores and tracks state water quality assessment decisions, Total Maximum Daily Loads (TMDLs) and other watershed plans designed to...

  11. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  12. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  13. Global Volcano Locations Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...

  14. IVR RSA Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database contains trip-level reports submitted by vessels participating in Research Set-Aside projects with IVR reporting requirements.

  15. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  16. Livestock Anaerobic Digester Database

    Science.gov (United States)

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  17. Food Habits Database (FHDBS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NEFSC Food Habits Database has two major sources of data. The first, and most extensive, is the standard NEFSC Bottom Trawl Surveys Program. During these...

  18. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  19. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  20. Callisto Crater Database

    Data.gov (United States)

    National Aeronautics and Space Administration — This web page leads to a database of images and information about the 150 major impact craters on Callisto and is updated semi-regularly based on continuing analysis...

  1. Uranium Location Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata...

  2. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  3. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  4. The Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Guldberg, Rikke; Brostrøm, Søren; Hansen, Jesper Kjær

    2013-01-01

    INTRODUCTION AND HYPOTHESIS: The Danish Urogynaecological Database (DugaBase) is a nationwide clinical database established in 2006 to monitor, ensure and improve the quality of urogynaecological surgery. We aimed to describe its establishment and completeness and to validate selected variables. This is the first study based on data from the DugaBase. METHODS: The database completeness was calculated as a comparison between urogynaecological procedures reported to the Danish National Patient Registry and to the DugaBase. Validity was assessed for selected variables from a random sample of 200 women in the DugaBase from 1 January 2009 to 31 October 2010, using medical records as a reference. RESULTS: A total of 16,509 urogynaecological procedures were registered in the DugaBase by 31 December 2010. The database completeness has increased by calendar time, from 38.2 % in 2007 to 93.2 % in 2010 for public...

  5. OTI Activity Database

    Data.gov (United States)

    US Agency for International Development — OTI's worldwide activity database is a simple and effective information system that serves as a program management, tracking, and reporting tool. In each country,...

  6. Databases of surface wave dispersion

    Directory of Open Access Journals (Sweden)

    L. Boschi

    2005-06-01

    Full Text Available Observations of seismic surface waves provide the most important constraint on the elastic properties of the Earth’s lithosphere and upper mantle. Two databases of fundamental mode surface wave dispersion were recently compiled and published by groups at Harvard (Ekström et al., 1997) and Utrecht/Oxford (Trampert and Woodhouse, 1995, 2001), and later employed in 3-d global tomographic studies. Although based on similar sets of seismic records, the two databases show some significant discrepancies. We derive phase velocity maps from both, and compare them to quantify the discrepancies and assess the relative quality of the data; in this endeavour, we take careful account of the effects of regularization and parametrization. At short periods, where Love waves are mostly sensitive to crustal structure and thickness, we refer our comparison to a map of the Earth’s crust derived from independent data. On the assumption that second-order effects like seismic anisotropy and scattering can be neglected, we find the measurements of Ekström et al. (1997) of better quality; those of Trampert and Woodhouse (2001) result in phase velocity maps of much higher spatial frequency which are, accordingly, more difficult to explain and justify geophysically. The discrepancy is partly explained by the more conservative a priori selection of data implemented by Ekström et al. (1997). Nevertheless, it becomes more significant with decreasing period, which indicates that it could also be traced to the different measurement techniques employed by the authors.

  7. Database on Wind Characteristics

    DEFF Research Database (Denmark)

    Højstrup, J.; Ejsing Jørgensen, Hans; Lundtang Petersen, Erik

    1999-01-01

    This report describes the work and results of the project Database on Wind Characteristics, which was sponsored partly by the European Commission within the framework of the JOULE III program under contract JOR3-CT95-0061.

  8. Latent myofascial trigger points.

    Science.gov (United States)

    Ge, Hong-You; Arendt-Nielsen, Lars

    2011-10-01

    A latent myofascial trigger point (MTP) is defined as a focus of hyperirritability in a muscle taut band that is clinically associated with local twitch response and tenderness and/or referred pain upon manual examination. Current evidence suggests that the temporal profile of the spontaneous electrical activity at an MTP is similar to focal muscle fiber contraction and/or muscle cramp potentials, which contribute significantly to the induction of local tenderness and pain and motor dysfunctions. This review highlights the potential mechanisms underlying the sensory-motor dysfunctions associated with latent MTPs and discusses the contribution of central sensitization associated with latent MTPs and the MTP network to the spatial propagation of pain and motor dysfunctions. Treating latent MTPs in patients with musculoskeletal pain may not only decrease pain sensitivity and improve motor functions, but also prevent latent MTPs from transforming into active MTPs, and hence, prevent the development of myofascial pain syndrome.

  9. Critical Points of Contact

    DEFF Research Database (Denmark)

    Jensen, Ole B.; Wind, Simon; Lanng, Ditte Bendix

    2012-01-01

    In this brief article, we shall illustrate the application of the analytical and interventionist concept of ‘Critical Points of Contact’ (CPC) through a number of urban design studios. The notion of CPC has been developed over a span of the last three to four years and is reported in more detail elsewhere (Jensen & Morelli 2011). In this article, we will only discuss the conceptual and theoretical framing superficially, since our real interest is to show and discuss the concept's application value to spatial design in a number of urban design studios. The 'data' or the projects presented are seven student studios made at the 1st semester of the Urban Design Master Programme in the fall of 2009 and 2010. The CPC concept is double edged since it both provides the stepping-stone for analysis as well as scaffolding for intervention and re-design. Thereby, it fits the underlying philosophy of teaching...

  10. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  11. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2014/05/07 - The contact information was corrected; the features and manner of utilization of the database were corrected. 2014/02/04 - The Trypanosomes Database English archive site was opened. 2011/04/04 - The Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) was opened.

  12. THE INTERNET PRESENTATION OF DATABASES OF GLACIERS OF THE SOUTH OF EASTERN SIBERIA

    Directory of Open Access Journals (Sweden)

    A. D. Kitov

    2017-01-01

    Full Text Available The authors consider the technology for creating databases of glaciers in Southern Siberia and the presentation of these databases on the Internet. The technology consists in the recognition and vectorization of spatial, multi-temporal data using GIS techniques, followed by the formation of databases that reflect the spatial and temporal variation of nival-glacial formations. The results of the GIS design are presented on the IG SB RAS website and, with the help of the ArcGIS Online Internet service, on a public map. The mapping of the databases shows the dynamics of nival-glacial formations for three time phases: the beginning of the 20th century (where data are available), its middle (the catalogs of glaciers and topographic maps), and the beginning of the 21st century (according to satellite images and field research). Graphic objects are represented as point, line, and polygonal GIS themes. Point themes indicate parameters such as the center and the lower and upper boundaries of the glacier. Line themes determine the length and perimeter of the glacier. Polygonal themes define the contour of the glacier and its area. The attribute table corresponds to the international standard World Glacier Inventory (WGI). On international portals, the contours of the glaciers of northern Asia are represented only schematically (as ellipses), and their attribute characteristics correspond to the state recorded in the catalogs of glaciers of the USSR, so they are inaccurate. The databases considered here are free of these shortcomings. The coordinates of the glacier centers have been refined. Glacier contours have boundaries appropriate to satellite images or topographic maps, in shp-file format. New glaciers of the Baikalskiy and Barguzinskiy ridges are also presented; existing catalogs and databases still do not include these glaciers. Features of the glaciers are examined in the context of the latitudinal transect of southern Siberia, from the Kodar ridge to the Eastern Sayan. GIS analysis of the databases

  13. Point pattern analysis applied to flood and landslide damage events in Switzerland (1972-2009)

    Science.gov (United States)

    Barbería, Laura; Schulte, Lothar; Carvalho, Filipe; Peña, Juan Carlos

    2017-04-01

    Damage caused by meteorological and hydrological extreme events depends on many factors, not only on hazard, but also on exposure and vulnerability. In order to reach a better understanding of the relation of these complex factors, their spatial pattern and underlying processes, the spatial dependency between values of damage recorded at sites of different distances can be investigated by point pattern analysis. For the Swiss flood and landslide damage database (1972-2009) first steps of point pattern analysis have been carried out. The most severe events have been selected (severe, very severe and catastrophic, according to the GEES classification, a total number of 784 damage points) and Ripley's K-test and L-test have been performed, amongst others. For this purpose, R's library spatstat has been used. The results confirm that the damage points present a statistically significant clustered pattern, which could be connected to the prevalence of damage near watercourses and also to the rainfall distribution of each event, together with other factors. On the other hand, bivariate analysis shows there is no segregated pattern depending on process type: flood/debris flow vs landslide. This close relation points to a coupling between slope and fluvial processes, connectivity between small-size and middle-size catchments and the influence of the spatial distribution of precipitation, temperature (snow melt and snow line) and other predisposing factors such as soil moisture, land cover and environmental conditions. Therefore, further studies will investigate the relationship between the spatial pattern and one or more covariates, such as elevation, distance from a watercourse or land use. The final goal will be to fit a regression model to the data, so that the adjusted model predicts the intensity of the point process as a function of the above mentioned covariates.
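
    For readers unfamiliar with the statistics named above, the sketch below is a minimal, edge-correction-free estimate of Ripley's K and its L-transform on synthetic points. The abstract itself uses R's spatstat on the Swiss damage database; this Python version only illustrates the idea.

        import numpy as np

        # Naive estimate of Ripley's K for a point pattern in a rectangular window,
        # ignoring edge correction for brevity; synthetic points, not the damage data.
        def ripley_k(points, r_values, window_area):
            n = len(points)
            lam = n / window_area                       # intensity (points per unit area)
            diffs = points[:, None, :] - points[None, :, :]
            dists = np.sqrt((diffs ** 2).sum(axis=-1))
            np.fill_diagonal(dists, np.inf)             # exclude self-pairs
            return np.array([np.sum(dists < r) / (lam * n) for r in r_values])

        def ripley_l(k_values, r_values):
            # L-transform: L(r) = sqrt(K(r)/pi); L(r) - r > 0 hints at clustering.
            return np.sqrt(np.asarray(k_values) / np.pi) - np.asarray(r_values)

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 100, size=(200, 2))        # CSR pattern in a 100 x 100 window
        r = np.linspace(1, 20, 10)
        print(ripley_l(ripley_k(pts, r, window_area=100 * 100), r))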

  14. Saccharomyces genome database: underlying principles and organisation.

    Science.gov (United States)

    Dwight, Selina S; Balakrishnan, Rama; Christie, Karen R; Costanzo, Maria C; Dolinski, Kara; Engel, Stacia R; Feierbach, Becket; Fisk, Dianna G; Hirschman, Jodi; Hong, Eurie L; Issel-Tarver, Laurie; Nash, Robert S; Sethuraman, Anand; Starr, Barry; Theesfeld, Chandra L; Andrada, Rey; Binkley, Gail; Dong, Qing; Lane, Christopher; Schroeder, Mark; Weng, Shuai; Botstein, David; Cherry, J Michael

    2004-03-01

    A scientific database can be a powerful tool for biologists in an era where large-scale genomic analysis, combined with smaller-scale scientific results, provides new insights into the roles of genes and their products in the cell. However, the collection and assimilation of data is, in itself, not enough to make a database useful. The data must be incorporated into the database and presented to the user in an intuitive and biologically significant manner. Most importantly, this presentation must be driven by the user's point of view; that is, from a biological perspective. The success of a scientific database can therefore be measured by the response of its users - statistically, by usage numbers and, in a less quantifiable way, by its relationship with the community it serves and its ability to serve as a model for similar projects. Since its inception ten years ago, the Saccharomyces Genome Database (SGD) has seen a dramatic increase in its usage, has developed and maintained a positive working relationship with the yeast research community, and has served as a template for at least one other database. The success of SGD, as measured by these criteria, is due in large part to philosophies that have guided its mission and organisation since it was established in 1993. This paper aims to detail these philosophies and how they shape the organisation and presentation of the database.

  15. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...residue (or mutant) in a protein. The experimental data are collected from the literature both by searching the sequence database, UniProt, structural database, PDB, and literature database

  16. The 3-D global spatial data model foundation of the spatial data infrastructure

    CERN Document Server

    Burkholder, Earl F

    2008-01-01

    Traditional methods for handling spatial data are encumbered by the assumption of separate origins for horizontal and vertical measurements. Modern measurement systems operate in a 3-D spatial environment. The 3-D Global Spatial Data Model: Foundation of the Spatial Data Infrastructure offers a new model for handling digital spatial data, the global spatial data model or GSDM. The GSDM preserves the integrity of three-dimensional spatial data while also providing additional benefits such as simpler equations, worldwide standardization, and the ability to track spatial data accuracy with greater specificity and convenience. This groundbreaking spatial model incorporates both a functional model and a stochastic model to connect the physical world to the ECEF rectangular system. Combining horizontal and vertical data into a single, three-dimensional database, this authoritative monograph provides a logical development of theoretical concepts and practical tools that can be used to handle spatial data mo...
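
    The rectangular ECEF system mentioned in this blurb can be illustrated with the standard geodetic-to-ECEF conversion below. WGS84 ellipsoid parameters are assumed here purely for the example; the book treats the model in general terms.

        import math

        # Geodetic (lat, lon, ellipsoidal height) to Earth-Centered Earth-Fixed (ECEF)
        # coordinates, the rectangular system the GSDM builds on. WGS84 assumed.
        WGS84_A = 6378137.0                 # semi-major axis [m]
        WGS84_F = 1 / 298.257223563         # flattening
        WGS84_E2 = WGS84_F * (2 - WGS84_F)  # first eccentricity squared

        def geodetic_to_ecef(lat_deg, lon_deg, h):
            lat, lon = math.radians(lat_deg), math.radians(lon_deg)
            n = WGS84_A / math.sqrt(1 - WGS84_E2 * math.sin(lat) ** 2)  # prime vertical radius
            x = (n + h) * math.cos(lat) * math.cos(lon)
            y = (n + h) * math.cos(lat) * math.sin(lon)
            z = (n * (1 - WGS84_E2) + h) * math.sin(lat)
            return x, y, z

        # Example: a point at 1200 m ellipsoidal height.
        print(geodetic_to_ecef(32.3, -106.8, 1200.0))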

  17. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: KOME. Alternative name: Knowledge-base... Contact: Plant Genome Research Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice. Organism Taxonomy Name: Oryza sativa; Taxonomy ID: 4530. Database description: Information about the approximately ... full-length cDNA project is shown in the database. The full-length cDNA clones were collected from various tissues ... treated under various stress conditions. The database contains not only information about complete nucleotid

  18. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    Jervis, P.; Rutter, P.

    1988-09-01

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980's to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  19. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  20. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.

  1. Spatially enabling the health sector

    Directory of Open Access Journals (Sweden)

    Tarun Stephen Weeramanthri

    2016-11-01

    Full Text Available Spatial information describes the physical location of either people or objects, and the measured relationships between them. In this article we offer the view that greater utilisation of spatial information and its related technology, as part of a broader redesign of the architecture of health information at local and national levels, could assist and speed up the process of health reform, which is taking place across the globe in richer and poorer countries alike. In making this point, we describe the impetus for health sector reform, recent developments in spatial information and analytics, and current Australasian spatial health research. We highlight examples of uptake of spatial information by the health sector, as well as missed opportunities. Our recommendations to spatially enable the health sector are applicable to high- and low-resource settings.

  2. Spatiotemporal Compression Techniques for Moving Point Objects

    NARCIS (Netherlands)

    Meratnia, Nirvana; de By, R.A.; de By, R.A.; Bertino, E.

    Moving object data handling has received a fair share of attention over recent years in the spatial database community. This is understandable as positioning technology is rapidly making its way into the consumer market, not only through the already ubiquitous cell phone but soon also through small,

  3. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... Contact: ...iomedical Innovation, 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan (TEL: 81-72-641-9826). Database classification: Toxicogenomics Database. Organism Taxonomy Name: Rattus norvegicus; Taxonomy ID: 10116. Database description: On the pathological image database, over 53,000 high-resolution

  4. Spatial regression analysis of traffic crashes in Seoul.

    Science.gov (United States)

    Rhee, Kyoung-Ah; Kim, Joon-Ki; Lee, Young-ihn; Ulfarsson, Gudmundur F

    2016-06-01

    Traffic crashes can be spatially correlated events, and the analysis of the distribution of traffic crash frequency requires evaluation of parameters that reflect spatial properties and correlation. Typically this spatial aspect of crash data is not used in everyday practice by planning agencies, and this contributes to a gap between research and practice. A database of traffic crashes in Seoul, Korea, in 2010 was developed at the traffic analysis zone (TAZ) level with a number of GIS-developed spatial variables. Practical spatial models using available software were estimated. The spatial error model was determined to be better than the spatial lag model and an ordinary least squares baseline regression. A geographically weighted regression model provided useful insights about localization of effects. The results found that an increased length of roads with a speed limit below 30 km/h and a higher ratio of residents below the age of 15 were correlated with lower traffic crash frequency, while a higher ratio of residents who moved to the TAZ, more vehicle-kilometers traveled, and a greater number of access points with a speed limit difference between side roads and mainline above 30 km/h all increased the number of traffic crashes. This suggests, for example, that better control or design for merging lower speed roads with higher speed roads is important. A key result is that the length of bus-only center lanes had the largest effect on increasing traffic crashes. This is important as bus-only center lanes with bus stop islands have been increasingly used to improve transit times. Hence the potential negative safety impacts of such systems need to be studied further and mitigated through improved design of pedestrian access to center bus stop islands. Copyright © 2016 Elsevier Ltd. All rights reserved.
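
    A minimal sketch of the non-spatial starting point of such a study is shown below: an ordinary least squares fit followed by Moran's I on the residuals, the usual diagnostic before moving to spatial lag or error models. The TAZ data and the weight matrix are synthetic placeholders, not the Seoul data.

        import numpy as np

        # OLS baseline plus Moran's I on the residuals, computed with numpy only.
        rng = np.random.default_rng(1)
        n = 50
        X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 covariates
        y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.5, size=n)

        # Row-standardised spatial weights from a random symmetric neighbour structure.
        W = rng.integers(0, 2, size=(n, n)).astype(float)
        W = np.triu(W, 1)
        W = W + W.T
        W = W / W.sum(axis=1, keepdims=True)

        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
        resid = y - X @ beta

        def morans_i(z, W):
            z = z - z.mean()
            return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

        print("OLS beta:", beta)
        print("Moran's I of residuals:", morans_i(resid, W))  # near 0 -> little spatial autocorrelation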

  5. Analyzing algorithms for nonlinear and spatially nonuniform phase shifts in the liquid crystal point diffraction interferometer. 1998 summer research program for high school juniors at the University of Rochester's Laboratory for Laser Energetics. Student research reports

    International Nuclear Information System (INIS)

    Jain, N.

    1999-03-01

    Phase-shifting interferometry has many advantages, and the phase-shifting nature of the Liquid Crystal Point Diffraction Interferometer (LCPDI) promises to provide significant improvement over other current OMEGA wavefront sensors. However, while phase-shifting capabilities improve its accuracy as an interferometer, phase shifting itself introduces errors. Phase-shifting algorithms are designed to eliminate certain types of phase-shift errors, and it is important to choose an algorithm that is best suited for use with the LCPDI. Using polarization microscopy, the authors have observed a correlation between LC alignment around the microsphere and fringe behavior. After designing a procedure to compare phase-shifting algorithms, they were able to predict the accuracy of two particular algorithms through computer modeling of device-specific phase-shift errors.
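
    As background for the algorithm comparison described above, the sketch below implements the generic four-step phase-shifting algorithm, which recovers the wrapped phase from four interferograms taken at 90-degree increments. It is a textbook algorithm, not necessarily one of the specific algorithms evaluated for the LCPDI.

        import numpy as np

        # Classic four-step phase-shifting algorithm: phi = atan2(I4 - I2, I1 - I3).
        def four_step_phase(i1, i2, i3, i4):
            return np.arctan2(i4 - i2, i1 - i3)   # wrapped phase in (-pi, pi]

        # Synthetic test: a known linear phase ramp reconstructed from four frames.
        x = np.linspace(0, 4 * np.pi, 256)
        true_phase = 0.5 * x
        frames = [1.0 + 0.8 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
        recovered = four_step_phase(*frames)
        print(np.allclose(recovered, np.angle(np.exp(1j * true_phase))))  # True (up to wrapping)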

  6. Danish Pancreatic Cancer Database

    DEFF Research Database (Denmark)

    Fristrup, Claus; Detlefsen, Sönke; Palnæs Hansen, Carsten

    2016-01-01

    AIM OF DATABASE: The Danish Pancreatic Cancer Database aims to prospectively register the epidemiology, diagnostic workup, diagnosis, treatment, and outcome of patients with pancreatic cancer in Denmark at an institutional and national level. STUDY POPULATION: Since May 1, 2011, all patients with microscopically verified ductal adenocarcinoma of the pancreas have been registered in the database. As of June 30, 2014, the total number of patients registered was 2,217. All data are cross-referenced with the Danish Pathology Registry and the Danish Patient Registry to ensure the completeness of registrations. MAIN VARIABLES: The main registered variables are patient demographics, performance status, diagnostic workup, histological and/or cytological diagnosis, and clinical tumor stage. The following data on treatment are registered: type of operation, date of first adjuvant, neoadjuvant, and first

  7. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automated information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and the identification of significant differences between SQL implementations. Finally, the developed software and the administrator's and installation guides are described. (orig.)

  8. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer the users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data coming from different sources, having a variety of forms - both structured and unstructured - are filtered according to business rules and integrated in a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems - including databases - are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse one.

  9. The Danish Sarcoma Database

    DEFF Research Database (Denmark)

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

    AIM: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. STUDY POPULATION: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, are to be registered since 2009. MAIN VARIABLES: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor...... of Diseases - tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. CONCLUSION: The Danish Sarcoma Database is population based and includes sarcomas occurring...

  10. The PROSITE database.

    Science.gov (United States)

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S; Pagni, Marco; Sigrist, Christian J A

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at http://www.expasy.org/prosite/.

  11. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data. These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We
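
    A toy version of the idea, with invented names and data, is sketched below: a stored relation of sensor metadata is joined against a stream of readings, and a running per-location average plays the role of the persistent view.

        import random

        # Stored relation (sensor metadata) joined with streaming readings; a running
        # average per location is maintained incrementally as a simple "persistent view".
        stored_relation = {1: "hallway", 2: "lab", 3: "roof"}      # sensor_id -> location

        def sensor_stream(n_readings=9):
            for _ in range(n_readings):
                yield {"sensor_id": random.choice(list(stored_relation)), "temp_c": random.gauss(21, 2)}

        view = {loc: (0.0, 0) for loc in stored_relation.values()}  # location -> (sum, count)
        for reading in sensor_stream():
            loc = stored_relation[reading["sensor_id"]]             # join stream with stored data
            total, count = view[loc]
            view[loc] = (total + reading["temp_c"], count + 1)

        for loc, (total, count) in view.items():
            print(loc, round(total / count, 2) if count else "no data")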

  12. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database or operators of the database can be altered to deliver incorrect results when used in queries. This paper will discuss categories of possibilities that exist to alter the application schema with some practical examples. Two forensic environments are introduced where a forensic investigation can take place in. Arguments are provided why these environments are important. Methods are presented how these environments can be achieved for the application schema layer of a DBMS. A process is proposed on how forensic evidence should be extracted from the application schema layer of a DBMS. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.

  13. DistiLD Database

    DEFF Research Database (Denmark)

    Palleja, Albert; Horn, Heiko; Eliasson, Sabrina

    2012-01-01

    Genome-wide association studies (GWAS) have identified thousands of single nucleotide polymorphisms (SNPs) associated with the risk of hundreds of diseases. However, there is currently no database that enables non-specialists to answer the following simple questions: which SNPs associated...... blocks, so that SNPs in LD with each other are preferentially in the same block, whereas SNPs not in LD are in different blocks. By projecting SNPs and genes onto LD blocks, the DistiLD database aims to increase usage of existing GWAS results by making it easy to query and visualize disease......-associated SNPs and genes in their chromosomal context. The database is available at http://distild.jensenlab.org/....

  14. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    Cao Ruiding; Chen Guorong; Chen Xianfeng; Zhang Yishu

    1996-01-01

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts: a nuclear power plant database, a nuclear power position database, and a nuclear power equipment database. The database holds a great deal of technical data and pictures of nuclear power, provided by engineering design units and individuals. The database can assist the designers of nuclear power

  15. The Danish Sarcoma Database

    Directory of Open Access Journals (Sweden)

    Jorgensen PH

    2016-10-01

    Full Text Available Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, are to be registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative

  16. Spatial and temporal distribution of geophysical disasters

    Directory of Open Access Journals (Sweden)

    Cvetković Vladimir

    2013-01-01

    Full Text Available Natural disasters of all kinds (meteorological, hydrological, geophysical, climatological and biological) are increasingly becoming part of the everyday life of modern humans. The consequences are often devastating to the life, health and property of people, as well as to the security of states and entire international regions. In this regard, we noted the need for a comprehensive investigation of the phenomenology of natural disasters. In addition, it is particularly important to pay attention to the different factors that might correlate with each other to indicate more dubious and more original facts about their characteristics. However, as the issue of natural disasters is very wide, the subject of this paper will be the forms, consequences, and temporal and spatial distribution of geophysical natural disasters, while the analysis of other disasters will be the subject of our future research. Using the international database on natural disasters of the Centre for Research on the Epidemiology of Disasters (CRED) based in Brussels, with the support of statistical analysis (SPSS), we tried to point out the number, trends, consequences, and the spatial and temporal distribution of earthquakes, volcanic eruptions and dry mass movements in the world, from 1900 to 2013.

  17. C# Database Basics

    CERN Document Server

    Schmalz, Michael

    2012-01-01

    Working with data and databases in C# certainly can be daunting if you're coming from VB6, VBA, or Access. With this hands-on guide, you'll shorten the learning curve considerably as you master accessing, adding, updating, and deleting data with C#-basic skills you need if you intend to program with this language. No previous knowledge of C# is necessary. By following the examples in this book, you'll learn how to tackle several database tasks in C#, such as working with SQL Server, building data entry forms, and using data in a web service. The book's code samples will help you get started

  18. The Danish Anaesthesia Database

    DEFF Research Database (Denmark)

    Antonsen, Kristian; Rosenstock, Charlotte Vallentin; Lundstrøm, Lars Hyldborg

    2016-01-01

    AIM OF DATABASE: The aim of the Danish Anaesthesia Database (DAD) is the nationwide collection of data on all patients undergoing anesthesia. Collected data are used for quality assurance, quality development, and serve as a basis for research projects. STUDY POPULATION: The DAD was founded in 2004...... direct patient-related lifestyle factors enabling a quantification of patients' comorbidity as well as variables that are strictly related to the type, duration, and safety of the anesthesia. Data and specific data combinations can be extracted within each department in order to monitor patient treatment...

  19. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  20. The Danish Depression Database

    DEFF Research Database (Denmark)

    Videbech, Poul Bror Hemming; Deleuran, Anette

    2016-01-01

    AIM OF DATABASE: The purpose of the Danish Depression Database (DDD) is to monitor and facilitate the improvement of the quality of the treatment of depression in Denmark. Furthermore, the DDD has been designed to facilitate research. STUDY POPULATION: Inpatients as well as outpatients with depression, aged above 18 years, and treated in the public psychiatric hospital system were enrolled. MAIN VARIABLES: Variables include whether the patient has been thoroughly somatically examined and has been interviewed about the psychopathology by a specialist in psychiatry. The Hamilton score as well

  1. Yucca Mountain digital database

    International Nuclear Information System (INIS)

    Daudt, C.R.; Hinze, W.J.

    1992-01-01

    This paper discusses the Yucca Mountain Digital Database (DDB) which is a digital, PC-based geographical database of geoscience-related characteristics of the proposed high-level waste (HLW) repository site of Yucca Mountain, Nevada. It was created to provide the US Nuclear Regulatory Commission's (NRC) Advisory Committee on Nuclear Waste (ACNW) and its staff with a visual perspective of geological, geophysical, and hydrological features at the Yucca Mountain site as discussed in the Department of Energy's (DOE) pre-licensing reports

  2. Database Management System

    Science.gov (United States)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  3. Rett networked database

    DEFF Research Database (Denmark)

    Grillo, Elisa; Villard, Laurent; Clarke, Angus

    2012-01-01

    underlie some (usually variant) cases. There is only limited correlation between genotype and phenotype. The Rett Networked Database (http://www.rettdatabasenetwork.org/) has been established to share clinical and genetic information. Through an "adaptor" process of data harmonization, a set of 293 clinical items and 16 genetic items was generated; 62 clinical and 7 genetic items constitute the core dataset; 23 clinical items contain longitudinal information. The database contains information on 1838 patients from 11 countries (December 2011), with or without mutations in known genes. These numbers

  4. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases - collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.

  5. Surgery Risk Assessment (SRA) Database

    Data.gov (United States)

    Department of Veterans Affairs — The Surgery Risk Assessment (SRA) database is part of the VA Surgical Quality Improvement Program (VASQIP). This database contains assessments of selected surgical...

  6. Spatial patterns of heavy metal contamination by urbanization

    Science.gov (United States)

    Delbecque, Nele; Verdoodt, Ann

    2015-04-01

    Source identification is an important step towards predictive models of urban heavy metal (HM) contamination. This study assesses the spatial distribution of enrichment of eight HMs (As, Cd, Cr, Cu, Hg, Ni, Pb and Zn) in the city of Ghent (156.18 km2; Belgium). A database with HM concentrations measured in the topsoil at 2138 point observations was collected from the Public Waste Agency of Flanders. The degree of anthropogenic HM enrichment was quantified using an urban pollution index (PI). Enrichment of HMs showed high variations throughout the study area due to manifold anthropogenic sources. Topsoil in Ghent was especially enriched with Cu, Ni, Pb and Zn, with median PI's of 1.91, 1.74, 2.12 and 2.02 respectively. Contrastingly, As, Cd, Hg and Cr generally did not exceed expected background concentrations, with median PI values at or below 1. A stratification by land use (agriculture, park and recreation, residential zones, harbor and industry) generally revealed high enrichment of Cu, Ni, Pb and Zn in residential areas linked to housing and traffic, but proved unsatisfactory to capture major trends in urban spatial HM distributions. Moreover, an important control of industrial and traffic emissions is suggested for Ni, Cu, Pb and Zn. Industrial non-airborne point source contaminations were mainly historical, rather than linked to current industrial activities. Results indicated that urban-rural gradients or current land use stratification approaches are inadequate to predict spatial HM distributions in cities with a long history of industrialization.
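
    The pollution index itself is not defined in this record; a common formulation, assumed in the sketch below, is the measured topsoil concentration divided by a background or reference concentration, so that PI > 1 indicates enrichment. The background values used here are illustrative placeholders only.

        # Pollution index as measured concentration / background concentration
        # (assumed definition; the study may use a variant). Values are placeholders.
        BACKGROUND_MG_KG = {"Cu": 17.0, "Ni": 16.0, "Pb": 25.0, "Zn": 62.0}

        def pollution_index(sample_mg_kg, background=BACKGROUND_MG_KG):
            return {metal: sample_mg_kg[metal] / background[metal]
                    for metal in sample_mg_kg if metal in background}

        sample = {"Cu": 32.5, "Ni": 27.8, "Pb": 53.0, "Zn": 125.0}
        print(pollution_index(sample))   # PI > 1 indicates enrichment above background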

  7. Evaluation of device-independent internet spatial location

    OpenAIRE

    Komosný, Dan; Pang, Paul; Mehic, Miralem; Vozňák, Miroslav

    2017-01-01

    Device-independent Internet spatial location is needed for many purposes, such as data personalisation and social behaviour analysis. Internet spatial databases provide such locations based on the IP address of a device. The free-to-use databases are natively included in many UNIX and Linux operating systems. These systems are predominantly used for e-shops, social networks, and cloud data storage. Using a constructed ground truth dataset, we comprehensively evaluate these databases for null r...
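
    The core measurement in such an evaluation is the great-circle distance between a ground-truth location and the location returned by the database. The sketch below computes that error with the haversine formula; the database lookup is a mocked placeholder rather than a real GeoIP query.

        import math

        # Haversine great-circle distance between true and estimated coordinates.
        def haversine_km(lat1, lon1, lat2, lon2):
            r = 6371.0  # mean Earth radius [km]
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def mock_db_lookup(ip):
            # Placeholder standing in for a real geolocation database query.
            return {"192.0.2.1": (50.08, 14.43)}.get(ip)  # e.g. Prague

        ground_truth = {"192.0.2.1": (49.19, 16.61)}      # e.g. Brno
        for ip, (lat, lon) in ground_truth.items():
            est = mock_db_lookup(ip)
            if est is not None:
                print(ip, round(haversine_km(lat, lon, est[0], est[1]), 1), "km error")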

  8. Rasdaman for Big Spatial Raster Data

    Science.gov (United States)

    Hu, F.; Huang, Q.; Scheele, C. J.; Yang, C. P.; Yu, M.; Liu, K.

    2015-12-01

    Spatial raster data have grown exponentially over the past decade. Recent advancements in data acquisition technology, such as remote sensing, have allowed us to collect massive observation data of various spatial resolution and domain coverage. The volume, velocity, and variety of such spatial data, along with the computationally intensive nature of spatial queries, pose a grand challenge to storage technologies for effective big data management. While high performance computing platforms (e.g., cloud computing) can be used to solve the computing-intensive issues in big data analysis, data has to be managed in a way that is suitable for distributed parallel processing. Recently, rasdaman (raster data manager) has emerged as a scalable and cost-effective database solution to store and retrieve massive multi-dimensional arrays, such as sensor, image, and statistics data. Within this paper, the pros and cons of using rasdaman to manage and query spatial raster data will be examined and compared with other common approaches, including file-based systems, relational databases (e.g., PostgreSQL/PostGIS), and NoSQL databases (e.g., MongoDB and Hive). Earth Observing System (EOS) data collected from NASA's Atmospheric Scientific Data Center (ASDC) will be used and stored in these selected database systems, and a set of spatial and non-spatial queries will be designed to benchmark their performance on retrieving large-scale, multi-dimensional arrays of EOS data. Lessons learnt from using rasdaman will be discussed as well.
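
    A hedged sketch of querying a rasdaman server through its OGC web services interface is shown below. The endpoint URL, coverage name, axis labels and exact request parameters are assumptions made for illustration of a typical WCS/WCPS deployment, not details taken from this abstract.

        import requests

        # Issue a WCPS query over HTTP; all identifiers below are hypothetical.
        ENDPOINT = "https://example.org/rasdaman/ows"          # assumed petascope endpoint
        wcps_query = (
            'for c in (AvgLandTemp) '
            'return encode(c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")], "csv")'
        )

        response = requests.get(
            ENDPOINT,
            params={
                "service": "WCS",
                "version": "2.0.1",
                "request": "ProcessCoverages",
                "query": wcps_query,
            },
            timeout=30,
        )
        response.raise_for_status()
        print(response.text)   # one year of monthly values at a single point, as CSV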

  9. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    Science.gov (United States)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
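
    The two encoding steps described in this abstract can be sketched in miniature as below: rasterising a roof point cloud into a top-view depth image by keeping the highest return per cell, then summarising the image with a normalised height histogram. Grid size and bin counts are arbitrary choices for the example, not the paper's parameters.

        import numpy as np

        # (1) Rasterise a point cloud into a top-view depth image (highest return per cell).
        def topview_depth_image(points, cell=0.5, grid=64):
            img = np.full((grid, grid), np.nan)
            x0, y0 = points[:, 0].min(), points[:, 1].min()
            cols = np.clip(((points[:, 0] - x0) / cell).astype(int), 0, grid - 1)
            rows = np.clip(((points[:, 1] - y0) / cell).astype(int), 0, grid - 1)
            for r, c, z in zip(rows, cols, points[:, 2]):
                if np.isnan(img[r, c]) or z > img[r, c]:
                    img[r, c] = z
            return img

        # (2) Summarise the depth image with a normalised height histogram descriptor.
        def height_histogram_descriptor(img, bins=16):
            z = img[~np.isnan(img)]
            hist, _ = np.histogram(z, bins=bins, range=(z.min(), z.max()))
            return hist / hist.sum()

        rng = np.random.default_rng(2)
        cloud = np.column_stack([rng.uniform(0, 30, 500), rng.uniform(0, 20, 500), rng.uniform(5, 9, 500)])
        print(height_histogram_descriptor(topview_depth_image(cloud)).round(3))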

  10. IMAGE-BASED AIRBORNE LiDAR POINT CLOUD ENCODING FOR 3D BUILDING MODEL RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Y.-C. Chen

    2016-06-01

    Full Text Available With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show

  11. Spatial interpolation

    NARCIS (Netherlands)

    Stein, A.

    1991-01-01

    The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are

  12. State estimation for temporal point processes

    NARCIS (Netherlands)

    van Lieshout, Maria Nicolette Margaretha

    2015-01-01

    This paper is concerned with combined inference for point processes on the real line observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point processes. For a range of models, the marginal and

  13. MARKS ON ART database

    DEFF Research Database (Denmark)

    van Vlierden, Marieke; Wadum, Jørgen; Wolters, Margreet

    2016-01-01

    Master's marks, monograms, and quality marks are often found embossed or stamped on works of art from 1300-1700. An illustrated database of these types of marks is being established at the Nederlands Kunsthistoriske Institut (RKD) in The Hague....

  14. Relational Database and Retrieval

    African Journals Online (AJOL)

    Computer Aided Design for Soil Classification. Relational Database and Retrieval Techniques ... also presents algorithms showing the procedure for generating various soil classifications, retrieval techniques for ... In the engineering discipline, for instance, design choices are a compromise, shaped by many competing factors.

  15. Relational database telemanagement.

    Science.gov (United States)

    Swinney, A R

    1988-05-01

    Dallas-based Baylor Health Care System recognized the need for a way to control and track responses to their marketing programs. To meet the demands of data management and analysis, and build a useful database of current customers and future prospects, the marketing department developed a system to capture, store and manage these responses.

  16. The CEBAF Element Database

    Energy Technology Data Exchange (ETDEWEB)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
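
    The introspective schema mentioned above, in which new elements, types, and properties are defined without altering table structure, is in spirit an entity-attribute-value layout. The following sketch illustrates that general idea with SQLite; the table, column, and element names are invented for illustration and are not the actual CED schema or API.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE property (id INTEGER PRIMARY KEY,
                                   type_id INTEGER REFERENCES element_type(id),
                                   name TEXT);
            CREATE TABLE element (id INTEGER PRIMARY KEY,
                                  type_id INTEGER REFERENCES element_type(id),
                                  name TEXT UNIQUE);
            CREATE TABLE element_property (element_id INTEGER REFERENCES element(id),
                                           property_id INTEGER REFERENCES property(id),
                                           value TEXT);
        """)

        # Defining a new type and a new property is a plain row insert -- no ALTER TABLE.
        conn.execute("INSERT INTO element_type (name) VALUES ('Quadrupole')")
        conn.execute("INSERT INTO property (type_id, name) VALUES (1, 'field_gradient')")
        conn.execute("INSERT INTO element (type_id, name) VALUES (1, 'quad_01')")
        conn.execute("INSERT INTO element_property (element_id, property_id, value) "
                     "VALUES (1, 1, '3.2')")

        # Query an element's properties without knowing its type in advance.
        rows = conn.execute("""
            SELECT e.name, p.name, ep.value
            FROM element e
            JOIN element_property ep ON ep.element_id = e.id
            JOIN property p ON p.id = ep.property_id
        """).fetchall()

    In such a layout, adding an element type or property is an ordinary data change rather than a schema change, which is what makes querying past and future configurations with the same tools practical.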

  17. From database to normbase

    NARCIS (Netherlands)

    Stamper, R.K.; Liu, Kecheng; Liu, K.; Kolkman, M.; Kolkman, M.; Klarenberg, P.; Ades, Y.; van Slooten, C.; van Slooten, F.; Ades, Y.

    1991-01-01

    After the database concept, we are ready for the normbase concept. The object is to decouple organizational and technical knowledge that are now mixed inextricably together in the application programs we write today. The underlying principle is to find a way of specifying a social system as a system

  18. The International Lactuca database

    NARCIS (Netherlands)

    Treuren, van R.; Menting, F.B.J.

    2014-01-01

    The International Lactuca Database includes accessions of species belonging to the genus Lactuca, but also a few accessions belonging to related genera. Passport data can be searched on-line or downloaded. Characterization and evaluation data can be accessed via the downloading section. Requests for

  19. Oversigt over databaser

    DEFF Research Database (Denmark)

    Krogh Graversen, Brian

    This is an overview of registers that can be used to shed light on the situation and developments in the social area. The overview is the second phase of a data project whose aim is to establish a database that can form the basis for ongoing monitoring, assessment, evaluation, and research in the...

  20. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

    The organisations that participated in the project consist of five research organisations: MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark), Risoe (Denmark) and one wind turbine manufacturer: Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and to create tools for efficiently searching through the data to select interesting records. The project resulted in a database located at DTU, Denmark, with online access through the Internet. The database contains more than 50,000 hours of wind speed measurements. A wide range of wind climates and terrain types are represented with significant amounts of time series. Data have been chosen selectively with a deliberate over-representation of high-wind and complex-terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim, and this is realised with data from a large range of terrain types; everything from offshore to mountain, from Norway to Greece. (EHS)

  1. Harmonization of Databases

    DEFF Research Database (Denmark)

    Charlifue, Susan; Tate, Denise; Biering-Sorensen, Fin

    2016-01-01

    The objectives of this article are to (1) provide an overview of existing spinal cord injury (SCI) clinical research databases-their purposes, characteristics, and accessibility to users; and (2) present a vision for future collaborations required for cross-cutting research in SCI. This vision hi...

  2. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  3. Geospatial database for heritage building conservation

    Science.gov (United States)

    Basir, W. N. F. W. A.; Setan, H.; Majid, Z.; Chong, A.

    2014-02-01

    Heritage buildings are icons from the past that exist in the present time. Through heritage architecture, we can learn about economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, uncertain weather, pollution and other factors. In order to preserve this heritage for future generations, recording and documenting of heritage buildings are required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modeling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D. They can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that supports various analyses of spatial data in the form of a 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings and to develop a geospatial database for heritage building conservation purposes. The result from data acquisition will become a guideline for 3D model development. This 3D model will be exported to the GIS format in order to develop a database for heritage building conservation. In this database, requirements for the heritage building conservation process are included. Through this research, a proper database for storing and documenting the heritage building conservation data will be developed.

  4. Geospatial database for heritage building conservation

    International Nuclear Information System (INIS)

    Basir, W N F W A; Setan, H; Majid, Z; Chong, A

    2014-01-01

    Heritage buildings are icons from the past that exist in the present time. Through heritage architecture, we can learn about economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, uncertain weather, pollution and other factors. In order to preserve this heritage for future generations, recording and documenting of heritage buildings are required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modeling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D. They can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that supports various analyses of spatial data in the form of a 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings and to develop a geospatial database for heritage building conservation purposes. The result from data acquisition will become a guideline for 3D model development. This 3D model will be exported to the GIS format in order to develop a database for heritage building conservation. In this database, requirements for the heritage building conservation process are included. Through this research, a proper database for storing and documenting the heritage building conservation data will be developed

  5. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: RPSD; Alternative name: Rice Protein Structure Database; DOI: 10.18908/lsdba.nbdc00749-000; Creator Name: Toshimasa Yamazaki ... National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan; Database classification: Structure Databases - Protein structure; Organism Taxonomy Name: Or...max, Taxonomy ID: 3847; Database description: We have determined the three-dimensional structures of the protei

  6. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: GETDB; Alternative name: Gal4 Enhancer Trap Insertion Database; DOI: 10.18908/lsdba.nbdc00236-000; Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047, Tel: +81-78-306-3185, FAX: +81-78-306-3183; Database classification: Expression... Invertebrate genome database; Organism Taxonomy Name: Drosophila melanogaster, Taxonomy ID: 7227; Database description: About 4,600 insertion lines of enhancer trap lines based on the Gal4-UAS

  7. THE INTERNET PRESENTATION OF DATABASES OF GLACIERS OF THE SOUTH OF EASTERN SIBERIA

    OpenAIRE

    A. D. Kitov; V. M. Plyusnin; E. N. Ivanov; D. A. Batuev; S. N. Kovalenko

    2017-01-01

    The authors consider the technology for creating databases of glaciers in Southern Siberia and the presentation of these databases on the Internet. The technology consists in the recognition and vectorization of spatial, multi-temporal data using GIS techniques, followed by the formation of databases that reflect the spatial and temporal variation of nival-glacial formations. The results of GIS design are presented on the website IG SB RAS and with the help of Internet service ArcGISonline on...

  8. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Yeast Interacting Proteins Database; Alternative name: -; DOI: 10.18908/lsdba.nbdc00742-000; Creator: C...-ken 277-8561, Tel: +81-4-7136-3989, FAX: +81-4-7136-3979; Database classif...s cerevisiae, Taxonomy ID: 4932; Database description: Information on interactions and related information obta...atures and manner of utilization of database: Protein-protein interaction data obtained by the comprehensive

  9. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. The discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders logical database, interrogation, and phy

  10. Firebird Database Backup by Serialized Database Table Dump

    OpenAIRE

    Ling, Maurice HT

    2007-01-01

    This paper presents a simple data dump and load utility for Firebird databases which mimics mysqldump in MySQL. This utility, fb_dump and fb_load for dumping and loading respectively, retrieves each database table using kinterbasdb and serializes the data using the marshal module. The utility has two advantages over the standard Firebird database backup utility, gbak. Firstly, it is able to back up and restore single database tables, which might help to recover corrupted databases. Secondly, the ...
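
    A rough sketch of the dump-and-load idea, pulling each table through the Python DB-API and serializing the rows with the marshal module, is shown below. For portability the sketch uses the standard library's sqlite3 driver; the original utility targets Firebird through kinterbasdb, which exposes the same DB-API interface. File and table names are illustrative.

        import marshal
        import sqlite3

        def dump_table(conn, table, path):
            # Serialize all rows of one table to a file with the marshal module.
            cur = conn.execute("SELECT * FROM %s" % table)
            cols = [d[0] for d in cur.description]
            with open(path, "wb") as fh:
                marshal.dump({"table": table, "columns": cols, "rows": cur.fetchall()}, fh)

        def load_table(conn, path):
            # Re-insert the rows written by dump_table (the table must already exist).
            with open(path, "rb") as fh:
                payload = marshal.load(fh)
            placeholders = ",".join("?" * len(payload["columns"]))
            conn.executemany("INSERT INTO %s VALUES (%s)" % (payload["table"], placeholders),
                             payload["rows"])
            conn.commit()

        # Toy round trip on an in-memory database.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE t (id INTEGER, name TEXT)")
        db.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
        dump_table(db, "t", "t.dump")
        db.execute("DELETE FROM t")
        load_table(db, "t.dump")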

  11. Databases in the 3rd Millennium: Trends and Research Directions

    Directory of Open Access Journals (Sweden)

    Jaroslav Pokorny

    2010-04-01

    Full Text Available A database approach to data of arbitrary types has shown that the architecture of a universal database machine is restrictive in a number of cases. Special-purpose data servers using new types of hardware devices are appearing. Data is moving toward the user, who exploits mobile devices equipped with database functionality. The goal of the paper is to present new trends in databases, particularly in their architectures, and to show, for cloud computing, data streams, mobile and embedded databases, and databases supporting Web 2.0, some new ideas and possible solutions to associated problems. A further goal of the paper is to point out current problems in database research.

  12. Address Points - Allegheny County Address Points 201601

    Data.gov (United States)

    NSGIC Education | GIS Inventory — This dataset contains Address Points in Allegheny County. The Address Points were created by GDR for the Allegheny County CAD project, October 2008. Data is updated...

  13. The Danish Melanoma Database

    Directory of Open Access Journals (Sweden)

    Hölmich Lr

    2016-10-01

    Full Text Available Lisbet Rosenkrantz Hölmich,1 Siri Klausen,2 Eva Spaun,3 Grethe Schmidt,4 Dorte Gad,5 Inge Marie Svane,6,7 Henrik Schmidt,8 Henrik Frank Lorentzen,9 Else Helene Ibfelt10 1Department of Plastic Surgery, 2Department of Pathology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 3Institute of Pathology, Aarhus University Hospital, Aarhus, 4Department of Plastic and Reconstructive Surgery, Breast Surgery and Burns, Rigshospitalet – Glostrup, University of Copenhagen, Copenhagen, 5Department of Plastic Surgery, Odense University Hospital, Odense, 6Center for Cancer Immune Therapy, Department of Hematology, 7Department of Oncology, Herlev-Gentofte Hospital, University of Copenhagen, Herlev, 8Department of Oncology, 9Department of Dermatology, Aarhus University Hospital, Aarhus, 10Registry Support Centre (East – Epidemiology and Biostatistics, Research Centre for Prevention and Health, Glostrup – Rigshospitalet, University of Copenhagen, Glostrup, Denmark Aim of database: The aim of the database is to monitor and improve the treatment and survival of melanoma patients. Study population: All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive melanoma and 780 with in situ tumors were registered. The coverage is currently 93% compared with the Danish Pathology Register. Main variables: The main variables include demographic, clinical, and pathological characteristics, including Breslow’s tumor thickness, ± ulceration, mitoses, and tumor–node–metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a), results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence, nature, and treatment hereof is registered. In case of death, the cause and date

  14. Spatial housing economics: a survey

    OpenAIRE

    Meen, Geoff

    2016-01-01

    This introduction to the Virtual Special Issue surveys the development of spatial housing economics from its roots in neo-classical theory, through more recent developments in social interactions modelling, and touching on the role of institutions, path dependence and economic history. The survey also points to some of the more promising future directions for the subject that are beginning to appear in the literature. The survey covers elements of hedonic models, spatial econometrics, neighbourh...

  15. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....
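
    The point of the auxiliary variable construction is that the intractable normalising constant cancels from the Metropolis-Hastings ratio. A generic sketch of this cancellation, with notation chosen here rather than taken from the paper, is:

        \[
          \pi(\theta \mid x) \;\propto\; p(\theta)\,\frac{q_\theta(x)}{Z_\theta},
          \qquad Z_\theta \text{ intractable.}
        \]
        Augment the state with an auxiliary configuration $y$ having conditional density
        $g(y \mid \theta, x)$; propose $\theta' \sim \rho(\cdot \mid \theta)$ and draw
        $y' \sim q_{\theta'}(\cdot)/Z_{\theta'}$, for instance by perfect simulation.
        The Hastings ratio is then
        \[
          H \;=\; \frac{p(\theta')\, q_{\theta'}(x)\, g(y' \mid \theta', x)\, q_\theta(y)\,
                        \rho(\theta \mid \theta')}
                       {p(\theta)\, q_\theta(x)\, g(y \mid \theta, x)\, q_{\theta'}(y')\,
                        \rho(\theta' \mid \theta)},
        \]
        in which $Z_\theta$ and $Z_{\theta'}$ cancel, so the chain never has to evaluate them.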

  16. Spatial Culture

    DEFF Research Database (Denmark)

    Reeh, Henrik

    2012-01-01

    Spatial Culture – A Humanities Perspective Abstract of introductory essay by Henrik Reeh Secured by alliances between socio-political development and cultural practices, a new field of humanistic studies in spatial culture has developed since the 1990s. To focus on links between urban culture...... and modern society is, however, an intellectual practice which has a much longer history. Already in the 1980s, the debate on the modern and the postmodern cited Paris and Los Angeles as spatio-cultural illustrations of these major philosophical concepts. Earlier, in the history of critical studies, the work...... Foucault considered a constitutive feature of 20th-century thinking and one that continues to occupy intellectual and cultural debates in the third millennium. A conceptual framework is, nevertheless, necessary, if the humanities are to adequately address city and space – themes that have long been...

  17. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    Science.gov (United States)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and from journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them to the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.
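
    The kind of refinement described above, narrowing point records by taxon, region, time, and depth before download, can be sketched with a few lines of Python. The field names, example records, and thresholds are illustrative assumptions, not the portal's actual schema.

        # Subset occurrence records the way the mapping portal narrows them:
        # by taxon, bounding box (region), year range, and depth range.
        records = [
            {"taxon": "Lophelia pertusa", "lat": 29.1, "lon": -88.2, "year": 2009, "depth_m": 450},
            {"taxon": "Paragorgia arborea", "lat": 57.3, "lon": -152.4, "year": 1998, "depth_m": 220},
        ]

        def subset(recs, taxon=None, bbox=None, years=None, depths=None):
            out = []
            for r in recs:
                if taxon and r["taxon"] != taxon:
                    continue
                if bbox and not (bbox[0] <= r["lat"] <= bbox[1] and bbox[2] <= r["lon"] <= bbox[3]):
                    continue
                if years and not (years[0] <= r["year"] <= years[1]):
                    continue
                if depths and not (depths[0] <= r["depth_m"] <= depths[1]):
                    continue
                out.append(r)
            return out

        gulf_lophelia = subset(records, taxon="Lophelia pertusa",
                               bbox=(24, 31, -98, -80), years=(2000, 2014), depths=(200, 1000))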

  18. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions that have traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to facing this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  19. An unconventional GIS-based method to assess landslide susceptibility using point data features

    Science.gov (United States)

    Adami, S.; Bresolin, M.; Carraretto, M.; Castelletti, P.; Corò, D.; Di Mario, F.; Fiaschi, S.; Frasson, T.; Gandolfo, L.; Mazzalai, L.; Padovan, T.; Sartori, F.; Viganò, A.; Zulian, A.; De Agostini, A.; Pajola, M.; Floris, M.

    2012-04-01

    This work reports the results of a project carried out by the students attending the course "GIS techniques in Applied Geology" at the master level of the Geological Sciences degree of the Department of Geosciences, University of Padua. The project concerns the evaluation of landslide susceptibility in the Val d'Agno basin, located in the North-Eastern Italian Alps and included in the Vicenza Province (Veneto Region, NE Italy). As is well known, most of the models proposed to assess landslide susceptibility are based on the availability of spatial information on landslides and the related predisposing environmental factors. Landslides and related factors are spatially combined in GIS systems to weight the influence of each predisposing factor and produce landslide susceptibility maps. The first and most important input factor is the landslide layer, which has to contain, as a minimum, the shape and type of the landslides, so it must be a polygon feature. In Italy, as in many countries around the world, the location and type of landslides are available in the main spatial databases (the AVI and IFFI projects), but in only a few cases are the mass movements delimited; thus they are spatially represented by point features. As an example, in the Vicenza Province, the IFFI database contains 1692 landslides stored in a point feature, but only 383 were delimited and stored in a polygon feature. In order to provide a method that uses all the available information and makes an effective spatial prediction also in areas where mass movements are mainly stored as point features, the point data representing landslides in the Val d'Agno basin have been buffered to obtain polygon features, which have been combined with morphometric (elevation, slope, aspect and curvature) and non-morphometric (land use, distance from roads and distance from rivers) factors. Two buffers have been created: the first has a radius of 10 meters, the minimum required for the analysis, and the second
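
    The buffering step described in the abstract, turning point-only landslide records into polygons that can be overlaid with the predisposing factors, might look like the following sketch using the shapely library. The 10 m radius follows the text; the coordinates and the second radius are purely illustrative.

        from shapely.geometry import Point

        # Hypothetical landslide points from a point-only inventory
        # (a metric projected coordinate reference system is assumed).
        landslide_points = [Point(1690500.0, 5055200.0), Point(1691830.0, 5056070.0)]

        # Buffer each point to obtain a polygon feature usable in the susceptibility overlay.
        # 10 m is the minimum radius mentioned in the abstract; 25 m is an arbitrary second choice.
        buffers_10m = [p.buffer(10.0) for p in landslide_points]
        buffers_25m = [p.buffer(25.0) for p in landslide_points]

        # Each buffer polygon can then be intersected with the factor layers (elevation, slope,
        # aspect, curvature, land use, distances) in a GIS to build the susceptibility model inputs.
        print(round(buffers_10m[0].area, 1))  # roughly pi * 10**2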

  20. REPLIKASI UNIDIRECTIONAL PADA HETEROGEN DATABASE

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in enterprises today cannot be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, where development can be done quickly and testing of working models of the...

  1. Database on aircraft accidents

    International Nuclear Information System (INIS)

    Nishio, Masahide; Koriyama, Tamio

    2013-11-01

    The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating probability of aircraft crash into nuclear reactor facilities in July 2002. In response to this issue, Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting out them and developing the database of aircraft accidents for the latest 20 years to evaluate probability of aircraft crash into nuclear reactor facilities. In this report the database was revised by adding aircraft accidents in 2011 to the existing database and deleting aircraft accidents in 1991 from it, resulting in development of the revised 2012 database for the latest 20 years from 1992 to 2011. Furthermore, the flight information on commercial aircrafts was also collected to develop the flight database for the latest 20 years from 1992 to 2011 to evaluate probability of aircraft crash into reactor facilities. The method for developing the database of aircraft accidents to evaluate probability of aircraft crash into reactor facilities is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2012 revised database for the latest 20 years from 1992 to 2011 shows the followings. The trend of the 2012 database changes little as compared to the last year's report. (1) The data of commercial aircraft accidents is based on 'Aircraft accident investigation reports of Japan transport safety board' of Ministry of Land, Infrastructure, Transport and Tourism. The number of commercial aircraft accidents is 4 for large fixed-wing aircraft, 58 for small fixed-wing aircraft, 5 for large bladed aircraft and 99 for small bladed aircraft. The relevant accidents

  2. Danish Palliative Care Database

    DEFF Research Database (Denmark)

    Grønvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims: The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population: The study population is all...... patients in Denmark referred to and/or in contact with SPC after January 1, 2010. Main variables: The main variables in DPD are data about referral for patients admitted and not admitted to SPC, type of the first SPC contact, clinical and sociodemographic factors, multidisciplinary conference...... patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion: DPD is a national clinical quality database for SPC having clinically relevant variables and high data and patient completeness....

  3. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  4. Geologic Field Database

    Directory of Open Access Journals (Sweden)

    Katarina Hribernik

    2002-12-01

    Full Text Available The purpose of the paper is to present the field data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia at the scale 1:100,000. The database was created using MS Access software. The MS Access environment ensures its stability and effective operation despite changing, searching, and updating the data. It also enables faster and easier user-friendly access to the field data. Last but not least, in the long term, with the data transferred into the GIS environment, it will provide the basis for a sound geologic information system that will satisfy a broad spectrum of geologists’ needs.

  5. Database on aircraft accidents

    International Nuclear Information System (INIS)

    Nishio, Masahide; Koriyama, Tamio

    2012-09-01

    The Reactor Safety Subcommittee in the Nuclear Safety and Preservation Committee published the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' as the standard method for evaluating probability of aircraft crash into nuclear reactor facilities in July 2002. In response to the report, Japan Nuclear Energy Safety Organization has been collecting open information on aircraft accidents of commercial airplanes, self-defense force (SDF) airplanes and US force airplanes every year since 2003, sorting out them and developing the database of aircraft accidents for latest 20 years to evaluate probability of aircraft crash into nuclear reactor facilities. This year, the database was revised by adding aircraft accidents in 2010 to the existing database and deleting aircraft accidents in 1991 from it, resulting in development of the revised 2011 database for latest 20 years from 1991 to 2010. Furthermore, the flight information on commercial aircrafts was also collected to develop the flight database for latest 20 years from 1991 to 2010 to evaluate probability of aircraft crash into reactor facilities. The method for developing the database of aircraft accidents to evaluate probability of aircraft crash into reactor facilities is based on the report 'The criteria on assessment of probability of aircraft crash into light water reactor facilities' described above. The 2011 revised database for latest 20 years from 1991 to 2010 shows the followings. The trend of the 2011 database changes little as compared to the last year's one. (1) The data of commercial aircraft accidents is based on 'Aircraft accident investigation reports of Japan transport safety board' of Ministry of Land, Infrastructure, Transport and Tourism. 4 large fixed-wing aircraft accidents, 58 small fixed-wing aircraft accidents, 5 large bladed aircraft accidents and 114 small bladed aircraft accidents occurred. The relevant accidents for evaluating

  6. THE EXTRAGALACTIC DISTANCE DATABASE

    International Nuclear Information System (INIS)

    Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.

    2009-01-01

    A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.

  7. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER Design database consists of the Results Database, Inter-Office Communication (IOC), the 3D CAD database, the Team Cooperation system, and Reserved Documents. The Results Database holds the research results of phase II of Liquid Metal Reactor Design Technology Development under mid-term and long-term nuclear R and D. IOC is a linkage control system between subprojects to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents is developed to manage collected data and several documents after project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER

  8. Developing customer databases.

    Science.gov (United States)

    Rao, S K; Shenbaga, S

    2000-01-01

    There is a growing consensus among pharmaceutical companies that more product and customer-specific approaches to marketing and selling a new drug can result in substantial increases in sales. Marketers and researchers taking a proactive micro-marketing approach to identifying, profiling, and communicating with target customers are likely to facilitate such approaches and outcomes. This article provides a working framework for creating customer databases that can be effectively mined to achieve a variety of such marketing and sales force objectives.

  9. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  10. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  11. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  12. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

    The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended as user study material; it shows the main principles of Teradata system operation and describes factors significantly affecting system performance. The following sections are based on previously acquired information, which is used for analysis and ...

  13. The CYATAXO Database

    Czech Academy of Sciences Publication Activity Database

    Komárková, Jaroslava; Nedoma, Jiří

    2006-01-01

    Roč. 6, - (2006), s. 49-54 ISSN 1213-3434 R&D Projects: GA AV ČR(CZ) IAA6005308; GA AV ČR(CZ) IBS6017004 Grant - others: EC(XE) EVK2-CT-1999-00026 Institutional research plan: CEZ:AV0Z60170517 Keywords: Database CYATAXO * cyanobacteria * taxonomy * water-blooms Subject RIV: DJ - Water Pollution; Quality

  14. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features, and operation of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data of professional baseball games to be input as the games proceed and executes data updating in real time, just in time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and general users through NTT Dial Q2 and other channels.

  15. A student database

    OpenAIRE

    Kemaloğlu, Turgut

    1990-01-01

    Ankara : Department of Management and Graduate School of Business Administration, Bilkent Univ., 1990. Thesis (Master's) -- Bilkent University, 1990. Includes bibliographical references. This thesis is a design of a student database system which will manage the data of university students. The aim of the program is to obtain sorted lists of students according to several parameters, to obtain the frequency of grades for a specified course, to design a suitable sheet w...

  16. LHCb Distributed Conditions Database

    CERN Document Server

    Clemencic, Marco

    2007-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica o...

  17. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  18. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  19. MEROPS: the peptidase database.

    Science.gov (United States)

    Rawlings, Neil D; Morton, Fraser R; Kok, Chai Yin; Kong, Jun; Barrett, Alan J

    2008-01-01

    Peptidases (proteolytic enzymes or proteases), their substrates and inhibitors are of great relevance to biology, medicine and biotechnology. The MEROPS database (http://merops.sanger.ac.uk) aims to fulfil the need for an integrated source of information about these. The organizational principle of the database is a hierarchical classification in which homologous sets of peptidases and protein inhibitors are grouped into protein species, which are grouped into families and in turn grouped into clans. Important additions to the database include newly written, concise text annotations for peptidase clans and the small molecule inhibitors that are outside the scope of the standard classification; displays to show peptidase specificity compiled from our collection of known substrate cleavages; tables of peptidase-inhibitor interactions; and dynamically generated alignments of representatives of each protein species at the family level. New ways to compare peptidase and inhibitor complements between any two organisms whose genomes have been completely sequenced, or between different strains or subspecies of the same organism, have been devised.
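
    The clan-family-protein-species hierarchy that organizes MEROPS can be pictured with a minimal nested structure. The identifiers below are well-known examples, but the layout is only an illustration, not the database schema.

        # A toy in-memory picture of the MEROPS hierarchy: clans contain families,
        # and families contain peptidase "protein species". Illustrative only.
        merops = {
            "clan PA": {
                "family S1": ["S01.001 (chymotrypsin A)"],
            },
        }

        def peptidases_in_clan(db, clan):
            # Flatten all protein species stored under one clan.
            return [p for family in db.get(clan, {}).values() for p in family]

        print(peptidases_in_clan(merops, "clan PA"))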

  20. Curcumin Resource Database

    Science.gov (United States)

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, Curcuma longa being its principal producer. Apart from this, a class of promising curcumin analogs has been generated in laboratories, aptly named curcuminoids, which show huge potential in the fields of medicine, food technology, etc. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository to access all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents, and 176 varieties of C. longa obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked to external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. Database URL: http://www.crdb.in PMID:26220923

  1. ARTI Refrigerant Database

    Energy Technology Data Exchange (ETDEWEB)

    Cain, J.M. [Calm (James M.), Great Falls, VA (United States)

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  2. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M. [Calm (James M.), Great Falls, VA (United States)

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  3. Interactive database management (IDM).

    Science.gov (United States)

    Othman, R

    1995-08-01

    Interactive database management (IDM) is data editing software that provides complete data editing at the time of initial data entry, when information is 'fresh at hand'. Under the new interactive system, initial data recording is subjected to instant data editing by the interactive computer software logic. Data are immediately entered in final form into the database and are available for analysis. IDM continuously checks all variables for acceptability, completeness, and consistency. IDM does not allow form duplication. Many functions, including backups, have been automated. The interactive system can export the database to other systems. The software has been implemented for two Department of Veterans Affairs Cooperative Studies (CCSHS #5 and CSP #385), which collect data for 1400 and 1000 variables, respectively, at 28 VA medical centers. IDM is extremely user friendly and simple to operate. Researchers with no computer background can be trained quickly and easily to use the system. IDM is deployed on notebook microcomputers, making it portable for use anywhere in the hospital setting.
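
    The entry-time checks described above (acceptability, completeness, consistency, and duplicate rejection) can be sketched as follows; the field names and rules are invented for illustration and are not taken from the VA studies.

        # Minimal sketch of interactive edit checks applied at the moment of data entry.
        database = []       # accepted records, already in final form
        seen_forms = set()  # used to reject duplicate forms

        def enter_record(rec):
            # Completeness: every required field must be present.
            for field in ("patient_id", "visit_date", "systolic_bp"):
                if field not in rec:
                    return "rejected: missing " + field
            # Acceptability: the value must fall within a plausible range.
            if not 60 <= rec["systolic_bp"] <= 260:
                return "rejected: systolic_bp out of range"
            # Consistency / duplication: the same form may not be entered twice.
            key = (rec["patient_id"], rec["visit_date"])
            if key in seen_forms:
                return "rejected: duplicate form"
            seen_forms.add(key)
            database.append(rec)  # immediately available for analysis
            return "accepted"

        print(enter_record({"patient_id": "A1", "visit_date": "1995-08-01", "systolic_bp": 130}))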

  4. The Cambridge Structural Database.

    Science.gov (United States)

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  5. A database of worldwide glacier thickness observations

    DEFF Research Database (Denmark)

    Gärtner-Roer, I.; Naegeli, K.; Huss, M.

    2014-01-01

    surface observations. However, although thickness has been observed on many glaciers and ice caps around the globe, it has not yet been published in the shape of a readily available database. Here, we present a standardized database of glacier thickness observations compiled by an extensive literature...... review and from airborne data extracted from NASA's Operation IceBridge. This database contains ice thickness observations from roughly 1100 glaciers and ice caps including 550 glacier-wide estimates and 750,000 point observations. A comparison of these observational ice thicknesses with results from...... area- and slope-dependent approaches reveals large deviations both from the observations and between different estimation approaches. For glaciers and ice caps all estimation approaches show a tendency to overestimation. For glaciers the median relative absolute deviation lies around 30% when analyzing...

  6. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST has modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files, and search commands. An example of an online session is presented.

  7. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available DMPD Database Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects... pathway models of transcriptional regulation and signal transduction in CSML format for dynamic simulation base

  8. Marine Trackline Geophysical Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains bathymetry, magnetic, gravity and seismic shot point navigation data collected during marine cruises from 1939 to the present. Coverage is...

  9. Do People Understand Spatial Concepts: The Case of First-Order Primitives

    OpenAIRE

    Golledge, Reginald G.

    1992-01-01

    The purpose of this paper is to examine whether people in general understand elementary spatial concepts, and to examine whether or not naive spatial knowledge includes the ability to understand important spatial primitives that are built into geographic theory, spatial databases and geographic information systems (GIS). The extent of such understanding is a partial measure of spatial ability. Accurate indicators or measures of spatial ability can be used to explain different types of spatial...

  10. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then, smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the SmallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop as the research material can only be populated by hand to obtain the unique data
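
    As a rough illustration of a 'built-to-order' relational model of this kind, the sketch below assembles a smallSat from hardware configuration records; the tables, columns, and sample values are hypothetical and are not the schema of the actual SmallSat Relational Database.

      # Illustrative sketch only: a tiny relational schema in which a smallSat is
      # assembled "built-to-order" from hardware configuration records. The table
      # and column names are assumptions, not the actual SCaN database schema.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE hardware_component (
          component_id INTEGER PRIMARY KEY,
          kind         TEXT NOT NULL,      -- e.g. 'transceiver', 'imager'
          model        TEXT NOT NULL,
          mass_kg      REAL,
          power_w      REAL
      );
      CREATE TABLE smallsat (
          sat_id       INTEGER PRIMARY KEY,
          name         TEXT NOT NULL,
          orbit        TEXT
      );
      CREATE TABLE smallsat_config (        -- built-to-order: many components per sat
          sat_id       INTEGER REFERENCES smallsat(sat_id),
          component_id INTEGER REFERENCES hardware_component(component_id)
      );
      """)

      conn.execute("INSERT INTO smallsat VALUES (1, 'DemoSat-1', 'LEO')")
      conn.executemany("INSERT INTO hardware_component VALUES (?, ?, ?, ?, ?)",
                       [(1, 'transceiver', 'X-band demo', 0.8, 6.0),
                        (2, 'imager', 'RGB demo camera', 1.2, 4.5)])
      conn.executemany("INSERT INTO smallsat_config VALUES (1, ?)", [(1,), (2,)])

      # Total mass and power of the ordered configuration.
      row = conn.execute("""
          SELECT s.name, SUM(h.mass_kg), SUM(h.power_w)
          FROM smallsat s
          JOIN smallsat_config c ON c.sat_id = s.sat_id
          JOIN hardware_component h ON h.component_id = c.component_id
          GROUP BY s.sat_id
      """).fetchone()
      print(row)   # ('DemoSat-1', 2.0, 10.5)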

  11. Hierarchical Fuzzy Sets To Query Possibilistic Databases

    OpenAIRE

    Thomopoulos, Rallou; Buche, Patrice; Haemmerlé, Ollivier

    2008-01-01

    Within the framework of flexible querying of possibilistic databases, based on the fuzzy set theory, this chapter focuses on the case where the vocabulary used both in the querying language and in the data is hierarchically organized, which occurs in systems that use ontologies. We give an overview of previous works concerning two issues: firstly, flexible querying of imprecise data in the relational model; secondly, the introduction of fuzziness in hierarchies. Concerning the latter point, w...

  12. An XCT image database system

    International Nuclear Information System (INIS)

    Komori, Masaru; Minato, Kotaro; Koide, Harutoshi; Hirakawa, Akina; Nakano, Yoshihisa; Itoh, Harumi; Torizuka, Kanji; Yamasaki, Tetsuo; Kuwahara, Michiyoshi.

    1984-01-01

    In this paper, an expansion of an X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alpha-numeric information (locations, diagnoses and so on) for more than 15,000 cases, and for some of them we add tree-structured image data, which offers flexibility for various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  13. The Danish Fetal Medicine database

    DEFF Research Database (Denmark)

    Ekelund, Charlotte; Kopp, Tine Iskov; Tabor, Ann

    2016-01-01

    trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units’ Astraia databases to the central database via...... analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database...

  14. Navigation in medical Internet image databases.

    Science.gov (United States)

    Frankewitsch, T; Prokosch, U

    2001-01-01

    The world wide web (WWW) changes common ideas of database access. Hypertext Markup Language allows the simultaneous presentation of information from different sources such as static pages, results of queries from databases, or dynamically generated pages. Therefore, the metaphor of the WWW itself as a database was proposed by Mendelzon and Milo in 1998. Against this background, the techniques of navigation within WWW databases and the semantic types of their queries have been analysed. Forty-eight image repositories of different types and content, but all concerning medical essence, have been found by search engines. Many different techniques are offered to enable navigation, ranging from simple HTML link lists to complex applets. The applets in particular promise an improvement for navigation. Within the meta-information for querying, only ACR and UMLS encoding were found, but not standardized vocabularies like ICD10 or Terminologia Anatomica. UMLS especially shows that a well-defined thesaurus can improve navigation. However, of the analysed databases only the UMLS 'metathesaurus' is currently implemented, without providing additional navigation support based on the UMLS 'semantic network'. Including the information about relationships between the concepts of the metathesaurus, or using the UMLS semantic network, could provide a much easier navigation within a network of concepts pointing to multimedia files stored somewhere in the WWW.

  15. Dansk kolorektal Cancer Database

    DEFF Research Database (Denmark)

    Harling, Henrik; Nickelsen, Thomas

    2005-01-01

    The Danish Colorectal Cancer Database was established in 1994 with the purpose of monitoring whether diagnostic and surgical principles specified in the evidence-based national guidelines of good clinical practice were followed. Twelve clinical indicators have been listed by the Danish Colorectal...... Cancer Group, and the performance of each hospital surgical department with respect to these indicators is reported annually. In addition, the register contains a large collection of data that provide valuable information on the influence of comorbidity and lifestyle factors on disease outcome...

  16. Usability in Scientific Databases

    Directory of Open Access Journals (Sweden)

    Ana-Maria Suduc

    2012-07-01

    Full Text Available Usability, most often defined as the ease of use and acceptability of a system, affects the users' performance and their job satisfaction when working with a machine. Therefore, usability is a very important aspect which must be considered in the process of system development. The paper presents numerical data related to the history of scientific research on the usability of information systems, as reflected in the information provided by three important scientific databases, Science Direct, ACM Digital Library and IEEE Xplore Digital Library, for different queries related to this field.

  17. EMU Lessons Learned Database

    Science.gov (United States)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool which will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs) which have information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but someone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is to not only compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  18. Social Capital Database

    DEFF Research Database (Denmark)

    Paldam, Martin; Svendsen, Gert Tinggaard

    2005-01-01

    This report has two purposes: The first purpose is to present our 4-page questionnaire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. Also it is easy to apply as discussed. The second purpose ...... is to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys....

  19. Opportunities and challenges of using diagnostic databases for monitoring livestock diseases in Denmark

    DEFF Research Database (Denmark)

    Lopes Antunes, Ana Carolina; Hisham Beshara Halasa, Tariq; Toft, Nils

    Husbandry Register (CHR), Meat inspection database for cattle and swine, mortality database and movement database. These databases are owned by the Ministry of Food, Agriculture and Fisheries. Other databases, such as the Danish Cattle Database, are owned by the agricultural sector. In addition......, and by comparing the predictions of models with previous diseases events in Denmark. A further challenge is to identify the most adequate surveillance timescale (i.e. daily, weekly or monthly basis) as well as suitable spatial distances, in order to identify outlier events when the features of the alarm (e...

  20. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database License: License to Use This Database. Last updated: 2014/02/04. You may use this database in compliance with the terms and conditions of the license described below. The license s...

  1. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  2. The Spatial Politics of Spatial Representation

    DEFF Research Database (Denmark)

    Olesen, Kristian; Richardson, Tim

    2011-01-01

    This paper explores the interplay between the spatial politics of new governance landscapes and innovations in the use of spatial representations in planning. The central premise is that planning experiments with new relational approaches become enmeshed in spatial politics. The case of strategic...... spatial planning in Denmark reveals how fuzzy spatial representations and relational spatial concepts are being used to depoliticise strategic spatial planning processes and to camouflage spatial politics. The paper concludes that, while relational geography might play an important role in building...

  3. Curcumin Resource Database.

    Science.gov (United States)

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, Curcuma longa being its principal producer. Apart from this, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in the fields of medicine, food technology, etc. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository to access all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents and 176 varieties of C. longa obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked with external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. © The Author(s) 2015. Published by Oxford University Press.

  4. Asbestos Exposure Assessment Database

    Science.gov (United States)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  5. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.
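
    A minimal sketch of the idea, storing file content as a SECUREFILE LOB next to scalar metadata, is shown below using the python-oracledb driver; the connection details, table name, and column layout are assumptions rather than anything from the paper.

      # Sketch (not a definitive implementation): store unstructured file data as a
      # SECUREFILE LOB next to scalar metadata in one Oracle table. Connection
      # details, table and column names are hypothetical.
      import oracledb  # python-oracledb driver

      conn = oracledb.connect(user="science", password="secret", dsn="dbhost/orclpdb1")
      cur = conn.cursor()

      cur.execute("""
          CREATE TABLE level0_files (
              file_id   NUMBER PRIMARY KEY,
              file_name VARCHAR2(200),
              acquired  TIMESTAMP,
              payload   BLOB
          ) LOB (payload) STORE AS SECUREFILE
      """)

      # Bind the file content explicitly as a BLOB so large payloads are handled.
      cur.setinputsizes(None, None, oracledb.DB_TYPE_BLOB)
      with open("scan_0001.dat", "rb") as f:
          cur.execute(
              "INSERT INTO level0_files VALUES (:1, :2, SYSTIMESTAMP, :3)",
              [1, "scan_0001.dat", f.read()],
          )
      conn.commit()

      # The same table can now be queried like any other relational data.
      for file_id, name in cur.execute(
              "SELECT file_id, file_name FROM level0_files ORDER BY file_id"):
          print(file_id, name)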

  6. Categorical database generalization in GIS

    NARCIS (Netherlands)

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model,

  7. The USAID Environmental Compliance Database

    Data.gov (United States)

    US Agency for International Development — The Environmental Compliance Database is a record of environmental compliance submissions with their outcomes. Documents in the database can be found by visiting the...

  8. Mobile Source Observation Database (MSOD)

    Science.gov (United States)

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  9. Mobile Source Observation Database (MSOD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental...

  10. Household Products Database: Personal Care

    Science.gov (United States)

    Information is extracted from Consumer Product Information Database ©2001-2017 by DeLima Associates. All rights reserved.

  11. Firebird Database Backup by Serialized Database Table Dump

    Directory of Open Access Journals (Sweden)

    2008-06-01

    Full Text Available This paper presents a simple data dump and load utility for Firebird databases which mimics mysqldump in MySQL. This utility, comprising fb_dump and fb_load for dumping and loading respectively, retrieves each database table using kinterbasdb and serializes the data using the marshal module. This utility has two advantages over the standard Firebird database backup utility, gbak. Firstly, it is able to back up and restore single database tables, which might help to recover corrupted databases. Secondly, the output is in a text-coded format (from the marshal module), making it more resilient than a compressed text backup, as in the case of using gbak.
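
    The core of the described approach can be sketched in a few lines of Python; the DSN, credentials, and table name below are placeholders, and the published fb_dump/fb_load utilities handle all tables plus reloading rather than this single-table example.

      # Minimal sketch of the fb_dump idea: fetch one table via kinterbasdb and
      # serialize the rows with the marshal module. DSN, credentials and the table
      # name are placeholders; the published utility covers all tables and reloading.
      import marshal
      import kinterbasdb  # legacy Firebird driver named in the paper

      def dump_table(dsn, user, password, table, outfile):
          con = kinterbasdb.connect(dsn=dsn, user=user, password=password)
          cur = con.cursor()
          cur.execute("SELECT * FROM %s" % table)
          columns = [d[0] for d in cur.description]    # column names
          # marshal only handles basic types, so non-basic values are stringified.
          rows = [[v if isinstance(v, (int, float, str, bytes, type(None))) else str(v)
                   for v in record]
                  for record in cur.fetchall()]
          with open(outfile, "wb") as f:
              marshal.dump({"table": table, "columns": columns, "rows": rows}, f)
          con.close()

      def load_dump(infile):
          with open(infile, "rb") as f:
              return marshal.load(f)

      if __name__ == "__main__":
          dump_table("localhost:/data/example.fdb", "sysdba", "masterkey",
                     "T_SAMPLE", "T_SAMPLE.dump")
          print(load_dump("T_SAMPLE.dump")["columns"])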

  12. Hydrogen Leak Detection Sensor Database

    Science.gov (United States)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  13. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is increasingly a common practice. To provide high availability and survivability of real-time information, a database replication technology that has the capability to replicate databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MSSQL database server running on Windows. The data will be replicated to MyS...
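
    A simplified one-way copy illustrating heterogeneous replication from an MSSQL source to a MySQL target might look like the following; the connection strings, table, and columns are assumptions, and production replication tools add change data capture, scheduling, and conflict handling.

      # Simplified sketch of one-way heterogeneous replication: read rows from an
      # MSSQL source with pyodbc and upsert them into a MySQL target with
      # mysql-connector-python. Connection details and table layout are assumptions.
      import pyodbc
      import mysql.connector

      src = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=mssql-host;"
          "DATABASE=sales;UID=reader;PWD=secret")
      dst = mysql.connector.connect(host="mysql-host", user="writer",
                                    password="secret", database="sales_replica")

      src_cur = src.cursor()
      dst_cur = dst.cursor()

      src_cur.execute("SELECT order_id, customer, amount FROM dbo.orders")
      upsert = ("INSERT INTO orders (order_id, customer, amount) "
                "VALUES (%s, %s, %s) "
                "ON DUPLICATE KEY UPDATE customer=VALUES(customer), amount=VALUES(amount)")

      while True:
          batch = src_cur.fetchmany(500)          # copy in batches
          if not batch:
              break
          dst_cur.executemany(upsert, [tuple(row) for row in batch])
          dst.commit()

      src.close()
      dst.close()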

  14. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    Akishina, E.P.; Aleksandrov, E.I.; Aleksandrov, I.N.; Filozova, I.A.; Ivanov, V.V.; Zrelov, P.V.; Friese, V.; Mueller, W.

    2014-01-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMSs utilized in physics experiments, including relational and object-oriented DBMSs as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMSs for their development, as well as use cases for the considered databases are suggested.

  15. Database security in the cloud

    OpenAIRE

    Sakhi, Imal

    2012-01-01

    The aim of the thesis is to get an overview of the database services available in the cloud computing environment, investigate the security risks associated with them and propose possible countermeasures to minimize the risks. The thesis also analyzes two cloud database service providers, namely Amazon RDS and Xeround. These two providers were chosen because they are currently amongst the leading cloud database providers and both provide relational cloud databases, which makes ...

  16. Applying spatial thinking in social science research.

    Science.gov (United States)

    Logan, John R; Zhang, Weiwei; Xu, Hongwei

    2010-01-01

    Spatial methods that build upon Geographic Information Systems are spreading quickly across the social sciences. This essay points out that the appropriate use of spatial tools requires more careful thinking about spatial concepts. As easy as it is now to measure distance, it is increasingly important to understand what we think it represents. To interpret spatial patterns, we need spatial theories. We review here a number of key concepts as well as some of the methodological approaches that are now at the disposal of researchers, and illustrate them with studies that reflect the very wide range of problems that use these tools.

  17. A Spatio-Temporal Enhanced Metadata Model for Interdisciplinary Instant Point Observations in Smart Cities

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-02-01

    Full Text Available Due to the incomplete and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, it is a great challenge to share the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing was proposed. The proposed Data Meta-Model (DMM) focused on the spatio-temporal characteristics and formulated a ten-tuple information description structure to provide a unified and spatio-temporal enhanced description of the point observation data. To verify the feasibility of point observation data sharing based on DMM, a prototype system was established, and the performance improvement of the Sensor Observation Service (SOS) for the instant access and insertion of point observation data was realized through the proposed MongoSOS, a Not Only SQL (NoSQL) SOS based on the MongoDB database with the capability of distributed storage. For example, the response time for access and insertion of navigation and positioning data can be kept at the millisecond level. Case studies were conducted, including gas concentration monitoring for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS), used for recording the dynamic observation information. The results demonstrated the versatility and extensibility of the DMM, and the spatio-temporal enhanced sharing for interdisciplinary instant point observations in smart cities.
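
    A small pymongo sketch of the MongoSOS idea is given below; the document fields only gesture at the ten-tuple description structure and are assumptions rather than the paper's exact DMM schema, while the 2dsphere index is a standard MongoDB feature used here to illustrate instant spatial access.

      # Illustrative sketch of storing an instant point observation in MongoDB in
      # the spirit of MongoSOS. The document fields only gesture at the ten-tuple
      # DMM structure; they are assumptions, not the paper's exact schema.
      from datetime import datetime, timezone
      from pymongo import MongoClient, GEOSPHERE

      client = MongoClient("mongodb://localhost:27017")
      obs = client["smart_city"]["point_observations"]

      # A 2dsphere index supports fast spatial queries on GeoJSON locations.
      obs.create_index([("location", GEOSPHERE)])
      obs.create_index("phenomenon_time")

      observation = {
          "identifier": "obs-gas-000123",
          "procedure": "gas-sensor-17",            # observing sensor / procedure
          "observed_property": "CH4_concentration",
          "feature_of_interest": "pipeline-segment-42",
          "phenomenon_time": datetime(2016, 8, 3, 10, 15, tzinfo=timezone.utc),
          "result_time": datetime(2016, 8, 3, 10, 15, 2, tzinfo=timezone.utc),
          "location": {"type": "Point", "coordinates": [114.305, 30.593]},  # lon, lat
          "result": {"value": 4.2, "uom": "ppm"},
          "quality": "raw",
          "provenance": "gas leak emergency-response case study",
      }
      obs.insert_one(observation)

      # Instant access: observations within 500 m of a query point.
      nearby = obs.find({"location": {"$near": {
          "$geometry": {"type": "Point", "coordinates": [114.306, 30.592]},
          "$maxDistance": 500}}}).limit(10)
      for doc in nearby:
          print(doc["identifier"], doc["result"]["value"])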

  18. Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2015-08-01

    Full Text Available Three-dimensional (3D) point analysis and visualization is one of the most effective methods of point cluster detection and segmentation in geospatial datasets. However, serious scattering and clotting characteristics interfere with the visual detection of 3D point clusters. To overcome this problem, this study proposes the use of 3D Voronoi diagrams to analyze and visualize 3D points instead of the original data items. The proposed algorithm computes the clusters of 3D points by applying a set of 3D Voronoi cells to describe and quantify the 3D points. The decomposition of the point clouds of 3D models is guided by the 3D Voronoi cell parameters. The parameter values are mapped from the Voronoi cells to the 3D points to show the spatial pattern and relationships; thus, a 3D point cluster pattern can be highlighted and easily recognized. To capture different cluster patterns, continuous progressive clusters and segmentations are tested. The 3D spatial relationship is shown to facilitate cluster detection. Furthermore, the generated segmentations of real 3D data cases are exploited to demonstrate the feasibility of our approach in detecting different spatial clusters for continuous point cloud segmentation.
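
    A simplified SciPy sketch of the approach computes one Voronoi cell parameter per point (the cell volume) and uses it as a clustering cue; the paper maps a richer set of cell parameters, and the sample data and threshold here are assumptions.

      # Simplified sketch: describe each 3D point by the volume of its Voronoi cell
      # (one of several cell parameters one could map back to the points). Unbounded
      # boundary cells are skipped; the sample data and threshold are assumptions.
      import numpy as np
      from scipy.spatial import Voronoi, ConvexHull

      rng = np.random.default_rng(0)
      cluster = rng.normal(loc=0.0, scale=0.2, size=(200, 3))     # a dense blob
      background = rng.uniform(-2.0, 2.0, size=(200, 3))          # sparse scatter
      points = np.vstack([cluster, background])

      vor = Voronoi(points)

      cell_volume = np.full(len(points), np.nan)
      for i, region_index in enumerate(vor.point_region):
          region = vor.regions[region_index]
          if -1 in region or len(region) == 0:      # unbounded cell on the hull
              continue
          cell_volume[i] = ConvexHull(vor.vertices[region]).volume

      # Small cells indicate locally dense points, i.e. candidate cluster members.
      finite = ~np.isnan(cell_volume)
      threshold = np.nanpercentile(cell_volume, 40)
      dense = finite.copy()
      dense[finite] = cell_volume[finite] < threshold
      candidates = np.where(dense)[0]
      print(f"{len(candidates)} of {len(points)} points flagged as cluster candidates")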

  19. National Geochronological Database

    Science.gov (United States)

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  20. Fingerprint Analysis with Marked Point Processes

    DEFF Research Database (Denmark)

    Forbes, Peter G. M.; Lauritzen, Steffen; Møller, Jesper

    We present a framework for fingerprint matching based on marked point process models. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio for the hypothesis that two observed prints originate from the same finger against the hypothesis that they originate from...... different fingers. Our model achieves good performance on an NIST-FBI fingerprint database of 258 matched fingerprint pairs....
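
    The general shape of a Monte Carlo marginal likelihood ratio computation can be sketched as follows; this is a generic schematic with a toy one-dimensional model standing in for the marked point process, not the efficient algorithm developed in the paper.

      # Generic schematic (not the paper's algorithm): estimate a marginal likelihood
      # by simple Monte Carlo averaging of the likelihood over prior draws, then form
      # the ratio for "same source" versus "different sources". The prior sampler and
      # likelihood function are placeholders for a concrete marked point process model.
      import numpy as np

      def marginal_likelihood(data, sample_prior, likelihood, n_draws=5000, seed=0):
          rng = np.random.default_rng(seed)
          draws = (sample_prior(rng) for _ in range(n_draws))
          return np.mean([likelihood(data, theta) for theta in draws])

      def likelihood_ratio(print_a, print_b, model, n_draws=5000):
          """LR > 1 favours the hypothesis that both prints share one latent source."""
          num = marginal_likelihood((print_a, print_b),
                                    model["sample_common_source"],
                                    model["joint_likelihood"], n_draws)
          den = (marginal_likelihood(print_a, model["sample_source"],
                                     model["single_likelihood"], n_draws)
                 * marginal_likelihood(print_b, model["sample_source"],
                                       model["single_likelihood"], n_draws))
          return num / den

      # A toy 1-D stand-in for a fingerprint "configuration": prints are noisy copies
      # of a latent value; same-source prints share that latent value.
      toy_model = {
          "sample_source": lambda rng: rng.normal(0.0, 1.0),
          "sample_common_source": lambda rng: rng.normal(0.0, 1.0),
          "single_likelihood": lambda x, mu: np.exp(-0.5 * (x - mu) ** 2),
          "joint_likelihood": lambda xy, mu: np.exp(-0.5 * ((xy[0] - mu) ** 2
                                                            + (xy[1] - mu) ** 2)),
      }
      print(likelihood_ratio(0.1, 0.2, toy_model))   # close observations -> LR above 1
      print(likelihood_ratio(-2.0, 2.0, toy_model))  # distant observations -> LR below 1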