WorldWideScience

Sample records for estimating multi-temporal database

  1. Rice yield estimation with multi-temporal Radarsat-2 data

    Science.gov (United States)

    Chen, Chi-Farn; Son, Nguyen-Thanh; Chen, Cheng-Ru

    2015-04-01

    Rice is the most important food crop in Taiwan. Monitoring rice crop yield is thus crucial for agronomic planners to formulate successful strategies to address national food security and rice grain export issues. However, this monitoring task is challenging because rice fields in Taiwan are generally small and fragmented, and the cropping calendar differs from region to region. Satellite-based estimation of rice crop yield therefore requires data with sufficient spatial and temporal resolution. This study aimed to develop models to estimate rice crop yield from multi-temporal Radarsat-2 data (5 m resolution). Data processing was carried out for the first rice cropping season, from February to July 2014, in the western part of Taiwan and consisted of four main steps: (1) constructing time-series backscattering coefficient data, (2) spatiotemporal noise filtering of the time-series data, (3) establishing crop yield models using the time-series backscattering coefficients and in-situ measured yield data, and (4) validating the models using field data and government yield statistics. The results indicated that backscattering behavior varied from region to region due to differences in cultural practices and cropping calendars. The highest correlation (R2 > 0.8) was obtained at the ripening period. The robustness of the established models was evaluated by comparing the estimated yields with in-situ measured yield data; the comparison showed satisfactory results, with a root mean squared error (RMSE) smaller than 10%. These results were reaffirmed by the correlation analysis between the estimated yields and the government's rice yield statistics (R2 > 0.8). This study demonstrates the advantages of using multi-temporal Radarsat-2 backscattering data for estimating rice crop yields in Taiwan prior to the harvesting period, and the methods are thus proposed for rice yield monitoring in other regions.
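
    The regression step in (3)-(4) can be illustrated with a minimal sketch: a linear fit between ripening-period backscatter and yield, scored by R2 and relative RMSE. The data below are synthetic stand-ins, not the study's Radarsat-2 observations or field measurements.

```python
# Sketch: linear yield model from ripening-period backscatter (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
sigma0 = rng.uniform(-14.0, -8.0, 40)                           # backscatter (dB), hypothetical
yields = 5.0 + 0.25 * (sigma0 + 11.0) + rng.normal(0, 0.2, 40)  # yield (t/ha), hypothetical

a, b = np.polyfit(sigma0, yields, 1)        # fit yield = a * sigma0 + b
pred = a * sigma0 + b

r2 = 1 - np.sum((yields - pred) ** 2) / np.sum((yields - yields.mean()) ** 2)
rel_rmse = np.sqrt(np.mean((yields - pred) ** 2)) / yields.mean() * 100  # % of mean yield
print(f"R2 = {r2:.2f}, relative RMSE = {rel_rmse:.1f}%")
```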

  2. Robust Automated Image Co-Registration of Optical Multi-Sensor Time Series Data: Database Generation for Multi-Temporal Landslide Detection

    Directory of Open Access Journals (Sweden)

    Robert Behling

    2014-03-01

    Reliable multi-temporal landslide detection over longer periods of time requires multi-sensor time series data characterized by high internal geometric stability, as well as high relative and absolute accuracy. For this purpose, a new methodology for fully automated co-registration has been developed, allowing efficient and robust spatial alignment of standard orthorectified data products originating from a multitude of optical satellite remote sensing datasets of varying spatial resolution. The correlation-based co-registration uses globally available, terrain-corrected Landsat Level 1T time series data as the spatial reference, ensuring global applicability. The developed approach has been applied to a multi-sensor time series of 592 remote sensing datasets covering an approximately 12,000 km2 area in Southern Kyrgyzstan (Central Asia) strongly affected by landslides. The database contains images acquired during the last 26 years by Landsat ETM, ASTER, SPOT and RapidEye sensors. Analysis of the spatial shifts obtained from co-registration has revealed sensor-specific misalignments ranging between 5 m and more than 400 m. Overall accuracy assessment of these alignments has resulted in a high relative image-to-image accuracy of 17 m (RMSE) and a high absolute accuracy of 23 m (RMSE) for the whole co-registered database, making it suitable for multi-temporal landslide detection at a regional scale in Southern Kyrgyzstan.
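
    The shift-estimation core of correlation-based co-registration can be sketched with an off-the-shelf phase-correlation routine; this is an illustration of the principle, not the paper's own matcher, and the arrays below are random stand-ins for resampled image chips.

```python
# Sketch: estimate the (row, col) offset of an image chip against a Landsat reference chip.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
reference = rng.random((256, 256))                  # stand-in for a Landsat L1T chip
moving = nd_shift(reference, (3.2, -5.7), order=1)  # same scene offset by a known amount

offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
print("shift to register the moving chip (rows, cols):", offset)   # ~(-3.2, 5.7)

# Multiplying the pixel offset by the ground sampling distance gives the metric
# misregistration that the study reports per sensor (5 m to more than 400 m).
```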

  3. Join Operations in Temporal Databases

    DEFF Research Database (Denmark)

    Gao, D.; Jensen, Christian Søndergaard; Snodgrass, R.T.

    2005-01-01

    Joins are arguably the most important relational operators. Poor implementations are tantamount to computing the Cartesian product of the input relations. In a temporal database, the problem is more acute for two reasons. First, conventional techniques are designed for the evaluation of joins with equality predicates rather than the inequality predicates prevalent in valid-time queries. Second, the presence of temporally varying data dramatically increases the size of a database. These factors indicate that specialized techniques are needed to efficiently evaluate temporal joins. We address this need for efficient join evaluation in temporal databases. Our purpose is twofold. We first survey all previously proposed temporal join operators. While many temporal join operators have been defined in previous work, this work has been done largely in isolation from competing proposals, with little...
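
    The defining feature of a valid-time join can be shown with a toy sketch: tuples join on an equality attribute plus an interval-overlap predicate, and the result carries the intersected validity period. The relations below are hypothetical and unrelated to the operators surveyed in the paper.

```python
# Sketch: a naive valid-time join -- tuples match when their validity periods overlap.
from datetime import date

emp = [("Ann", "Sales", date(2004, 1, 1), date(2004, 6, 30)),
       ("Ann", "R&D",   date(2004, 7, 1), date(2004, 12, 31))]
sal = [("Ann", 50_000,  date(2004, 1, 1), date(2004, 9, 30)),
       ("Ann", 55_000,  date(2004, 10, 1), date(2004, 12, 31))]

def valid_time_join(r, s):
    out = []
    for name_r, dept, r_start, r_end in r:
        for name_s, pay, s_start, s_end in s:
            if name_r == name_s and r_start <= s_end and s_start <= r_end:
                # equality predicate on name, overlap predicate on valid time
                out.append((name_r, dept, pay, max(r_start, s_start), min(r_end, s_end)))
    return out

for row in valid_time_join(emp, sal):
    print(row)
```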

  4. MULTI-TEMPORAL ANALYSIS OF LANDSCAPES AND URBAN AREAS

    Directory of Open Access Journals (Sweden)

    E. Nocerino

    2012-07-01

    This article presents a 4D modelling approach that employs multi-temporal and historical aerial images to derive spatio-temporal information for scenes and landscapes. Such imagery represents a unique data source which, combined with photo interpretation and reality-based 3D reconstruction techniques, can offer a more complete modelling procedure, because it adds the fourth dimension of time to the 3D geometrical representation and thus allows urban planners, historians, and others to identify, describe, and analyse changes in individual scenes and buildings as well as across landscapes. Particularly important to this approach are historical aerial photos, which provide data about the past that can be collected, processed, and then integrated as a database. The proposed methodology employs both historical (1945) and more recent (1973 and 2000s) aerial images from the Trentino region in North-eastern Italy in order to create a multi-temporal database of information to assist researchers in many disciplines, such as topographic mapping, geology, geography, architecture, and archaeology, as they work to reconstruct building phases and to understand landscape transformations (Fig. 1).

  5. Building a multi-scaled geospatial temporal ecology database from disparate data sources: Fostering open science through data reuse

    Science.gov (United States)

    Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lottig, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated

  6. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    Science.gov (United States)

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km(2)). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated

  7. Multimodality medical image database for temporal lobe epilepsy

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-05-01

    This paper presents the development of a human brain multi-modality database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgery assessment, such as the non-verbal Wechsler memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication matches the neurosurgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal correlations between the contents of different data modalities and the outcome of the surgery that are otherwise difficult to see.

  8. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in the world is bounded by space and time. Our daily activities are closely linked to other objects in our vicinity; our current location, the time (past, present and future) and the events through which we move as objects all affect our activities. Ontology development and its integration with databases are therefore vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content and present our methodology for capturing spatio-temporal ontological aspects and transforming them into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of cultivated land parcels used for agriculture, exhibiting the spatio-temporal behaviour of agricultural land and related entities. Moreover, the framework provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is able to capture ontological and, to some extent, epistemological commitments, to build a spatio-temporal ontology and to transform it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)
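
    A minimal relational sketch of the kind of table such a model targets is shown below: a cultivated land parcel whose crop state carries an explicit valid-time period, queried with a timeslice predicate. The schema and column names are illustrative assumptions, not the authors' design, and SQLite is used only for brevity.

```python
# Sketch: a land parcel's crop history stored with valid-time columns in SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE land_parcel_state (
    parcel_id    INTEGER,
    crop         TEXT,
    boundary_wkt TEXT,   -- geometry placeholder; a spatial extension would index this
    valid_from   TEXT,   -- ISO dates standing in for the temporal dimension
    valid_to     TEXT
);
INSERT INTO land_parcel_state VALUES
 (42, 'wheat',  'POLYGON((...))', '2015-10-01', '2016-06-30'),
 (42, 'fallow', 'POLYGON((...))', '2016-07-01', '2016-09-30');
""")

# Timeslice query: what was grown on parcel 42 on a given date?
row = con.execute("""
    SELECT crop FROM land_parcel_state
    WHERE parcel_id = 42 AND valid_from <= '2016-02-15' AND valid_to >= '2016-02-15'
""").fetchone()
print(row)   # ('wheat',)
```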

  9. Research on spatio-temporal database techniques for spatial information service

    Science.gov (United States)

    Zhao, Rong; Wang, Liang; Li, Yuxiang; Fan, Rongshuang; Liu, Ping; Li, Qingyuan

    2007-06-01

    Geographic data should be described by spatial, temporal and attribute components, but spatio-temporal queries are difficult to answer within current GIS. This paper describes research into the development and application of a spatio-temporal data management system based upon the GeoWindows GIS software platform developed by the Chinese Academy of Surveying and Mapping (CASM). Facing the current practical requirements of spatial information applications, and building on the existing GIS platform, a spatio-temporal data model that integrates vector and grid data was first established. Secondly, we addressed the key technique of building temporal data topology and developed a suite of spatio-temporal database management tools using object-oriented methods. The system provides functions for temporal data collection, storage, management, display and query. Finally, as a case study, we applied the spatio-temporal data management system to administrative region data of China spanning multiple historical periods. Through these efforts, the capacity of GIS to manage and manipulate temporal and attribute data has been enhanced, and a technical reference has been provided for the further development of temporal geographic information systems (TGIS).

  10. Age and gender estimation using Region-SIFT and multi-layered SVM

    Science.gov (United States)

    Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun

    2018-04-01

    In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction. In this step, we introduce the region-SIFT feature extraction method based on facial landmarks: we first define sub-regions of the face and then extract SIFT features from each sub-region. In order to reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVMs) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collected a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was carried out on the FERET, CACD, and DGIST_C databases. The experimental results demonstrate that the proposed approach classifies age and estimates gender very efficiently and accurately.
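
    The layered idea can be sketched in a few lines: a first SVM predicts gender, and a separate age classifier trained per gender is then applied. The random features below stand in for the reduced region-SIFT vectors; this is an illustration of the structure, not the paper's trained models.

```python
# Sketch: two-layer SVM -- gender first, then a per-gender age-group classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))        # stand-in for PCA/LDA-reduced region-SIFT features
gender = rng.integers(0, 2, 400)      # synthetic 0/1 gender labels
age_group = rng.integers(0, 4, 400)   # four synthetic age bins

gender_clf = SVC(kernel="rbf").fit(X, gender)
age_clf = {g: SVC(kernel="rbf").fit(X[gender == g], age_group[gender == g]) for g in (0, 1)}

def predict(x):
    x = x.reshape(1, -1)
    g = int(gender_clf.predict(x)[0])         # first layer: gender
    return g, int(age_clf[g].predict(x)[0])   # second layer: age, conditioned on gender

print(predict(X[0]))
```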

  11. Using Urban Landscape Trajectories to Develop a Multi-Temporal Land Cover Database to Support Ecological Modeling

    Directory of Open Access Journals (Sweden)

    Marina Alberti

    2009-12-01

    Urbanization and the resulting changes in land cover have myriad impacts on ecological systems. Monitoring these changes across large spatial extents and long time spans requires synoptic remotely sensed data with an appropriate temporal sequence. We developed a multi-temporal land cover dataset for a six-county area surrounding the Seattle, Washington State, USA, metropolitan region. Land cover maps for 1986, 1991, 1995, 1999, and 2002 were developed from Landsat TM images through a combination of spectral unmixing, image segmentation, multi-season imagery, and supervised classification approaches to differentiate an initial nine land cover classes. We then used ancillary GIS layers and temporal information to define trajectories of land cover change through multiple updating and backdating rules, and refined our land cover classification for each date into 14 classes. We compared the accuracy of the initial approach with the landscape trajectory modifications and determined that the use of landscape trajectory rules increased our ability to differentiate several classes, including bare soil (separated into cleared for development, agriculture, and clearcut forest) and three intensities of urban. Using the temporal dataset, we found that between 1986 and 2002, urban land cover increased from 8 to 18% of our study area, while lowland deciduous and mixed forests decreased from 21 to 14%, and grass and agriculture decreased from 11 to 8%. The intensity of urban land cover also increased, with Heavy Urban growing from 252 km2 in 1986 to 629 km2 by 2002. The ecological systems present in this region were likely significantly altered by these changes in land cover. Our results suggest that multi-temporal (i.e., multiple years and multiple seasons within years) Landsat data are an economical means to quantify land cover and land cover change across large and highly heterogeneous urbanizing landscapes. Our data, and similar temporal land cover change
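
    A trajectory rule of the kind described can be illustrated with a toy function that relabels an ambiguous bare-soil pixel using the classes observed before and after it. The rule and class names here are illustrative assumptions, not the study's actual rule set.

```python
# Sketch: refine a per-date "bare soil" label using its land-cover trajectory.
def refine_bare_soil(prev_class, next_class):
    """Return a refined label for a pixel mapped as bare soil at date t."""
    if next_class == "urban":
        return "cleared for development"   # backdating: the pixel soon becomes urban
    if prev_class == "forest":
        return "clearcut forest"           # updating: the pixel was forest before
    return "agriculture"                   # default for persistently open land

trajectory = {"1995": "forest", "1999": "bare soil", "2002": "urban"}
print(refine_bare_soil(trajectory["1995"], trajectory["2002"]))   # cleared for development
```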

  12. Spatio-temporal databases complex motion pattern queries

    CERN Document Server

    Vieira, Marcos R

    2013-01-01

    This brief presents several new query processing techniques, called complex motion pattern queries, specifically designed for very large spatio-temporal databases of moving objects. The brief begins with the definition of flexible pattern queries, which are powerful because of the integration of variables and motion patterns. This is followed by a summary of the expressive power of patterns and the flexibility of pattern queries. The brief then presents the Spatio-Temporal Pattern System (STPS) and density-based pattern queries. STPS databases contain millions of records with information about mobi

  13. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, WIm; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  14. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. Reducing the rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
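
    The dimensionality-reduction step can be sketched with PyWavelets: keep only the low-order approximation coefficients of a rainfall series and reconstruct from them. The wavelet, decomposition level, and synthetic series below are illustrative assumptions, not the study's configuration.

```python
# Sketch: represent a rainfall series by its DWT approximation coefficients only.
import numpy as np
import pywt

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 2.0, 512)               # synthetic hourly rainfall depths (mm)

coeffs = pywt.wavedec(rain, "db4", level=4)   # [cA4, cD4, cD3, cD2, cD1]
n_kept = len(coeffs[0])                       # approximation coefficients to be estimated

# Zeroing the detail coefficients leaves the low-dimensional representation that an
# MCMC sampler could estimate jointly with the rainfall-runoff model parameters.
reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
rain_approx = pywt.waverec(reduced, "db4")[: len(rain)]   # smoothed reconstruction

print(f"{len(rain)} hourly values summarised by {n_kept} coefficients")
```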

  15. MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH

    Data.gov (United States)

    National Aeronautics and Space Administration — MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH VARUN CHANDOLA AND RANGA RAJU VATSAVAI Abstract. Multispectral remote sensing images have...

  16. MULTI-TEMPORAL SAR INTERFEROMETRY FOR LANDSLIDE MONITORING

    Directory of Open Access Journals (Sweden)

    R. Dwivedi

    2016-06-01

    In the past few years, SAR interferometry, especially InSAR and D-InSAR, has been used extensively for deformation monitoring applications. Due to temporal and spatial decorrelation in densely vegetated areas, the effectiveness of InSAR and D-InSAR observations has always been under scrutiny. Multi-temporal InSAR methods have been developed in recent times to retrieve the deformation signal from pixels with different scattering characteristics. Presently, two classes of multi-temporal InSAR algorithms are available: Persistent Scatterer (PS) and Small Baseline (SB) methods. This paper discusses the Stanford Method for Persistent Scatterers (StaMPS) based PS-InSAR and the Small Baseline Subset (SBAS) techniques to estimate surface deformation in the Tehri dam reservoir region in Uttarakhand, India. The PS-InSAR and SBAS approaches used sixteen ENVISAT ASAR C-band images to generate single-master and multiple-master interferogram stacks, respectively, and their StaMPS processing resulted in time-series 1D line-of-sight (LOS) mean velocity maps, which indicate deformation in terms of movement towards and away from the satellite. From the 1D LOS velocity maps, localization of landslides is evident along the reservoir rim area, which was also investigated in previous studies. Both PS-InSAR and SBAS effectively extract measurement pixels in the study region, and the general results provided by both approaches show a similar deformation pattern along the Tehri reservoir region. Further, we conclude that the StaMPS-based PS-InSAR method performs better than SBAS in terms of extracting a larger number of measurement pixels and in the estimation of mean line-of-sight (LOS) velocity. It is also proposed to take up a few major landslide areas in Uttarakhand for slope stability assessment.
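
    The final quantity reported by either approach, a mean LOS velocity per measurement pixel, can be sketched as a conversion from unwrapped phase to LOS displacement followed by a linear fit. The wavelength is the standard ENVISAT ASAR C-band value; the phase series below is synthetic, not output of the StaMPS chain.

```python
# Sketch: mean LOS velocity for one pixel from an unwrapped-phase time series.
import numpy as np

wavelength = 0.0562                       # ENVISAT ASAR C-band wavelength (m)
t_years = np.linspace(0.0, 3.0, 16)       # acquisition epochs (years), synthetic
true_disp = -0.008 * t_years              # 8 mm/yr motion away from the satellite
phase = -4 * np.pi / wavelength * true_disp + np.random.default_rng(0).normal(0, 0.3, 16)

los_disp = -wavelength / (4 * np.pi) * phase          # metres, positive towards satellite
vel, _ = np.polyfit(t_years, los_disp, 1)             # linear trend = mean LOS velocity
print(f"mean LOS velocity: {vel * 1000:.1f} mm/yr")   # approx. -8 mm/yr
```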

  17. Design of multi-tiered database application based on CORBA component

    International Nuclear Information System (INIS)

    Sun Xiaoying; Dai Zhimin

    2003-01-01

    As computer technology develops rapidly, middleware technology has changed the traditional two-tier database system. The multi-tiered database system, consisting of client application programs, application servers and database servers, is now widely applied, and building multi-tiered database systems using CORBA components has become the mainstream technique. In this paper, an example of the DUV-FEL database system is presented, and the realization of a multi-tiered database based on CORBA components is discussed. (authors)

  18. Initial spatio-temporal domain expansion of the Modelfest database

    Science.gov (United States)

    Carney, Thom; Mozaffari, Sahar; Sun, Sean; Johnson, Ryan; Shirvastava, Sharona; Shen, Priscilla; Ly, Emma

    2013-03-01

    The first Modelfest group publication appeared in the SPIE Human Vision and Electronic Imaging conference proceedings in 1999. "One of the group's goals is to develop a public database of test images with threshold data from multiple laboratories for designing and testing HVS (Human Vision Models)." After extended discussions the group selected a set of 45 static images thought to best meet that goal and collected psychophysical detection data, which is available on the web and presented in the 2000 SPIE conference proceedings. Several groups have used these datasets to test spatial modeling ideas. Further discussions led to a preliminary stimulus specification for extending the database into the temporal domain, which was published in the 2002 conference proceedings. After a hiatus of 12 years, some of us have collected spatio-temporal thresholds on an expanded stimulus set of 41 video clips; the original specification included 35 clips. The principal change involved adding one additional spatial pattern beyond the three originally specified. The stimuli consisted of 4 spatial patterns: a Gaussian blob, a 4 c/d Gabor patch, an 11.3 c/d Gabor patch and a 2D white noise patch. Across conditions the patterns were temporally modulated over a range of approximately 0-25 Hz, with temporal edge and pulse modulation conditions also included. The display and data collection specifications were as specified by the Modelfest group in the 2002 conference proceedings. To date seven subjects have participated in this phase of the data collection effort, one of whom also participated in the first phase of Modelfest. Three of the spatio-temporal stimuli were identical to conditions in the original static dataset. Small differences in the thresholds were evident and may point to a stimulus limitation. The temporal CSF peaked between 4 and 8 Hz for the 0 c/d (Gaussian blob) and 4 c/d patterns. The 4 c/d and 11.3 c/d Gabor temporal CSFs were low pass, while the 0 c/d pattern was band pass. This

  19. Querying temporal databases via OWL 2 QL

    CSIR Research Space (South Africa)

    Klarman, S

    2014-06-01

    Full Text Available SQL:2011, the most recently adopted version of the SQL query language, has unprecedentedly standardized the representation of temporal data in relational databases. Following the successful paradigm of ontology-based data access, we develop a...

  20. The trend of the multi-scale temporal variability of precipitation in Colorado River Basin

    Science.gov (United States)

    Jiang, P.; Yu, Z.

    2011-12-01

    Hydrological problems like the estimation of flood and drought frequencies under future climate change are not well addressed, as a result of the inability of current climate models to provide reliable predictions (especially for precipitation) at time scales shorter than 1 month. In order to assess the possible impacts that the multi-scale temporal distribution of precipitation may have on the hydrological processes in the Colorado River Basin (CRB), a comparative analysis of the multi-scale temporal variability of precipitation as well as the trend of extreme precipitation is conducted in four regions controlled by different climate systems. Multi-scale precipitation variability, including within-storm patterns and intra-annual, inter-annual and decadal variabilities, will be analyzed to explore the possible trends of storm durations, inter-storm periods, average storm precipitation intensities and extremes under both long-term natural climate variability and human-induced warming. Furthermore, we will examine the ability of current climate models to simulate the multi-scale temporal variability and extremes of precipitation. On the basis of these analyses, a statistical downscaling method will be developed to disaggregate the future precipitation scenarios, providing more reliable and finer temporal scale precipitation time series for hydrological modeling. Analysis results and downscaling results will be presented.

  1. A comparison of multi-spectral, multi-angular, and multi-temporal remote sensing datasets for fractional shrub canopy mapping in Arctic Alaska

    Science.gov (United States)

    Selkowitz, D.J.

    2010-01-01

    Shrub cover appears to be increasing across many areas of the Arctic tundra biome, and increasing shrub cover in the Arctic has the potential to significantly impact global carbon budgets and the global climate system. For most of the Arctic, however, there is no existing baseline inventory of shrub canopy cover, as existing maps of Arctic vegetation provide little information about the density of shrub cover at a moderate spatial resolution across the region. Remotely-sensed fractional shrub canopy maps can provide this necessary baseline inventory of shrub cover. In this study, we compare the accuracy of fractional shrub canopy (> 0.5 m tall) maps derived from multi-spectral, multi-angular, and multi-temporal datasets from Landsat imagery at 30 m spatial resolution, Moderate Resolution Imaging SpectroRadiometer (MODIS) imagery at 250 m and 500 m spatial resolution, and MultiAngle Imaging Spectroradiometer (MISR) imagery at 275 m spatial resolution for a 1067 km2 study area in Arctic Alaska. The study area is centered at 69° N, ranges in elevation from 130 to 770 m, is composed primarily of rolling topography with gentle slopes less than 10°, and is free of glaciers and perennial snow cover. Shrubs > 0.5 m in height cover 2.9% of the study area and are primarily confined to patches associated with specific landscape features. Reference fractional shrub canopy is determined from in situ shrub canopy measurements and a high spatial resolution IKONOS image swath. Regression tree models are constructed to estimate fractional canopy cover at 250 m using different combinations of input data from Landsat, MODIS, and MISR. Results indicate that multi-spectral data provide substantially more accurate estimates of fractional shrub canopy cover than multi-angular or multi-temporal data. Higher spatial resolution datasets also provide more accurate estimates of fractional shrub canopy cover (aggregated to moderate spatial resolutions) than lower spatial resolution datasets.
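
    The regression-tree step can be sketched with a generic tree regressor relating coarse-resolution predictors to fractional canopy cover. The synthetic bands below stand in for the Landsat/MODIS/MISR inputs; this shows the model family, not the study's fitted trees.

```python
# Sketch: regression tree predicting fractional shrub canopy from spectral predictors.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))                      # stand-in for multi-spectral band values
frac = np.clip(0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.02, 500), 0, 1)

model = DecisionTreeRegressor(max_depth=5).fit(X[:400], frac[:400])
pred = model.predict(X[400:])
rmse = np.sqrt(np.mean((pred - frac[400:]) ** 2))
print(f"hold-out RMSE in canopy fraction: {rmse:.3f}")
```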

  2. Interactive Multi-Instrument Database of Solar Flares

    Science.gov (United States)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  3. MIMIC II: a massive temporal ICU patient database to support research in intelligent patient monitoring

    Science.gov (United States)

    Saeed, M.; Lieu, C.; Raber, G.; Mark, R. G.

    2002-01-01

    Development and evaluation of Intensive Care Unit (ICU) decision-support systems would be greatly facilitated by the availability of a large-scale ICU patient database. Following our previous efforts with the MIMIC (Multi-parameter Intelligent Monitoring for Intensive Care) Database, we have leveraged advances in networking and storage technologies to develop a far more massive temporal database, MIMIC II. MIMIC II is an ongoing effort: data is continuously and prospectively archived from all ICU patients in our hospital. MIMIC II now consists of over 800 ICU patient records including over 120 gigabytes of data and is growing. A customized archiving system was used to store continuously up to four waveforms and 30 different parameters from ICU patient monitors. An integrated user-friendly relational database was developed for browsing of patients' clinical information (lab results, fluid balance, medications, nurses' progress notes). Based upon its unprecedented size and scope, MIMIC II will prove to be an important resource for intelligent patient monitoring research, and will support efforts in medical data mining and knowledge-discovery.

  4. Mining approximate temporal functional dependencies with pure temporal grouping in clinical databases.

    Science.gov (United States)

    Combi, Carlo; Mantovani, Matteo; Sabaini, Alberto; Sala, Pietro; Amaddeo, Francesco; Moretti, Ugo; Pozzi, Giuseppe

    2015-07-01

    Functional dependencies (FDs) typically represent associations over facts stored by a database, such as "patients with the same symptom get the same therapy." In more recent years, some extensions have been introduced to represent both temporal constraints (temporal functional dependencies - TFDs), as in "for any given month, patients with the same symptom must have the same therapy, but their therapy may change from one month to the next one," and approximate properties (approximate functional dependencies - AFDs), as in "patients with the same symptom generally have the same therapy." An AFD holds for most of the facts stored by the database, enabling some data to deviate from the defined property: the percentage of data which violate the given property is user-defined. According to this scenario, in this paper we introduce approximate temporal functional dependencies (ATFDs) and use them to mine clinical data. Specifically, we considered the need for deriving new knowledge from psychiatric and pharmacovigilance data. ATFDs may be defined and measured either on temporal granules (e.g., grouping data by day, week, month, year) or on sliding windows (e.g., a fixed-length time interval which moves over the time axis): in this regard, we propose and discuss some specific and efficient data mining techniques for ATFDs. We also developed two running prototypes and showed the feasibility of our proposal by mining two real-world clinical data sets. The clinical interest of the dependencies derived in the psychiatry and pharmacovigilance domains confirms the soundness and the usefulness of the proposed techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
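
    A toy check of an ATFD under pure temporal grouping is sketched below: within each monthly granule, "symptom determines therapy" is allowed to fail for at most a user-defined fraction of tuples. The records and the threshold are illustrative, not the paper's algorithms or data.

```python
# Sketch: testing "symptom -> therapy" per calendar month with a tolerated violation rate.
from collections import Counter, defaultdict

rows = [  # (date, patient, symptom, therapy) -- illustrative records
    ("2014-01-05", "p1", "cough", "A"), ("2014-01-12", "p2", "cough", "A"),
    ("2014-01-20", "p3", "cough", "B"), ("2014-02-03", "p1", "cough", "B"),
    ("2014-02-11", "p4", "fever", "C"), ("2014-02-19", "p5", "fever", "C"),
]

groups = defaultdict(list)
for day, _patient, symptom, therapy in rows:
    groups[(day[:7], symptom)].append(therapy)        # temporal granule = calendar month

# Minimum number of tuples to drop so the TFD holds exactly in every granule.
violations = sum(len(ts) - Counter(ts).most_common(1)[0][1] for ts in groups.values())
error = violations / len(rows)
print(f"violation rate = {error:.2f}; ATFD holds at eps = 0.2: {error <= 0.2}")
```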

  5. Geodetic Control Points - Multi-State Control Point Database

    Data.gov (United States)

    NSGIC State | GIS Inventory — The Multi-State Control Point Database (MCPD) is a database of geodetic and mapping control covering Idaho and Montana. The control were submitted by registered land...

  6. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    Science.gov (United States)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

    This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating its improvement over the single-temporal approach. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, the probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge about future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead time and of when it is most likely to occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operational deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a six-year time series. MCP-MT improves over the original models' forecasts: the peak overestimation and the delayed rising-limb forecast, characterizing MISDc and STAFOM-RCM respectively, are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making, because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.
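
    The decision-relevant quantity mentioned at the end, the probability that a stage threshold is exceeded within the lead time, can be sketched from a Gaussian predictive distribution per lead time. The numbers, the Gaussian form, and the independence assumption across lead times are all illustrative simplifications, not MCP-MT itself.

```python
# Sketch: exceedance probability of a stage threshold from a predictive distribution.
import numpy as np
from scipy.stats import norm

threshold = 4.0                               # hydrometric alarm threshold (m), hypothetical
lead_hours = np.arange(1, 13)
mean_stage = 2.5 + 0.15 * lead_hours          # predictive mean per lead time (m), synthetic
sd_stage = 0.10 + 0.03 * lead_hours           # predictive spread grows with lead time

p_exceed = norm.sf(threshold, loc=mean_stage, scale=sd_stage)   # P(stage > threshold) per hour
p_within_horizon = 1 - np.prod(1 - p_exceed)  # crude aggregation assuming independence

peak_hour = lead_hours[np.argmax(p_exceed)]
print(f"P(exceedance within 12 h) ~ {p_within_horizon:.2f}, highest at +{peak_hour} h")
```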

  7. A Multi-Temporal Remote Sensing Approach to Freshwater Turtle Conservation

    Science.gov (United States)

    Mui, Amy B.

    Freshwater turtles are a globally declining taxon, and estimates of population status are not available for many species. Primary causes of decline stem from widespread habitat loss and degradation, and obtaining spatially explicit information on remaining habitat across a relevant spatial scale has proven challenging. The discipline of remote sensing science has been employed widely in studies of biodiversity conservation, but it has not been utilized as frequently for cryptic and less vagile species such as turtles, despite their vulnerable status. The work presented in this thesis investigates how multi-temporal remote sensing imagery can contribute key information for building spatially explicit and temporally dynamic models of habitat and connectivity for the threatened Blanding's turtle (Emydoidea blandingii) in southern Ontario, Canada. I began by outlining a methodological approach for delineating freshwater wetlands from high spatial resolution remote sensing imagery using a geographic object-based image analysis (GEOBIA) approach. This method was applied to three different landscapes in southern Ontario, and across two biologically relevant seasons during the active (non-hibernating) period of Blanding's turtles. Next, relevant environmental variables associated with turtle presence were extracted from remote sensing imagery, and a boosted regression tree model was developed to predict the probability of occurrence of this species. Finally, I analysed the movement potential for Blanding's turtles in a disturbed landscape using a combination of approaches. Results indicate that (1) a parsimonious GEOBIA approach to land cover mapping, incorporating texture, spectral indices, and topographic information, can map heterogeneous land cover with high accuracy, (2) remote-sensing-derived environmental variables can be used to build habitat models with strong predictive power, and (3) connectivity potential is best estimated using a variety of approaches.

  8. Multi-temporal Land Use Mapping of Coastal Wetlands Area using Machine Learning in Google Earth Engine

    Science.gov (United States)

    Farda, N. M.

    2017-12-01

    Coastal wetlands provide ecosystem services essential to people and the environment. Changes in coastal wetlands, especially in land use, are important to monitor by utilizing multi-temporal imagery. Google Earth Engine (GEE) provides many machine learning algorithms (10 algorithms) that are very useful for extracting land use from imagery. The research objective is to explore machine learning in Google Earth Engine and its accuracy for multi-temporal land use mapping of a coastal wetland area. Landsat 3 MSS (1978), Landsat 5 TM (1991), Landsat 7 ETM+ (2001), and Landsat 8 OLI (2014) images located in the Segara Anakan lagoon were selected to represent multi-temporal images. The inputs for machine learning are the visible and near-infrared bands, PCA bands, inverse PCA bands, bare soil index, vegetation index, wetness index, elevation from the ASTER GDEM, GLCM (Haralick) texture, and polygon samples at 140 locations. Ten machine learning algorithms were applied to extract coastal wetland land use from the Landsat imagery: Fast Naive Bayes, CART (Classification and Regression Tree), Random Forests, GMO Max Entropy, Perceptron (Multi Class Perceptron), Winnow, Voting SVM, Margin SVM, Pegasos (Primal Estimated sub-GrAdient SOlver for SVM), and IKPamir (Intersection Kernel Passive Aggressive Method for Information Retrieval, SVM). Machine learning in Google Earth Engine is very helpful for multi-temporal land use mapping; the highest accuracy for land use mapping of the coastal wetland was obtained with CART, at 96.98% overall accuracy using K-fold cross-validation (K = 10). GEE is particularly useful for multi-temporal land use mapping, with ready-to-use imagery and classification algorithms, and it also presents interesting challenges for other applications.
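
    An offline analogue of the best-performing configuration, a CART classifier scored by 10-fold cross-validation, is sketched below with scikit-learn rather than the Earth Engine API, on synthetic pixels; it shows the evaluation protocol, not the study's 96.98% result.

```python
# Sketch: CART land-use classification scored with 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((700, 12))        # stand-in for bands, indices, DEM elevation, GLCM texture
y = rng.integers(0, 5, 700)      # five hypothetical land-use classes

scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=10)   # K-fold with K = 10
print(f"overall accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```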

  9. Review of multi-physics temporal coupling methods for analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Zerkak, Omar; Kozlowski, Tomasz; Gajev, Ivan

    2015-01-01

    Highlights: • Review of the numerical methods used for multi-physics temporal coupling. • Review of high-order improvements to the operator splitting coupling method. • Analysis of the truncation error due to the temporal coupling. • Recommendations on best-practice approaches for multi-physics temporal coupling. - Abstract: The advanced numerical simulation of a realistic physical system typically involves a multi-physics problem. For example, analysis of an LWR core involves the intricate simulation of neutron production and transport, heat transfer throughout the structures of the system, and the flowing, possibly two-phase, coolant. Such analysis involves the dynamic coupling of multiple simulation codes, each one devoted to solving one of the coupled physics. Multiple temporal coupling methods exist, yet the accuracy of such coupling is generally driven by the least accurate numerical scheme. The goal of this paper is to review in detail the approaches and numerical methods that can be used for multi-physics temporal coupling, including a comprehensive discussion of the issues associated with the temporal coupling, and to define approaches that can be used to perform multi-physics analysis. The paper is not limited to any particular multi-physics process or situation, but is intended to provide a generic description of multi-physics temporal coupling schemes for any development stage of the individual (single-physics) tools and methods. This includes a wide spectrum of situations, where the individual (single-physics) solvers are based on pre-existing computation codes embedded as individual components, or on new developments where the temporal coupling can be designed and implemented as part of the code development. The discussed coupling methods are demonstrated in the framework of LWR core analysis.
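
    The operator splitting idea named in the highlights can be shown on a minimal linear problem: each "physics" operator advances the state separately over a time step, and the difference from the fully coupled solution is the splitting error. This is a generic illustration, not any reactor code or the review's test cases.

```python
# Sketch: first-order (Lie) operator splitting for dy/dt = (A1 + A2) y.
import numpy as np
from scipy.linalg import expm

A1 = np.array([[-1.0, 0.5], [0.0, 0.0]])   # "physics 1" acting on the state, hypothetical
A2 = np.array([[0.0, 0.0], [0.3, -0.2]])   # "physics 2", hypothetical
y0 = np.array([1.0, 0.0])
dt, n_steps = 0.05, 40

y_split = y0.copy()
for _ in range(n_steps):
    y_split = expm(A1 * dt) @ y_split      # solve physics 1 alone over dt
    y_split = expm(A2 * dt) @ y_split      # then physics 2, using the updated state

y_exact = expm((A1 + A2) * dt * n_steps) @ y0
print("splitting error:", np.abs(y_split - y_exact))   # shrinks as O(dt) for Lie splitting
```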

  10. SHYREG, a national database of flood frequency estimation

    Directory of Open Access Journals (Sweden)

    Arnaud Patrick

    2016-01-01

    The SHYREG method is a regionalized method for rainfall and flood frequency analysis (FFA). It is based on process simulation: it couples an hourly rainfall generator with a rainfall-runoff model that is simplified enough to be regionalized. The method has been calibrated using all hydro-meteorological data available at the national level; in France, that represents about 2800 rain gauges of the French Weather Service network and about 1800 stations of the national hydrometric network. The method has then been regionalized to provide a database of rainfall and flow quantiles. An evaluation of the method was carried out in several theses and more recently in the ANR project Extraflo, with the aim of comparing different FFA approaches. The accuracy of the method in estimating rainfall and flow quantiles has been demonstrated, as well as its stability, which is due to a parameterization based on average values. The link with rainfall appears preferable to extrapolation based solely on flow data. Thus, another advantage of the method is that it takes extreme flood behaviour into account with the help of rainfall frequency estimation. In addition, the approach is implicitly multi-durational, and a single regionalization meets all the needs in terms of hydrological hazard characterisation. For engineering needs, and to avoid repeating the method implementation, the method has been applied over a 50 m resolution mesh to provide a complete flood-quantile database over the French territory, giving regional information on hydrological hazards. However, it is subject to restrictions related to the nature of the method: the SHYREG flows are "natural" and do not take into account specific cases such as basins highly influenced by hydraulic works, flood expansion areas, high snowmelt or karst. Information about these restrictions and about uncertainty estimation is provided with this database, which can be consulted via web access.

  11. Multi-temporal and Dual-polarization Interferometric SAR for Land Cover Type Classification

    Directory of Open Access Journals (Sweden)

    WANG Xinshuang

    2015-05-01

    In order to study SAR land cover classification methods, this paper uses a multi-dimensional combination of temporal, polarization and InSAR data. The area covered by spaceborne ALOS PALSAR data in Xunke County, Heilongjiang Province was chosen as the test site. An SVM land cover classification technique based on multi-temporal, multi-polarization and InSAR data is proposed, exploiting the sensitivity of multi-temporal, multi-polarization SAR data and InSAR measurements to land cover type, and combining the time-series characteristics of the backscatter coefficient and the correlation coefficient to identify ground objects. The results showed that the confusion between forest land and urban construction land can be largely resolved by using the correlation coefficient between HH and HV together with the selected temporal, polarization and InSAR characteristics. A land cover classification result with higher accuracy is obtained using the classification algorithm proposed in this paper.

  12. Multi-dimensional database design and implementation of dam safety monitoring system

    Directory of Open Access Journals (Sweden)

    Zhao Erfeng

    2008-09-01

    To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logical design was produced in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations and reviewing entities. The conversion of relations among entities to foreign keys, and of entities and physical attributes to tables and fields, is explained in full. On this basis, a multi-dimensional database supporting the management and analysis of dam safety monitoring data has been established, for which fact tables and dimension tables have been designed. Finally, based on service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that the multi-dimensional database can simplify the development process and reduce hidden risks in the database structure design. It is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
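
    The fact/dimension split can be sketched with a tiny star schema: one fact table of readings referencing sensor and time dimensions, rolled up with a GROUP BY. The table and column names are illustrative assumptions, and SQLite is used here only for brevity (the system itself was built with Delphi).

```python
# Sketch: a star schema -- one fact table (readings) plus sensor and time dimensions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_sensor (sensor_id INTEGER PRIMARY KEY, type TEXT, dam_section TEXT);
CREATE TABLE dim_time   (time_id   INTEGER PRIMARY KEY, day TEXT, year INTEGER);
CREATE TABLE fact_reading (
    sensor_id INTEGER REFERENCES dim_sensor(sensor_id),
    time_id   INTEGER REFERENCES dim_time(time_id),
    value     REAL                        -- e.g. seepage, displacement, uplift pressure
);
INSERT INTO dim_sensor VALUES (1, 'piezometer', 'left abutment');
INSERT INTO dim_time   VALUES (1, '2008-09-01', 2008), (2, '2008-09-02', 2008);
INSERT INTO fact_reading VALUES (1, 1, 12.3), (1, 2, 12.9);
""")

# A typical roll-up: yearly average reading per sensor type and dam section.
for row in con.execute("""
    SELECT s.type, s.dam_section, t.year, AVG(f.value)
    FROM fact_reading f
    JOIN dim_sensor s USING (sensor_id)
    JOIN dim_time t USING (time_id)
    GROUP BY s.type, s.dam_section, t.year"""):
    print(row)
```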

  13. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    Science.gov (United States)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  14. VISUALIZATION OF SPATIO-TEMPORAL RELATIONS IN MOVEMENT EVENT USING MULTI-VIEW

    Directory of Open Access Journals (Sweden)

    K. Zheng

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  15. Spatial-temporal data model and fractal analysis of transportation network in GIS environment

    Science.gov (United States)

    Feng, Yongjiu; Tong, Xiaohua; Li, Yangdong

    2008-10-01

    How to organize transportation data characterized by multiple times, scales, resolutions and sources is one of the fundamental problems of GIS-T development. A spatial-temporal data model for GIS-T is proposed based on the Spatial-Temporal-Object Model. Transportation network data are systematically managed using dynamic segmentation technologies, and a spatial-temporal database is built to store multi-epoch geographical data for transportation in an integrated way. Based on the spatial-temporal database, the spatial analysis functions of GIS-T are substantially extended. A fractal module is developed to improve the analysis of the intensity, density, structure and connectivity of the transportation network, based on the validation and evaluation of topological relations. Integrating fractal analysis with GIS-T strengthens the spatial analysis functions and enriches the approaches to data mining and knowledge discovery for transportation networks. Finally, the feasibility of the model and methods is tested through the Guangdong Geographical Information Platform for Highway Project.
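
    The fractal part can be sketched with a box-counting estimate of the fractal dimension of a rasterised network: count occupied boxes at several box sizes and fit a log-log slope. The toy grid below is not an actual road network, and box counting is only one of several fractal measures such a module could use.

```python
# Sketch: box-counting fractal dimension of a binary (rasterised) network image.
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((256, 256), dtype=bool)
grid[128, :] = grid[:, 64] = True                      # toy "network": two crossing lines
grid |= rng.random((256, 256)) < 0.002                 # plus a few isolated pixels

def box_count(img, size):
    h, w = img.shape
    trimmed = img[: h - h % size, : w - w % size]
    blocks = trimmed.reshape(h // size, size, w // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))   # boxes containing network pixels

sizes = np.array([2, 4, 8, 16, 32, 64])
counts = np.array([box_count(grid, s) for s in sizes])
slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
print(f"box-counting dimension ~ {slope:.2f}")
```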

  16. Multiscale spatial and temporal estimation of the b-value

    Science.gov (United States)

    García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.

    2017-12-01

    The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of the seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals having an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), basically consists in computing estimates of the b-value at multiple temporal and spatial scales and extracting, for a given spatio-temporal point, a statistical estimator of the value as well as an indication of the characteristic spatio-temporal scale. This approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties over both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic areas (Central Italy), as well as an example application at the global scale.
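
    For background, the classical closed-form b-value estimate that such mappings generalize is the Aki maximum-likelihood formula, shown below with a first-order uncertainty approximation; MUST-B itself follows Ogata and Katsura (1993), so this is context rather than the method described above.

```latex
% Aki (1965) maximum-likelihood b-value for the N events with magnitude M >= M_c,
% where \bar{M} is their mean magnitude (background formula, not the MUST-B estimator):
\hat{b} = \frac{\log_{10} e}{\bar{M} - M_c},
\qquad
\sigma_{\hat{b}} \approx \frac{\hat{b}}{\sqrt{N}}
```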

  17. Multi-material 3D Models for Temporal Bone Surgical Simulation.

    Science.gov (United States)

    Rose, Austin S; Kimbell, Julia S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Buchman, Craig A

    2015-07-01

    A simulated, multicolor, multi-material temporal bone model can be created using 3-dimensional (3D) printing that will prove both safe and beneficial in training for actual temporal bone surgical cases. As the process of additive manufacturing, or 3D printing, has become more practical and affordable, a number of applications for the technology in the field of Otolaryngology-Head and Neck Surgery have been considered. One area of promise is temporal bone surgical simulation. Three-dimensional representations of human temporal bones were created from temporal bone computed tomography (CT) scans using biomedical image processing software. Multi-material models were then printed and dissected in a temporal bone laboratory by attending and resident otolaryngologists. A 5-point Likert scale was used to grade the models for their anatomical accuracy and suitability as a simulation of cadaveric and operative temporal bone drilling. The models produced for this study demonstrate significant anatomic detail and a likeness to human cadaver specimens for drilling and dissection. Simulated temporal bones created by this process have potential benefit in surgical training, preoperative simulation for challenging otologic cases, and the standardized testing of temporal bone surgical skills. © The Author(s) 2015.

  18. A spatio-temporal landslide inventory for the NW of Spain: BAPA database

    Science.gov (United States)

    Valenzuela, Pablo; Domínguez-Cuesta, María José; Mora García, Manuel Antonio; Jiménez-Sánchez, Montserrat

    2017-09-01

    A landslide database has been created for the Principality of Asturias, NW Spain: the BAPA (Base de datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database). Data collection is mainly performed through searching local newspaper archives. Moreover, a BAPA App and a BAPA website (http://geol.uniovi.es/BAPA) have been developed to obtain additional information from citizens and institutions. Presently, the dataset covers the period 1980-2015, recording 2063 individual landslides. The use of free cartographic servers, such as Google Maps, Google Street View and Iberpix (Government of Spain), combined with the spatial descriptions and pictures contained in the press news, makes it possible to assess different levels of spatial accuracy. In the database, 59% of the records show an exact spatial location, and 51% of the records provided accurate dates, showing the usefulness of press archives as temporal records. Thus, 32% of the landslides show the highest spatial and temporal accuracy levels. The database also gathers information about the type and characteristics of the landslides, the triggering factors and the damage and costs caused. Field work was conducted to validate the methodology used in assessing the spatial location, temporal occurrence and characteristics of the landslides.

  19. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    Science.gov (United States)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study, the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database, we used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information (e.g., location, date, and instrument) were established. Airborne data at different processing levels (digital numbers through geocorrected reflectance) were incorporated into the geospatial database, where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.

  20. Infrastructure assessment for disaster management using multi-sensor and multi-temporal remote sensing imagery

    DEFF Research Database (Denmark)

    Butenuth, Matthias; Frey, Daniel; Nielsen, Allan Aasbjerg

    2011-01-01

    In this paper, a new assessment system is presented to evaluate infrastructure objects such as roads after natural disasters in near-real time. A particular aim is the exploitation of multi-sensorial and multi-temporal imagery together with further GIS data in a comprehensive assessment framework...

  1. Estimation of corn yield using multi-temporal optical and radar satellite data and artificial neural networks

    Science.gov (United States)

    Fieuzal, R.; Marais Sicre, C.; Baup, F.

    2017-05-01

    The yield forecasting of corn constitutes a key issue in agricultural management, particularly in the context of demographic pressure and climate change. This study presents two methods to estimate yields using artificial neural networks: a diagnostic approach based on all the satellite data acquired throughout the agricultural season, and a real-time approach, where estimates are updated after each image is acquired in the microwave and optical domains (Formosat-2, Spot-4/5, TerraSAR-X, and Radarsat-2) throughout the crop cycle. The results are based on the Multispectral Crop Monitoring experimental campaign conducted by the CESBIO (Centre d'Études de la BIOsphère) laboratory in 2010 over an agricultural region in southwestern France. Among the tested sensor configurations (multi-frequency, multi-polarization or multi-source data), the best yield estimation performance (using the diagnostic approach) is obtained with reflectance acquired in the red wavelength region, with a coefficient of determination of 0.77 and an RMSE of 6.6 q ha-1. In the real-time approach, the combination of red reflectance and CHH backscattering coefficients provides the best compromise between the accuracy and earliness of the yield estimate (more than 3 months before the harvest), with an R2 of 0.69 and an RMSE of 7.0 q ha-1 during the development of the central stem. The two best yield estimates are similar in most cases (for more than 80% of the monitored fields), and the differences are related to discrepancies in the crop growth cycle and/or the consequences of pests.
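    A minimal sketch of this kind of neural-network yield regression, assuming a small table of per-field multi-temporal features (e.g., red reflectance and backscatter per acquisition date) and measured yields; the data below are synthetic and the network configuration and evaluation protocol of the actual study are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# X: one row per field, columns are multi-temporal satellite features (synthetic stand-in)
X = rng.random((40, 8))
y = 60 + 30 * X[:, 0] + 5 * rng.standard_normal(40)   # synthetic yields in q/ha

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
model.fit(X, y)
predicted_yield = model.predict(X[:5])
```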

  2. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% compared to a linear kernel function, and by up to 1% compared to a 3rd-degree polynomial kernel function.
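    The kernel comparison described above can be mocked up with scikit-learn as follows; the features and labels here are random placeholders, not the UAVSAR polarimetric features, and the C and degree values are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 9))                 # stand-in for multi-temporal alpha features
y = rng.integers(0, 4, size=300)         # stand-in crop-class labels

for kernel, extra in [("linear", {}), ("poly", {"degree": 3}), ("rbf", {"gamma": "scale"})]:
    clf = SVC(kernel=kernel, C=10, **extra)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:>6}: overall accuracy = {acc:.3f}")
```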

  3. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    Science.gov (United States)

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
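    A toy stand-in for the shared plus attribute-specific design described above (not the authors' DMTL network): a small PyTorch module with a shared convolutional trunk, a regression head for an ordinal attribute (age) and a classification head for a nominal one (gender). The layer sizes and losses are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTrunkDMTL(nn.Module):
    """Shared CNN features with heterogeneous attribute heads."""
    def __init__(self, n_gender=2):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.age_head = nn.Linear(32, 1)            # ordinal/continuous attribute
        self.gender_head = nn.Linear(32, n_gender)  # nominal attribute

    def forward(self, x):
        h = self.shared(x)
        return self.age_head(h).squeeze(1), self.gender_head(h)

model = SharedTrunkDMTL()
images = torch.randn(4, 3, 64, 64)                  # hypothetical face crops
age = torch.tensor([25.0, 40.0, 31.0, 60.0])
gender = torch.tensor([0, 1, 1, 0])
age_pred, gender_logits = model(images)
loss = F.mse_loss(age_pred, age) + F.cross_entropy(gender_logits, gender)
loss.backward()
```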

  4. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrilla, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single frame pose recovery, temporal integration and model adaptation. Single frame pose recovery consists of a hypothesis

  5. Estimating Vegetation Primary Production in the Heihe River Basin of China with Multi-Source and Multi-Scale Data.

    Directory of Open Access Journals (Sweden)

    Tianxiang Cui

    Full Text Available Estimating gross primary production (GPP) and net primary production (NPP) is of significant importance in studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimate GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, by integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as a product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. The autotrophic respiration (Ra) was determined using eco-physiological process theories, and the daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was run in an arid and semi-arid region of the Heihe River Basin, China to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns in their distribution over the Heihe River Basin during the growing season due to the temperature, water and solar influx conditions. After validation against ground-based measurements, the MODIS GPP product (MOD17A2H) and results reported in recent literature, we found the MuSyQ-NPP algorithm could yield an RMSE of 2.973 gC m-2 d-1 and an R of 0.842 when compared with ground-based GPP, while an RMSE of 8.010 gC m-2 d-1 and an R of 0.682 can be achieved for MODIS GPP; the estimated NPP values were also well within the range of previous literature, which proved the reliability of
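    The bookkeeping of a light-use-efficiency GPP/NPP scheme like the one described above can be written in a few lines; the scalars and the respiration term below are simplified placeholders, not the MuSyQ-NPP parameterization.

```python
def gpp_lue(par, fpar, eps_max, t_scalar, w_scalar):
    """Daily GPP (gC m-2 d-1): maximum light-use efficiency down-regulated by
    temperature and water scalars, times the PAR absorbed by vegetation."""
    return eps_max * t_scalar * w_scalar * fpar * par

def npp_from_gpp(gpp, ra):
    """NPP as the balance between GPP and autotrophic respiration Ra."""
    return gpp - ra

# Hypothetical example values
gpp = gpp_lue(par=10.0, fpar=0.6, eps_max=1.8, t_scalar=0.9, w_scalar=0.7)
npp = npp_from_gpp(gpp, ra=0.45 * gpp)   # Ra as a crude fraction of GPP for illustration
```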

  6. A multi-method and multi-scale approach for estimating city-wide anthropogenic heat fluxes

    Science.gov (United States)

    Chow, Winston T. L.; Salamanca, Francisco; Georgescu, Matei; Mahalov, Alex; Milne, Jeffrey M.; Ruddell, Benjamin L.

    2014-12-01

    A multi-method approach estimating summer waste heat emissions from anthropogenic activities (QF) was applied for a major subtropical city (Phoenix, AZ). These included detailed, quality-controlled inventories of city-wide population density and traffic counts to estimate waste heat emissions from population and vehicular sources respectively, and also included waste heat simulations derived from urban electrical consumption generated by a coupled building energy - regional climate model (WRF-BEM + BEP). These component QF data were subsequently summed and mapped through Geographic Information Systems techniques to enable analysis over local (i.e. census-tract) and regional (i.e. metropolitan area) scales. Through this approach, local mean daily QF estimates compared reasonably versus (1.) observed daily surface energy balance residuals from an eddy covariance tower sited within a residential area and (2.) estimates from inventory methods employed in a prior study, with improved sensitivity to temperature and precipitation variations. Regional analysis indicates substantial variations in both mean and maximum daily QF, which varied with urban land use type. Average regional daily QF was ∼13 W m-2 for the summer period. Temporal analyses also indicated notable differences using this approach with previous estimates of QF in Phoenix over different land uses, with much larger peak fluxes averaging ∼50 W m-2 occurring in commercial or industrial areas during late summer afternoons. The spatio-temporal analysis of QF also suggests that it may influence the form and intensity of the Phoenix urban heat island, specifically through additional early evening heat input, and by modifying the urban boundary layer structure through increased turbulence.
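    The component-summing step described above amounts, in essence, to aggregating the population, traffic and building waste-heat terms per spatial unit before mapping; a minimal pandas sketch (column names and values are assumptions):

```python
import pandas as pd

# Hypothetical per-sample component fluxes in W m-2, keyed by census tract
qf = pd.DataFrame({
    "tract_id":    [1, 1, 2, 2],
    "qf_pop":      [3.0, 3.2, 1.1, 1.0],
    "qf_traffic":  [6.5, 7.0, 2.0, 2.2],
    "qf_building": [5.0, 4.8, 1.5, 1.4],
})

by_tract = qf.groupby("tract_id")[["qf_pop", "qf_traffic", "qf_building"]].mean()
by_tract["qf_total"] = by_tract.sum(axis=1)   # total QF per tract
print(by_tract)
```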

  7. Direct estimation of patient attributes from anatomical MRI based on multi-atlas voting

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2016-01-01

    Full Text Available MRI brain atlases are widely used for automated image segmentation, and in particular, recent developments in multi-atlas techniques have shown highly accurate segmentation results. In this study, we extended the role of the atlas library from mere anatomical reference to a comprehensive knowledge database with various patient attributes, such as demographic, functional, and diagnostic information. In addition to using the selected (heavily-weighted) atlases to achieve high segmentation accuracy, we tested whether the non-anatomical attributes of the selected atlases could be used to estimate patient attributes. This can be considered a context-based image retrieval (CBIR) approach, embedded in the multi-atlas framework. We first developed an image similarity measurement to weigh the atlases on a structure-by-structure basis, and then, the attributes of the multiple atlases were weighted to estimate the patient attributes. We tested this concept first by estimating age in a normal population; we then performed functional and diagnostic estimations in Alzheimer's disease patients. The accuracy of the estimated patient attributes was measured against the actual clinical data, and the performance was compared to conventional volumetric analysis. The proposed CBIR framework by multi-atlas voting would be the first step toward a knowledge-based support system for quantitative radiological image reading and diagnosis.
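    Once per-atlas image similarities are available, the attribute-estimation step reduces to a similarity-weighted average of the atlas attributes; a minimal sketch (the similarity values and ages below are hypothetical, and the actual method weights atlases structure-by-structure):

```python
import numpy as np

def estimate_patient_attribute(similarities, atlas_attributes):
    """Weight each selected atlas attribute (e.g., age) by its image similarity to the patient."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(atlas_attributes, dtype=float)))

# Hypothetical example: three selected atlases with ages 62, 70 and 75 years
print(estimate_patient_attribute([0.9, 0.6, 0.3], [62, 70, 75]))
```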

  8. Direct estimation of patient attributes from anatomical MRI based on multi-atlas voting.

    Science.gov (United States)

    Wu, Dan; Ceritoglu, Can; Miller, Michael I; Mori, Susumu

    MRI brain atlases are widely used for automated image segmentation, and in particular, recent developments in multi-atlas techniques have shown highly accurate segmentation results. In this study, we extended the role of the atlas library from mere anatomical reference to a comprehensive knowledge database with various patient attributes, such as demographic, functional, and diagnostic information. In addition to using the selected (heavily-weighted) atlases to achieve high segmentation accuracy, we tested whether the non-anatomical attributes of the selected atlases could be used to estimate patient attributes. This can be considered a context-based image retrieval (CBIR) approach, embedded in the multi-atlas framework. We first developed an image similarity measurement to weigh the atlases on a structure-by-structure basis, and then, the attributes of the multiple atlases were weighted to estimate the patient attributes. We tested this concept first by estimating age in a normal population; we then performed functional and diagnostic estimations in Alzheimer's disease patients. The accuracy of the estimated patient attributes was measured against the actual clinical data, and the performance was compared to conventional volumetric analysis. The proposed CBIR framework by multi-atlas voting would be the first step toward a knowledge-based support system for quantitative radiological image reading and diagnosis.

  9. A hybrid spatio-temporal data indexing method for trajectory databases.

    Science.gov (United States)

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-07-21

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services or location intelligence. Trajectory data are growing massively and becoming semantically more complicated, which poses a great challenge to spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points into nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in aspects such as generation efficiency, query performance and supported query types.
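    The grouping step described above, collecting consecutive trajectory points into segments that could serve as leaf-node entries, can be sketched as follows; the gap thresholds are assumptions, not the paper's spatio-temporal interval criterion.

```python
import math

def group_trajectory_points(points, max_gap_s=60.0, max_dist_m=200.0):
    """Group consecutive (t, x, y) points into segments, splitting whenever the
    temporal or spatial gap to the previous point exceeds a threshold."""
    segments, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        dt = cur[0] - prev[0]
        dd = math.hypot(cur[1] - prev[1], cur[2] - prev[2])
        if dt > max_gap_s or dd > max_dist_m:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments
```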

  10. A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases

    Directory of Open Access Journals (Sweden)

    Shengnan Ke

    2014-07-01

    Full Text Available In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services or location intelligence. Trajectory data are growing massively and becoming semantically more complicated, which poses a great challenge to spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points into nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in aspects such as generation efficiency, query performance and supported query types.

  11. A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases

    Science.gov (United States)

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-01-01

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services or location intelligence. Trajectory data are growing massively and becoming semantically more complicated, which poses a great challenge to spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points into nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in aspects such as generation efficiency, query performance and supported query types. PMID:25051028

  12. MODIS multi-temporal data retrieval and processing toolbox

    NARCIS (Netherlands)

    Mattiuzzi, M.; Verbesselt, J.; Klisch, A.

    2012-01-01

    The package functionality is focused on the download and processing of multi-temporal datasets from MODIS sensors. All standard MODIS grid data can be accessed and processed by the package routines. The package is still in alpha development, and not all functionality is available yet.

  13. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    Science.gov (United States)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve the rainfall estimation from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated into IMERG (Integrated Multi-satellitE Retrievals for GPM, the Global Precipitation Measurement mission) to create a precipitation product at 0.1 x 0.1 degree resolution over the domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has a high spatial and temporal resolution, it overestimates or underestimates rainfall due to some limitations. PERSIANN-CCS estimates rainfall based on information extracted from IR channels at three different temperature threshold levels (220, 235, and 253 K). Because the algorithm relies only on infrared data to estimate rainfall indirectly from this channel, it misses rainfall from warm clouds and produces false estimates for non-precipitating cold clouds. In this research, the effectiveness of using other channels of the GOES satellites, such as the visible and water vapor channels, has been investigated. By using multiple channels, precipitation can be estimated from the information extracted from several sensors. In addition, instead of using an exponential function to estimate rainfall from cloud-top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.
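    The probabilistic idea described above can be illustrated by conditioning rain-rate distributions on cloud-top brightness temperature and reading off a quantile instead of a single deterministic value; a minimal sketch under assumed bin edges and data arrays, not the operational PERSIANN-CCS procedure:

```python
import numpy as np

def conditional_rain_distributions(tb, rain, tb_bins):
    """Collect historical rain rates per brightness-temperature bin."""
    tb, rain = np.asarray(tb), np.asarray(rain)
    idx = np.digitize(tb, tb_bins)
    return {int(i): rain[idx == i] for i in np.unique(idx)}

def probabilistic_rain_estimate(dists, tb_value, tb_bins, q=0.5):
    """Return a quantile (here the median) of the rain-rate distribution
    conditioned on the observed brightness temperature."""
    i = int(np.digitize([tb_value], tb_bins)[0])
    sample = dists.get(i)
    if sample is None or sample.size == 0:
        return 0.0
    return float(np.quantile(sample, q))
```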

  14. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only a state-of-the-art research direction of both big data analysis and data visualization, but also a core module of the pan-spatial information system. This paper reviews existing visual analysis methods at three levels: descriptive visual analysis, explanatory visual analysis and exploratory visual analysis, focusing on the multi-source, multi-granularity, multi-modal and complex-association characteristics of spatio-temporal big data. The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis and exploratory visual reasoning in the visual analysis of spatio-temporal big data are discussed. Research shows that the study of descriptive visual analysis for data visualization is relatively mature. Explanatory visual analysis has become the focus of big data analysis; it is mainly based on interactive data mining in a visual environment to diagnose the implicit causes of problems. Exploratory visual analysis methods, in turn, still need a major breakthrough.

  15. Multi-temporal Linkages of Net Ecosystem Exchanges (NEE) with the Climatic and Ecohydrologic Drivers in a Florida Everglades Short-hydroperiod Freshwater Marsh

    Science.gov (United States)

    Zaki, M. T.; Abdul-Aziz, O. I.; Ishtiaq, K. S.

    2017-12-01

    Wetlands are considered one of the most productive and ecologically valuable ecosystems on Earth. We investigated the multi-temporal linkages of net ecosystem exchange (NEE) with the relevant climatic and ecohydrological drivers for a Florida Everglades short-hydroperiod freshwater wetland. Hourly NEE observations and the associated driving variables during 2008-12 were collected from the AmeriFlux and EDEN databases, and then averaged over four temporal scales (1-day, 8-day, 15-day, and 30-day). Pearson correlation and factor analysis were employed to identify the interrelations and grouping patterns among the participating variables for each time scale. The climatic and ecohydrological linkages of NEE were then reliably estimated using bootstrapped (1000 iterations) partial least squares regressions, which resolve multicollinearity. The analytics identified four bio-physical components exhibiting relatively robust interrelations and grouping patterns with NEE across the temporal scales. In general, NEE was most strongly linked with the 'radiation-energy (RE)' component, while having a moderate linkage with the 'temperature-hydrology (TH)' and 'aerodynamic (AD)' components. However, the 'ambient atmospheric CO2 (AC)' component was very weakly linked to NEE. Further, the linkages with RE and TH showed a decreasing trend with increasing time scale (1-30 days). In contrast, the linkages of the AD and AC components increased from the 1-day to the 8-day scale, and then remained relatively invariable at longer scales of aggregation. The estimated linkages provide insights into the dominant biophysical process components and drivers of ecosystem carbon in the Everglades. The invariant linking pattern and linkages would help to develop low-dimensional models to reliably predict CO2 fluxes from tidal freshwater wetlands.
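    A minimal sketch of the bootstrapped partial-least-squares step described above, assuming X holds the (scaled) driver variables per time step and y the NEE values; the number of components and iterations are assumptions, not the study's exact settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def bootstrap_pls_coefficients(X, y, n_components=2, n_boot=1000, seed=0):
    """Bootstrap PLS regression coefficients linking NEE to its drivers.

    Returns the mean coefficient per driver and a 95% percentile interval."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample rows with replacement
        pls = PLSRegression(n_components=n_components)
        pls.fit(X[idx], y[idx])
        coefs.append(np.ravel(pls.coef_))           # one coefficient per driver
    coefs = np.array(coefs)
    return coefs.mean(axis=0), np.percentile(coefs, [2.5, 97.5], axis=0)
```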

  16. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data.

    Science.gov (United States)

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-02-01

    The accumulation of thermal time usually represents the local heat resources that drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (Ta) and of filling in missing pixels due to cloudy and low-quality images in the calculation of growing degree days (GDD) from remotely sensed data, a novel spatio-temporal algorithm for Ta estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed Ta based on MODIS land surface temperature (LST) data. The verification of maximum Ta, minimum Ta, GDD, and AGDD derived from MODIS data against meteorological calculations was satisfactory in all cases, with correlations significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with a relative error of almost 10%. However, the feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation, and of estimating the 2011 heat accumulation distribution using only MODIS data, was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.
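    The degree-day bookkeeping itself is straightforward once daily maximum and minimum air temperatures are available; a minimal sketch using the simple averaging method with a 10 °C base (the temperature values below are hypothetical):

```python
import numpy as np

def daily_gdd(t_max, t_min, t_base=10.0):
    """Growing degree days for one day, using the simple averaging method."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

def accumulate_gdd(t_max_series, t_min_series, t_base=10.0):
    """AGDD as the running sum of daily GDD over the season."""
    gdd = [daily_gdd(a, b, t_base) for a, b in zip(t_max_series, t_min_series)]
    return np.cumsum(gdd)

# Hypothetical week of reconstructed air temperatures (°C)
print(accumulate_gdd([24, 26, 25, 28, 30, 29, 27], [12, 13, 14, 15, 16, 15, 14]))
```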

  17. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    Science.gov (United States)

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
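    Of the label-fusion strategies compared above, Majority Voting is the simplest to sketch: each registered atlas segmentation casts one vote per voxel and the most frequent label wins. A minimal NumPy illustration on toy arrays (STAPLE, SBA and SIMPLE are considerably more involved and are not shown):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse registered atlas segmentations by per-voxel majority voting."""
    stack = np.stack(label_maps, axis=0)             # shape: (n_atlases, *volume_shape)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)], axis=0)
    return votes.argmax(axis=0)

# Hypothetical toy example with three 2x2 "volumes" (0 = background, 1 = skull)
atlases = [np.array([[1, 0], [1, 1]]),
           np.array([[1, 0], [0, 1]]),
           np.array([[1, 1], [0, 1]])]
print(majority_vote(atlases))
```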

  18. Spatio-Temporal Database of Places Located in the Border Area

    Directory of Open Access Journals (Sweden)

    Albina Mościcka

    2018-03-01

    Full Text Available As a result of changes in boundaries, the political affiliation of locations also changes. Data on such locations are now collected in datasets with reference either to present or to past space. Therefore, they can refer to localities that no longer exist, have a different name now, or lie outside the current borders of the country. Moreover, thematic data describing the past are related to events, customs and items that are always "somewhere". Storytelling about the past is incomplete without knowledge about the places in which a given story happened. Therefore, the objective of the article is to discuss the concept of a spatio-temporal database for border areas as an "engine" for the visualization of thematic data in time-oriented geographical space. The paper focuses on studying the place names on the Polish-Ukrainian border, analyzing the changes that have occurred in this area over the past 80 years (during which three different countries existed in this territory), and defining the rules of changeability. As a result of the research, the architecture of spatio-temporal databases is defined, as well as the rules for using them for data geovisualisation in a historical context.

  19. Visually defining and querying consistent multi-granular clinical temporal abstractions.

    Science.gov (United States)

    Combi, Carlo; Oliboni, Barbara

    2012-02-01

    The main goal of this work is to propose a framework for the visual specification and query of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high-level patterns derived from several time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise and high-level description of temporal raw data, and a suitable way to support decision making. Granularities define partitions on the time line and allow one to represent time and, thus, temporal clinical information at different levels of detail, according to the requirements of the represented clinical domain. The visual representation of temporal information has been considered for several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and could benefit from visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated with clinical users for the specification of temporal clinical abstractions. Visual approaches to boolean queries have been considered in recent years, confirming that visual support for the specification of complex boolean queries is both an important and a difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. A new algorithm, underlying the visual language, allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on

  20. Spatio-temporal patterns of Cu contamination in mosses using geostatistical estimation

    International Nuclear Information System (INIS)

    Martins, Anabela; Figueira, Rui; Sousa, António Jorge; Sérgio, Cecília

    2012-01-01

    Several recent studies have reported temporal trends in metal contamination in mosses, but such assessments did not evaluate uncertainty in temporal changes, therefore providing weak statistical support for time comparisons. Furthermore, levels of contaminants in the environment change in both space and time, requiring space-time modelling methods for map estimation. We propose an indicator of spatial and temporal variation based on space-time estimation by indicator kriging, where uncertainty at each location is estimated from the local distribution function, thereby calculating variability intervals for comparison between several biomonitoring dates. This approach was exemplified using copper concentrations in mosses from four Portuguese surveys (1992, 1997, 2002 and 2006). Using this approach, we identified a general decrease in copper contamination, but spatial patterns were not uniform, and from the uncertainty intervals, changes could not be considered significant in the majority of the study area. Highlights: copper contamination in mosses was estimated by spatio-temporal kriging between 1992 and 2006; local distribution functions were determined to define variation intervals at each location; the significance of temporal changes was assessed using an indicator based on the uncertainty interval; there is a general decrease in copper contamination, but spatial patterns are not uniform.

  1. An Interactive Multi-instrument Database of Solar Flares

    Energy Technology Data Exchange (ETDEWEB)

    Sadykov, Viacheslav M; Kosovichev, Alexander G; Oria, Vincent; Nita, Gelu M [Center for Computational Heliophysics, New Jersey Institute of Technology, Newark, NJ 07102 (United States)

    2017-07-01

    Solar flares are complicated physical phenomena that are observable in a broad range of the electromagnetic spectrum, from radio waves to γ-rays. For a more comprehensive understanding of flares, it is necessary to perform a combined multi-wavelength analysis using observations from many satellites and ground-based observatories. For an efficient data search, integration of different flare lists, and representation of observational data, we have developed the Interactive Multi-Instrument Database of Solar Flares (IMIDSF, https://solarflare.njit.edu/). The web-accessible database is fully functional and allows the user to search for uniquely identified flare events based on their physical descriptors and the availability of observations by a particular set of instruments. Currently, the data from three primary flare lists (Geostationary Operational Environmental Satellites, RHESSI, and HEK) and a variety of other event catalogs (Hinode, Fermi GBM, Konus-WIND, the OVSA flare catalogs, the CACTus CME catalog, the Filament eruption catalog) and observing logs (IRIS and Nobeyama coverage) are integrated, and an additional set of physical descriptors (temperature and emission measure) is provided along with an observing summary, data links, and multi-wavelength light curves for each flare event since 2002 January. We envision that this new tool will allow researchers to significantly speed up the search of events of interest for statistical and case studies.

  2. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    Science.gov (United States)

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

    Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate them to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data and that can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and can rapidly give additional layers of annotation to predicted genes. In better studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user interface for configuring the data import and for querying the database. Queries can also be run from the command line, and the database can be queried directly through programming language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database. ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or

  3. Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process

    Directory of Open Access Journals (Sweden)

    ZHANG Feng

    2016-02-01

    Full Text Available The core of modern cadastre management is to renew the cadastral database and keep it current and topologically consistent, and to preserve its integrity. This paper analyzes the changes to various cadastral objects, and the linkages among them, during the update process. Combining object-oriented modeling techniques with the expression of spatio-temporal objects' evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process, in line with the way people think about change. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and cascade updating and history back-tracing of cadastral features, land use and buildings are realized. This model is implemented in the cadastral management system ReGIS. Cascade changes are triggered by direct driving forces or perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, analysis and the forecasting of future changes.

  4. Pursuit of a scalable high performance multi-petabyte database

    CERN Document Server

    Hanushevsky, A

    1999-01-01

    When the BaBar experiment at the Stanford Linear Accelerator Center starts in April 1999, it will generate approximately 200 TB/year of data at a rate of 10 MB/sec for 10 years. A mere six years later, CERN, the European Laboratory for Particle Physics, will start an experiment whose data storage requirements are two orders of magnitude larger. In both experiments, all of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). The quantity and rate at which the data is produced requires the use of a high performance hierarchical mass storage system in place of a standard Unix file system. Furthermore, the distributed nature of the experiment, involving scientists from 80 Institutions in 10 countries, also requires an extended security infrastructure not commonly found in standard Unix file systems. The combination of challenges that must be overcome in order to effectively deal with a multi-petabyte object oriented database is substantial. Our particular approach...

  5. Comparative Study of Temporal Aspects of Data Collection Using Remote Database vs. Abstract Database in WSN

    Directory of Open Access Journals (Sweden)

    Abderrahmen BELFKIH

    2015-06-01

    Full Text Available The real-time aspect is an important factor for Wireless Sensor Network (WSN) applications, which demand timely data delivery to reflect the current state of the environment. However, most existing work in the field of sensor networks is only interested in data processing and energy consumption, with the aim of improving the lifetime of the WSN. Extensive research has been done on data processing in WSNs. Some works have been interested in improving data collection methods, and others have focused on various query processing techniques. In recent years, query processing using abstract database technology such as Cougar and TinyDB has shown efficient results in many studies. This paper presents a study of timing properties through two data processing techniques for WSNs. The first is based on a warehousing approach, in which data are collected and stored in a remote database. The second is based on query processing using an abstract database such as TinyDB. This study has allowed us to identify some factors which improve compliance with temporal constraints.

  6. A Method for Estimating the Aerodynamic Roughness Length with NDVI and BRDF Signatures Using Multi-Temporal Proba-V Data

    Directory of Open Access Journals (Sweden)

    Mingzhao Yu

    2016-12-01

    Full Text Available Aerodynamic roughness length is an important parameter for surface flux estimates. This paper develops an innovative method for the estimation of aerodynamic roughness length (z0m) over farmland with a new vegetation index, the Hot-darkspot Vegetation Index (HDVI). To obtain this new index, the normalized-difference hot-darkspot index (NDHD) is introduced, using a semi-empirical, kernel-driven bidirectional reflectance model with multi-temporal Proba-V 300-m top-of-canopy (TOC) reflectance products. A linear relationship between HDVI and z0m was found during the crop growth period. Wind profile data from two field automatic weather stations (AWS) were used to calibrate the model: one site is in Guantao County in the Hai Basin, where double-cropping systems and crop rotations with summer maize and winter wheat are implemented; the other is in the middle reach of the Heihe River Basin, from the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) project, with spring maize as the main crop. An iterative algorithm based on Monin–Obukhov similarity theory is employed to calculate the field z0m from the time series. Results show that the relationship between HDVI and z0m is more pronounced than that between NDVI and z0m for spring maize at the Yingke site, with an R2 value that improved from 0.636 to 0.772. At the Guantao site, HDVI also exhibits better performance than NDVI, with R2 increasing from 0.630 to 0.793 for summer maize and from 0.764 to 0.790 for winter wheat. HDVI can capture the impacts of crop residue on z0m, whereas NDVI cannot.
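    A simplified sketch of the index-to-roughness step described above: compute a normalized-difference hot-darkspot index from BRDF-modelled hotspot and darkspot reflectances and fit a linear relation against AWS-derived z0m. The exact construction of HDVI from NDVI and NDHD is not reproduced here, and all numbers below are hypothetical.

```python
import numpy as np

def ndhd(rho_hotspot, rho_darkspot):
    """Normalized-difference hot-darkspot index from BRDF-modelled reflectances."""
    return (rho_hotspot - rho_darkspot) / (rho_hotspot + rho_darkspot)

# Hypothetical multi-temporal index values and AWS-derived roughness lengths (m)
index_series = np.array([0.12, 0.20, 0.31, 0.42, 0.50])
z0m_series   = np.array([0.02, 0.04, 0.07, 0.10, 0.12])

slope, intercept = np.polyfit(index_series, z0m_series, 1)   # calibrate linear relation
z0m_predicted = slope * index_series + intercept
```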

  7. Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.

    Science.gov (United States)

    Filby, A. M. Iliana

    1996-01-01

    Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…

  8. Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Full Text Available Change detection is usually treated as a problem of explicitly detecting land cover transitions in satellite images obtained at different times, and helps with emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective of this study is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors, since spatial displacement and spectral differences between multi-temporal images are evaluated. To this end, a total of four cross-fused images are generated with multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD method—a measure for the spectral distortion of change information—is applied to the fused images. In this experiment, the land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively. Moreover, the proposed method performed better when differentiating the water area from the vegetation area compared to the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.

  9. Estimating the state of large spatio-temporally chaotic systems

    International Nuclear Information System (INIS)

    Ott, E.; Hunt, B.R.; Szunyogh, I.; Zimin, A.V.; Kostelich, E.J.; Corazza, M.; Kalnay, E.; Patil, D.J.; Yorke, J.A.

    2004-01-01

    We consider the estimation of the state of a large spatio-temporally chaotic system from noisy observations and knowledge of a system model. Standard state estimation techniques using the Kalman filter approach are not computationally feasible for systems with very many effective degrees of freedom. We present and test a new technique (called a Local Ensemble Kalman Filter), generally applicable to large spatio-temporally chaotic systems for which correlations between system variables evaluated at different points become small at large separation between the points
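    For orientation, the core analysis step that ensemble-based schemes of this kind build on is the stochastic ensemble Kalman filter update with perturbed observations; the sketch below is the basic global form, not the localized variant proposed in the paper, and the matrix shapes are assumptions.

```python
import numpy as np

def enkf_analysis(Xf, y_obs, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    Xf: (n_state, n_members) forecast ensemble, y_obs: (n_obs,) observations,
    H: (n_obs, n_state) observation operator, R: (n_obs, n_obs) obs covariance."""
    n_state, n_members = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)          # ensemble anomalies
    HA = H @ A
    Pf_Ht = A @ HA.T / (n_members - 1)
    S = HA @ HA.T / (n_members - 1) + R
    K = Pf_Ht @ np.linalg.inv(S)                     # Kalman gain estimated from the ensemble
    # perturb the observations so the analysis ensemble has the right spread
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n_members).T
    return Xf + K @ (Y - H @ Xf)
```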

  10. Bamboo mapping of Ethiopia, Kenya and Uganda for the year 2016 using multi-temporal Landsat imagery

    Science.gov (United States)

    Zhao, Yuanyuan; Feng, Duole; Jayaraman, Durai; Belay, Daniel; Sebrala, Heiru; Ngugi, John; Maina, Eunice; Akombo, Rose; Otuoma, John; Mutyaba, Joseph; Kissa, Sam; Qi, Shuhua; Assefa, Fiker; Oduor, Nellie Mugure; Ndawula, Andrew Kalema; Li, Yanxia; Gong, Peng

    2018-04-01

    Mapping the spatial distribution of bamboo in East Africa is necessary for biodiversity conservation, resource management and policy making for rural poverty reduction. In this study, we produced a contemporary bamboo cover map of Ethiopia, Kenya and Uganda for the year 2016 using multi-temporal Landsat imagery series at 30 m spatial resolution. This is the first bamboo map generated using remotely sensed data for these three East African countries, which possess most of the African bamboo resource. The producer's and user's accuracies for bamboo are 79.2% and 84.0%, respectively. The hotspots with large amounts of bamboo were identified, and the area of bamboo coverage for each region was estimated according to the map. The seasonal growth status of two typical bamboo zones (one highland bamboo and one lowland bamboo) was analyzed, and the multi-temporal imagery proved to be useful in differentiating bamboo from other vegetation classes. The images acquired from September to February are less contaminated by clouds and shadows, and the image series covers the dying-back process of lowland bamboo, both of which were helpful for bamboo identification in East Africa.

  11. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    Full Text Available This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user, and thereby cannot be used for a multi-user system. Even when they can be used to track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances the detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. Then, it calculates features based on the histogram of oriented gradients for the detected facial region to identify multiple users, and selects the template that best matches each user from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490, compared with benchmark algorithms.
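    The identification step described above can be illustrated with a HOG descriptor and a nearest-template match; the patch size, HOG parameters and distance metric below are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def identify_user(face_gray, templates):
    """Match a detected face against a small enrolled-user database.

    templates maps user name -> HOG feature vector computed the same way."""
    patch = resize(face_gray, (64, 64), anti_aliasing=True)
    feat = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    names = list(templates)
    distances = [np.linalg.norm(feat - templates[name]) for name in names]
    return names[int(np.argmin(distances))]
```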

  12. Variational multi-valued velocity field estimation for transparent sequences

    DEFF Research Database (Denmark)

    Ramírez-Manzanares, Alonso; Rivera, Mariano; Kornprobst, Pierre

    2011-01-01

    Motion estimation in sequences with transparencies is an important problem in robotics and medical imaging applications. In this work we propose a variational approach for estimating multi-valued velocity fields in transparent sequences. Starting from existing local motion estimators, we derive a variational model for integrating such local information in space and time in order to obtain a robust estimation of the multi-valued velocity field. With this approach, we can indeed estimate multi-valued velocity fields which are not necessarily piecewise constant on a layer – each layer can evolve...

  13. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey

    Science.gov (United States)

    2013-01-01

    Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in

  14. Multi-pitch Estimation using Semidefinite Programming

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Vandenberghe, Lieven

    2017-01-01

    Multi-pitch estimation concerns the problem of estimating the fundamental frequencies (pitches) and amplitudes/phases of multiple superimposed harmonic signals, with applications in music, speech, vibration analysis, etc. In this paper we formulate a complex-valued multi-pitch estimator via a semidefinite programming representation of an atomic decomposition over a continuous dictionary of complex exponentials, and extend this to real-valued data via a real semidefinite program with the same dimensions (i.e. half the size). We further impose a continuous frequency constraint naturally occurring from assuming a Nyquist-sampled signal by adding an additional semidefinite constraint. We show that the proposed estimator has superior performance compared to state-of-the-art methods for separating two closely spaced fundamentals and approximately achieves the asymptotic Cramér-Rao lower bound.

  15. Design of multi-tiered database application based on CORBA component in SDUV-FEL system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin

    2004-01-01

    The drawbacks of the usual two-tiered database architecture were analyzed and the Shanghai Deep Ultraviolet Free-Electron Laser database system under development was discussed. A project for realizing a multi-tiered database architecture based on common object request broker architecture (CORBA) components and a middleware model implemented in C++ was presented. A magnet database was used to illustrate the design of the CORBA component. (authors)

  16. Azimuth Ambiguities Removal in Littoral Zones Based on Multi-Temporal SAR Images

    Directory of Open Access Journals (Sweden)

    Xiangguang Leng

    2017-08-01

    Full Text Available Synthetic aperture radar (SAR) is one of the most important techniques for ocean monitoring. Azimuth ambiguities are a real problem in SAR images today, and can cause performance degradation in SAR ocean applications. In particular, littoral zones can be strongly affected by land-based sources, whereas they are usually regions of interest (ROI). Given the complexity and diversity of littoral zones, azimuth ambiguities removal is a tough problem. As SAR sensors have a repeat cycle, multi-temporal SAR images provide new insight into this problem. A method for azimuth ambiguities removal in littoral zones based on multi-temporal SAR images is proposed in this paper. The proposed processing chain includes co-registration, local correlation, binarization, masking, and restoration steps. It is designed to remove azimuth ambiguities caused by fixed land-based sources. The idea underlying the proposed method is that the sea surface is dynamic, whereas azimuth ambiguities caused by land-based sources are constant. Thus, the temporal consistency of azimuth ambiguities is higher than that of sea clutter. This opens up the possibility of using multi-temporal SAR data to remove azimuth ambiguities. The design of the method and the experimental procedure are based on images from the Sentinel data hub of the European Space Agency (ESA). Both Interferometric Wide Swath (IW) and Stripmap (SM) mode images are taken into account to validate the proposed method. This paper also presents two RGB composition methods for better azimuth ambiguity visualization. Experimental results show that the proposed method can remove azimuth ambiguities in littoral zones effectively.
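
    The core idea of the record – ambiguities from fixed land-based sources persist over time while sea clutter decorrelates – can be sketched as a per-pixel consistency test on a co-registered stack. The statistic, threshold and 3x3 restoration below are illustrative assumptions, not the exact local-correlation and restoration steps of the paper.

```python
import numpy as np

def ambiguity_mask(stack, threshold=0.7):
    """Flag pixels whose intensity is temporally consistent across a
    co-registered multi-temporal SAR stack (shape: T x H x W).

    A coefficient-of-variation based consistency score is used here as a
    stand-in for the paper's local correlation statistic; the threshold is
    an assumption.
    """
    stack = np.asarray(stack, dtype=np.float64)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    consistency = 1.0 - np.clip(std / (mean + 1e-12), 0.0, 1.0)
    return consistency > threshold

def restore_masked_pixels(image, mask):
    """Replace masked (ambiguity) pixels with the median of their unmasked
    3x3 neighbourhood, a crude stand-in for the restoration step."""
    out = image.copy()
    h, w = image.shape
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        patch = image[y0:y1, x0:x1][~mask[y0:y1, x0:x1]]
        if patch.size:
            out[y, x] = np.median(patch)
    return out
```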

  17. Multi-Temporal Multi-Sensor Analysis of Urbanization and Environmental/Climate Impact in China for Sustainable Urban Development

    Science.gov (United States)

    Ban, Yifang; Gong, Peng; Gamba, Paolo; Taubenbock, Hannes; Du, Peijun

    2016-08-01

    The overall objective of this research is to investigate multi-temporal, multi-scale, multi-sensor satellite data for analysis of urbanization and environmental/climate impact in China to support sustainable planning. Multi-temporal multi-scale SAR and optical data have been evaluated for urban information extraction using innovative methods and algorithms, including the KTH-Pavia Urban Extractor, Pavia UEXT, an "exclusion-inclusion" framework for urban extent extraction, and KTH-SEG, a novel object-based classification method for detailed urban land cover mapping. Various pixel-based and object-based change detection algorithms were also developed to extract urban changes. Several Chinese cities including Beijing, Shanghai and Guangzhou were selected as study areas. Spatio-temporal urbanization patterns and environmental impact at the regional, metropolitan and city-core levels were evaluated through ecosystem services, landscape metrics, spatial indices, and/or their combinations. The relationship between land surface temperature and land-cover classes was also analyzed. The urban extraction results showed that urban areas and small towns could be well extracted from multitemporal SAR data with the KTH-Pavia Urban Extractor and UEXT. The fusion of SAR data at multiple scales from multiple sensors was proven to improve urban extraction. For urban land cover mapping, the results show that the fusion of multitemporal SAR and optical data could produce detailed land cover maps with better accuracy than SAR or optical data alone. Pixel-based and object-based change detection algorithms developed within the project were effective in extracting urban changes. Comparing the urban land cover results from multitemporal multi-sensor data, the environmental impact analysis indicates major losses for food supply, noise reduction, runoff mitigation, waste treatment and global climate regulation services through landscape structural changes in terms of decreases in service area, edge

  18. Probing temporal aspects of high-order harmonic pulses via multi-colour, multi-photon ionization processes

    Energy Technology Data Exchange (ETDEWEB)

    Mauritsson, J [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States); Johnsson, P [Department of Physics, Lund Institute of Technology, PO Box 118, SE-22100 Lund (Sweden); Lopez-Martens, R [Department of Physics, Lund Institute of Technology, PO Box 118, SE-22100 Lund (Sweden); Varju, K [Department of Physics, Lund Institute of Technology, PO Box 118, SE-22100 Lund (Sweden); L' Huillier, A [Department of Physics, Lund Institute of Technology, PO Box 118, SE-22100 Lund (Sweden); Gaarde, M B [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States); Schafer, K J [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States)

    2005-07-14

    High-order harmonics generated through the interaction of atoms and strong laser fields are a versatile, laboratory-scale source of extreme ultraviolet (XUV) radiation on a femtosecond or even attosecond time-scale. In order to be a useful experimental tool, however, this radiation has to be well characterized, both temporally and spectrally. In this paper we discuss how multi-photon, multi-colour ionization processes can be used to completely characterize either individual harmonics or attosecond pulse trains. In particular, we discuss the influence of the intensity and duration of the probe laser, and how these parameters affect the accuracy of the XUV characterization.

  19. Probing temporal aspects of high-order harmonic pulses via multi-colour, multi-photon ionization processes

    International Nuclear Information System (INIS)

    Mauritsson, J; Johnsson, P; Lopez-Martens, R; Varju, K; L'Huillier, A; Gaarde, M B; Schafer, K J

    2005-01-01

    High-order harmonics generated through the interaction of atoms and strong laser fields are a versatile, laboratory-scale source of extreme ultraviolet (XUV) radiation on a femtosecond or even attosecond time-scale. In order to be a useful experimental tool, however, this radiation has to be well characterized, both temporally and spectrally. In this paper we discuss how multi-photon, multi-colour ionization processes can be used to completely characterize either individual harmonics or attosecond pulse trains. In particular, we discuss the influence of the intensity and duration of the probe laser, and how these parameters affect the accuracy of the XUV characterization.

  20. MULTI-TEMPORAL AND MULTI-SENSOR IMAGE MATCHING BASED ON LOCAL FREQUENCY INFORMATION

    Directory of Open Access Journals (Sweden)

    X. Liu

    2012-08-01

    Full Text Available Image Matching is often one of the first tasks in many Photogrammetry and Remote Sensing applications. This paper presents an efficient approach to automated multi-temporal and multi-sensor image matching based on local frequency information. Two new independent image representations, Local Average Phase (LAP) and Local Weighted Amplitude (LWA), are presented to emphasize the common scene information, while suppressing the non-common illumination and sensor-dependent information. In order to get the two representations, local frequency information is first obtained from a Log-Gabor wavelet transformation, which is similar to that of the human visual system; then the outputs of odd and even symmetric filters are used to construct the LAP and LWA. The LAP and LWA emphasize the phase and amplitude information, respectively. As these two representations are both derivative-free and threshold-free, they are robust to noise and can keep as much of the image details as possible. A new Compositional Similarity Measure (CSM) is also presented to combine the LAP and LWA with the same weight for measuring the similarity of multi-temporal and multi-sensor images. The CSM can make the LAP and LWA compensate for each other and can make full use of the amplitude and phase of local frequency information. In many image matching applications, the template is usually selected without consideration of its matching robustness and accuracy. In order to overcome this problem, a local best matching point detection method is presented to detect the best matching template. In the detection method, we employ self-similarity analysis to identify the template with the highest matching robustness and accuracy. Experimental results using some real images and simulation images demonstrate that the presented approach is effective for matching image pairs with significant scene and illumination changes and that it has advantages over other state-of-the-art approaches, which include: the
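
    As a rough illustration of frequency-domain local phase/amplitude extraction of the kind used for LAP and LWA, the sketch below band-passes an image with a log-Gabor radial filter and uses the Riesz transform (monogenic signal) to obtain even and odd responses. It is a generic construction rather than the paper's exact filter bank, and the wavelength and bandwidth parameters are assumptions.

```python
import numpy as np

def monogenic_phase_amplitude(img, wavelength=8.0, sigma_on_f=0.55):
    """Local amplitude and phase from a log-Gabor band-pass filter combined
    with the Riesz transform (monogenic signal)."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1.0                     # avoid log(0)/division by zero at DC

    f0 = 1.0 / wavelength                  # centre frequency of the band
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0                  # remove the DC component

    riesz1 = -1j * U / radius              # odd (Riesz) filters
    riesz2 = -1j * V / radius

    F = np.fft.fft2(img)
    even = np.real(np.fft.ifft2(F * log_gabor))
    odd1 = np.real(np.fft.ifft2(F * log_gabor * riesz1))
    odd2 = np.real(np.fft.ifft2(F * log_gabor * riesz2))

    amplitude = np.sqrt(even ** 2 + odd1 ** 2 + odd2 ** 2)
    phase = np.arctan2(np.sqrt(odd1 ** 2 + odd2 ** 2), even)
    return amplitude, phase
```

    A CSM-like comparison could then, for example, average the correlations of the phase maps and of the amplitude maps of two images, although the exact weighting used in the paper is not reproduced here.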

  1. Content-based organization of the information space in multi-database networks

    NARCIS (Netherlands)

    Papazoglou, M.; Milliner, S.

    1998-01-01

    Abstract. Rapid growth in the volume of network-available data, together with the complexity, diversity and terminological fluctuations of different data sources, renders effective access to network-accessible information increasingly difficult to achieve. The situation is particularly cumbersome for users of multi-database systems who

  2. Multi tenancy for cloud-based in-memory column databases workload management and data placement

    CERN Document Server

    Schaffner, Jan

    2014-01-01

    With the proliferation of Software-as-a-Service (SaaS) offerings, it is becoming increasingly important for individual SaaS providers to operate their services at a low cost. This book investigates SaaS from the perspective of the provider and shows how operational costs can be reduced by using "multi tenancy," a technique for consolidating a large number of customers onto a small number of servers. Specifically, the book addresses multi tenancy on the database level, focusing on in-memory column databases, which are the backbone of many important new enterprise applications. For efficiently

  3. Monitoring Powdery Mildew of Winter Wheat by Using Moderate Resolution Multi-Temporal Satellite Imagery

    Science.gov (United States)

    Zhang, Jingcheng; Pu, Ruiliang; Yuan, Lin; Wang, Jihua; Huang, Wenjiang; Yang, Guijun

    2014-01-01

    Powdery mildew is one of the most serious diseases that have a significant impact on the production of winter wheat. As an effective alternative to traditional sampling methods, remote sensing can be a useful tool in disease detection. This study attempted to use multi-temporal moderate resolution satellite-based data of surface reflectances in blue (B), green (G), red (R) and near infrared (NIR) bands from HJ-CCD (CCD sensor on Huanjing satellite) to monitor disease at a regional scale. In a suburban area in Beijing, China, an extensive field campaign for disease intensity survey was conducted at key growth stages of winter wheat in 2010. Meanwhile, corresponding time series of HJ-CCD images were acquired over the study area. In this study, a number of single-stage and multi-stage spectral features, which were sensitive to powdery mildew, were selected by using an independent t-test. With the selected spectral features, four advanced methods: Mahalanobis distance, maximum likelihood classifier, partial least squares regression and mixture tuned matched filtering were tested and evaluated for their performances in disease mapping. The experimental results showed that all four algorithms could generate disease maps with a generally correct distribution pattern of powdery mildew at the grain filling stage (Zadoks 72). However, by comparing these disease maps with ground survey data (validation samples), all of the four algorithms also produced a variable degree of error in estimating the disease occurrence and severity. Further, we found that the integration of MTMF and PLSR algorithms could result in a significant accuracy improvement in identifying and determining the disease intensity (overall accuracy of 72% increased to 78% and kappa coefficient of 0.49 increased to 0.59). The experimental results also demonstrated that the multi-temporal satellite images have great potential for crop disease mapping at a regional scale. PMID:24691435
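
    A minimal sketch of the PLSR step alone (one of the four methods compared), fitted here to synthetic stand-in data; the feature layout, number of latent components and train/test split are assumptions, and the MTMF integration reported in the record is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical feature matrix: rows are surveyed fields, columns are
# multi-stage spectral features (e.g., band reflectances and indices from
# several acquisition dates); y is the surveyed disease index.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))
y = X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=120)   # synthetic stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=4)     # number of latent components is an assumption
pls.fit(X_tr, y_tr)
print("R2 on held-out fields:", r2_score(y_te, pls.predict(X_te).ravel()))
```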

  4. Multi-temporal LiDAR and Landsat quantification of fire-induced changes to forest structure

    Science.gov (United States)

    McCarley, T. Ryan; Kolden, Crystal A.; Vaillant, Nicole M.; Hudak, Andrew T.; Smith, Alistair M.S.; Wing, Brian M.; Kellogg, Bryce; Kreitler, Jason R.

    2017-01-01

    Measuring post-fire effects at landscape scales is critical to an ecological understanding of wildfire effects. Predominantly this is accomplished with either multi-spectral remote sensing data or through ground-based field sampling plots. While these methods are important, field data is usually limited to opportunistic post-fire observations, and spectral data often lacks validation with specific variables of change. Additional uncertainty remains regarding how best to account for environmental variables influencing fire effects (e.g., weather) for which observational data cannot easily be acquired, and whether pre-fire agents of change such as bark beetle and timber harvest impact model accuracy. This study quantifies wildfire effects by correlating changes in forest structure derived from multi-temporal Light Detection and Ranging (LiDAR) acquisitions to multi-temporal spectral changes captured by the Landsat Thematic Mapper and Operational Land Imager for the 2012 Pole Creek Fire in central Oregon. Spatial regression modeling was assessed as a methodology to account for spatial autocorrelation, and model consistency was quantified across areas impacted by pre-fire mountain pine beetle and timber harvest. The strongest relationship (pseudo-r2 = 0.86) was found with the LiDAR-derived estimate of canopy cover change. Relationships between percentage of LiDAR returns in forest strata and spectral indices generally increased in strength with strata height. Structural measurements made closer to the ground were not well correlated. The spatial regression approach improved all relationships, demonstrating its utility, but model performance declined across pre-fire agents of change, suggesting that such studies should stratify by pre-fire forest condition. This study establishes that spectral indices such as d74 and dNBR are most sensitive to wildfire-caused structural changes such as reduction in canopy cover and perform best when that structure has not been reduced pre-fire.

  5. Long-Term Large-Scale Bias-Adjusted Precipitation Estimates at High Spatial and Temporal Resolution Derived from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) Precipitation Reanalysis over CONUS

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Seo, D. J.; Kim, B.

    2014-12-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative (NMQ/Q2) based on the WSR-88D Next-generation Radar (Nexrad) network over Continental United States (CONUS) is nearly completed for the period covering from 2000 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial resolution for a 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Rain gauge networks such as the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), the Climate Reference Network (CRN), and the Global Historical Climatology Network - Daily (GHCN-D) are used to adjust for those biases and to merge with the radar only product to provide a multi-sensor estimate. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we are facing are the difficulties incorporating differing resolution and quality surface measurements to adjust gridded estimates of precipitation. Another challenge is the type of adjustment technique. After assessing the bias and applying reduction or elimination techniques, we are investigating the kriging method and its variants such as simple kriging (SK), ordinary kriging (OK), and conditional bias-penalized Kriging (CBPK) among others. In addition we hope to generate estimates of uncertainty for the gridded estimate. In this work the methodology is presented as well as a comparison between the radar-only product and the final multi-sensor QPE product. The comparison is performed at various time scales from the sub-hourly, to annual. In addition, comparisons over the same period with a suite of lower resolution QPEs derived from ground based radar

  6. Spatio-Temporal Series Remote Sensing Image Prediction Based on Multi-Dictionary Bayesian Fusion

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-11-01

    Full Text Available Contradictions in spatial resolution and temporal coverage emerge from earth observation remote sensing images due to limitations in technology and cost. Therefore, how to combine remote sensing images with low spatial yet high temporal resolution and those with high spatial yet low temporal resolution to construct images with both high spatial resolution and high temporal coverage has become an important problem, known as the spatio-temporal fusion problem, in both research and practice. A Multi-Dictionary Bayesian Spatio-Temporal Reflectance Fusion Model (MDBFM) has been proposed in this paper. First, multiple dictionaries from regions of different classes are trained. Second, a Bayesian framework is constructed to solve the dictionary selection problem. A pixel-dictionary likelihood function and a dictionary-dictionary prior function are constructed under the Bayesian framework. Third, remote sensing images before and after the middle moment are combined to predict images at the middle moment. Diverse shape and texture information is learned from different landscapes in multi-dictionary learning to help the dictionaries capture the distinctions between regions. The Bayesian framework makes full use of the prior information while the input image is classified. Experiments with one simulated dataset and two satellite datasets validate that the MDBFM is highly effective in both subjective and objective evaluation indexes. The results of the MDBFM show more precise details and a higher similarity with real images when dealing with both type changes and phenology changes.

  7. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    International Nuclear Information System (INIS)

    Viegas, F; Nairz, A; Goossens, L; Malon, D; Cranshaw, J; Dimitrov, G; Nowak, M; Gamboa, C; Gallas, E; Wong, A; Vinek, E

    2010-01-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  8. Long-term ground deformation patterns of Bucharest using multi-temporal InSAR and multivariate dynamic analyses: a possible transpressional system?

    Science.gov (United States)

    Armaş, Iuliana; Mendes, Diana A.; Popa, Răzvan-Gabriel; Gheorghe, Mihaela; Popovici, Diana

    2017-03-01

    The aim of this exploratory research is to capture spatial evolution patterns in the Bucharest metropolitan area using sets of single polarised synthetic aperture radar (SAR) satellite data and multi-temporal radar interferometry. Three sets of SAR data acquired during the years 1992-2010 from ERS-1/-2 and ENVISAT, and 2011-2014 from TerraSAR-X satellites were used in conjunction with the Small Baseline Subset (SBAS) and persistent scatterers (PS) high-resolution multi-temporal interferometry (InSAR) techniques to provide maps of line-of-sight displacements. The satellite-based remote sensing results were combined with results derived from classical methodologies (i.e., diachronic cartography) and field research to study possible trends in developments over former clay pits, landfill excavation sites, and industrial parks. The ground displacement trend patterns were analysed using several linear and nonlinear models, and techniques. Trends based on the estimated ground displacement are characterised by long-term memory, indicated by low noise Hurst exponents, which in the long-term form interesting attractors. We hypothesize these attractors to be tectonic stress fields generated by transpressional movements.

  9. Sparse Multi-Pitch and Panning Estimation of Stereophonic Signals

    DEFF Research Database (Denmark)

    Kronvall, Ted; Jakobsson, Andreas; Hansen, Martin Weiss

    2016-01-01

    In this paper, we propose a novel multi-pitch estimator for stereophonic mixtures, allowing for pitch estimation on multi-channel audio even if the amplitude and delay panning parameters are unknown. The presented method does not require prior knowledge of the number of sources present in the mix...

  10. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    Energy Technology Data Exchange (ETDEWEB)

    Viegas, F; Nairz, A; Goossens, L [CERN, CH-1211 Geneve 23 (Switzerland); Malon, D; Cranshaw, J [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Dimitrov, G [DESY, D-22603 Hamburg (Germany); Nowak, M; Gamboa, C [Brookhaven National Laboratory, PO Box 5000 Upton, NY 11973-5000 (United States); Gallas, E [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Wong, A [Triumf, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3 (Canada); Vinek, E [University of Vienna, Dr.-Karl-Lueger-Ring 1, 1010 Vienna (Austria)

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  11. Massively Parallel Sort-Merge Joins in Main Memory Multi-Core Database Systems

    OpenAIRE

    Martina-Cezara Albutiu, Alfons Kemper, Thomas Neumann

    2012-01-01

    Two emerging hardware trends will dominate the database system technology in the near future: increasing main memory capacities of several TB per server and massively parallel multi-core processing. Many algorithmic and control techniques in current database technology were devised for disk-based systems where I/O dominated the performance. In this work we take a new look at the well-known sort-merge join which, so far, has not been in the focus of research ...

  12. Temporal change in fragmentation of continental US forests

    Science.gov (United States)

    James D. Wickham; Kurt H. Riitters; Timothy G. Wade; Collin Homer

    2008-01-01

    Changes in forest ecosystem function and condition arise from changes in forest fragmentation. Previous studies estimated forest fragmentation for the continental United States (US). In this study, new temporal land-cover data from the National Land Cover Database (NLCD) were used to estimate changes in forest fragmentation at multiple scales for the continental US....

  13. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    Science.gov (United States)

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prosthesis and finger exoskeletons. However, in estimating multiple and a high number of degrees-of-freedom (DOF) kinematics from EMG, output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that considers inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANN) which is used by many of the related studies. We show that using a multi-output GP gives improved estimation compared to multi-output ANN and even dedicated or independent regression models.
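
    A minimal multi-output Gaussian Process sketch using an intrinsic coregionalisation model, in which a task-covariance matrix couples the output joints. The study's convolved multi-output full GP is more general, so the kernel choice, fixed hyperparameters and toy data below are assumptions.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel between row vectors of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def icm_gp_predict(X, Y, X_star, B, lengthscale=1.0, noise=1e-2):
    """Multi-output GP prediction with an intrinsic coregionalisation model.

    X: (N, D) inputs (e.g., EMG features), Y: (N, T) outputs (e.g., T finger
    joint angles), B: (T, T) positive semi-definite task-covariance matrix
    encoding coupling between joints. The joint covariance is kron(B, Kx).
    """
    N, T = Y.shape
    Kx = rbf(X, X, lengthscale)
    K = np.kron(B, Kx) + noise * np.eye(N * T)
    Ks = np.kron(B, rbf(X_star, X, lengthscale))   # (T*M, T*N) cross-covariance
    y_stacked = Y.T.reshape(-1)                     # task-major stacking
    alpha = np.linalg.solve(K, y_stacked)
    return (Ks @ alpha).reshape(T, -1).T            # posterior mean, shape (M, T)

# Toy usage: two coupled outputs driven by a shared 1-D latent input.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
f = np.sin(X[:, 0])
Y = np.c_[f, 0.8 * f + 0.1] + 0.05 * rng.normal(size=(40, 2))
B = np.array([[1.0, 0.8], [0.8, 1.0]])              # assumed joint coupling
X_star = np.linspace(-3, 3, 50)[:, None]
Y_hat = icm_gp_predict(X, Y, X_star, B)
```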

  14. A Spatio-Temporal Building Exposure Database and Information Life-Cycle Management Solution

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2017-04-01

    Full Text Available With an ever-increasing volume and complexity of data collected from a variety of sources, the efficient management of geospatial information becomes a key topic in disaster risk management. For example, the representation of assets exposed to natural disasters is subjected to changes throughout the different phases of risk management reaching from pre-disaster mitigation to the response after an event and the long-term recovery of affected assets. Spatio-temporal changes need to be integrated into a sound conceptual and technological framework able to deal with data coming from different sources, at varying scales, and changing in space and time. Especially managing the information life-cycle, the integration of heterogeneous information and the distributed versioning and release of geospatial information are important topics that need to become essential parts of modern exposure modelling solutions. The main purpose of this study is to provide a conceptual and technological framework to tackle the requirements implied by disaster risk management for describing exposed assets in space and time. An information life-cycle management solution is proposed, based on a relational spatio-temporal database model coupled with Git and GeoGig repositories for distributed versioning. Two application scenarios focusing on the modelling of residential building stocks are presented to show the capabilities of the implemented solution. A prototype database model is shared on GitHub along with the necessary scenario data.

  15. Individual-based versus aggregate meta-analysis in multi-database studies of pregnancy outcomes

    DEFF Research Database (Denmark)

    Selmer, Randi; Haglund, Bengt; Furu, Kari

    2016-01-01

    Purpose: Compare analyses of a pooled data set on the individual level with aggregate meta-analysis in a multi-database study. Methods: We reanalysed data on 2.3 million births in a Nordic register based cohort study. We compared estimated odds ratios (OR) for the effect of selective serotonin...... covariates in the pooled data set, and 1.53 (1.19–1.96) after country-optimized adjustment. Country-specific adjusted analyses at the substance level were not possible for RVOTO. Conclusion: Results of fixed effects meta-analysis and individual-based analyses of a pooled dataset were similar in this study...... reuptake inhibitors (SSRI) and venlafaxine use in pregnancy on any cardiovascular birth defect and the rare outcome right ventricular outflow tract obstructions (RVOTO). Common covariates included maternal age, calendar year, birth order, maternal diabetes, and co-medication. Additional covariates were...

  16. 3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Shunping Ji

    2018-01-01

    Full Text Available This study describes a novel three-dimensional (3D) convolutional neural network (CNN) based method that automatically classifies crops from spatio-temporal remote sensing images. First, a 3D kernel is designed according to the structure of multi-spectral multi-temporal remote sensing data. Second, the 3D CNN framework with fine-tuned parameters is designed for training 3D crop samples and learning spatio-temporal discriminative representations, with the full crop growth cycles being preserved. In addition, we introduce an active learning strategy to the CNN model to improve labelling accuracy up to a required threshold with maximum efficiency. Finally, experiments are carried out to test the advantage of the 3D CNN, in comparison to the two-dimensional (2D) CNN and other conventional methods. Our experiments show that the 3D CNN is especially suitable for characterizing the dynamics of crop growth and outperformed the other mainstream methods.
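
    A minimal PyTorch sketch of a 3D CNN over (bands, time, height, width) patches; the layer sizes, number of bands, time steps and classes are illustrative assumptions rather than the architecture used in the study.

```python
import torch
import torch.nn as nn

class Crop3DCNN(nn.Module):
    """Minimal 3-D CNN for spatio-temporal crop classification.

    Input layout: (batch, spectral_bands, time_steps, height, width), so the
    3-D kernels convolve jointly over time and space while the spectral bands
    act as input channels.
    """
    def __init__(self, n_bands=4, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),     # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# A batch of 8 patches: 4 bands, 10 acquisition dates, 16x16 pixels.
model = Crop3DCNN()
logits = model(torch.randn(8, 4, 10, 16, 16))
print(logits.shape)   # torch.Size([8, 6])
```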

  17. Multi-Temporal vs. Hyper-Spectral Imaging for Future Land Imaging at 30 m

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to determine the information content of multi-temporal land imaging in discrete Landsat-like spectral bands at 30 m with a 360 km swath width and compare...

  18. Massively Parallel Sort-Merge Joins in Main Memory Multi-Core Database Systems

    OpenAIRE

    Albutiu, Martina-Cezara; Kemper, Alfons; Neumann, Thomas

    2012-01-01

    Two emerging hardware trends will dominate the database system technology in the near future: increasing main memory capacities of several TB per server and massively parallel multi-core processing. Many algorithmic and control techniques in current database technology were devised for disk-based systems where I/O dominated the performance. In this work we take a new look at the well-known sort-merge join which, so far, has not been in the focus of research in scalable massively parallel mult...

  19. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    Science.gov (United States)

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitudes. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students’ overall performance patterns between the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large scale galaxy structure.

  20. Long-Term Quantitative Precipitation Estimates (QPE) at High Spatial and Temporal Resolution over CONUS: Bias-Adjustment of the Radar-Only National Mosaic and Multi-sensor QPE (NMQ/Q2) Precipitation Reanalysis (2001-2012)

    Science.gov (United States)

    Prat, Olivier; Nelson, Brian; Stevens, Scott; Seo, Dong-Jun; Kim, Beomgeun

    2015-04-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative (NMQ/Q2) based on the WSR-88D Next-generation Radar (NEXRAD) network over Continental United States (CONUS) is completed for the period covering from 2001 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial resolution for a 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Several in-situ datasets are available to assess the biases of the radar-only product and to adjust for those biases to provide a multi-sensor QPE. The rain gauge networks that are used such as the Global Historical Climatology Network-Daily (GHCN-D), the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), and the Climate Reference Network (CRN), have different spatial density and temporal resolution. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we are facing are the difficulties incorporating differing resolution and quality surface measurements to adjust gridded estimates of precipitation. Another challenge is the type of adjustment technique. The objective of this work is threefold. First, we investigate how the different in-situ networks can impact the precipitation estimates as a function of the spatial density, sensor type, and temporal resolution. Second, we assess conditional and un-conditional biases of the radar-only QPE for various time scales (daily, hourly, 5-min) using in-situ precipitation observations. Finally, after assessing the bias and applying reduction or elimination techniques, we are using a unique in-situ dataset merging the different RG networks (CRN, ASOS, HADS, GHCN-D) to

  1. On the relationship between multi-channel envelope and temporal fine structure

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel; Decorsiere, Remi Julien Blaise; Dau, Torsten

    2011-01-01

    The envelope of a signal is broadly defined as the slow changes in time of the signal, whereas the temporal fine structure (TFS) consists of the fast changes in time, i.e. the carrier wave(s) of the signal. The focus of this paper is on envelope and TFS in multi-channel systems. We discuss the differenc...

  2. Validation of temporal and spatial consistency of facility- and speed-specific vehicle-specific power distributions for emission estimation: A case study in Beijing, China.

    Science.gov (United States)

    Zhai, Zhiqiang; Song, Guohua; Lu, Hongyu; He, Weinan; Yu, Lei

    2017-09-01

    Vehicle-specific power (VSP) has been found to be highly correlated with vehicle emissions. It is used in many studies on emission modeling such as the MOVES (Motor Vehicle Emissions Simulator) model. The existing studies develop specific VSP distributions (or OpMode distributions in MOVES) for different road types and various average speeds to represent the vehicle operating modes on road. However, it is still not clear if the facility- and speed-specific VSP distributions are consistent temporally and spatially. For instance, is it necessary to update the database of the VSP distributions in the emission model periodically? Are the VSP distributions developed in the city central business district (CBD) area applicable to its suburb area? In this context, this study examined the temporal and spatial consistency of the facility- and speed-specific VSP distributions in Beijing. The VSP distributions in different years and in different areas were developed, based on real-world vehicle activity data. The root mean square error (RMSE) is employed to quantify the difference between the VSP distributions. The maximum differences of the VSP distributions between different years and between different areas are approximately 20% of that between different road types. The analysis of the carbon dioxide (CO2) emission factor indicates that the temporal and spatial differences of the VSP distributions have no significant impact on vehicle emission estimation, with a relative error of less than 3%. The temporal and spatial differences have no significant impact on the development of the facility- and speed-specific VSP distributions for vehicle emission estimation. The database of the specific VSP distributions in VSP-based emission models can therefore be maintained over time. Thus, it is unnecessary to update the database regularly, and it is reliable to use historical vehicle activity data to forecast future emissions. In one city, the areas with less data can still
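
    A small sketch of how per-second VSP values, a binned VSP distribution and an RMSE comparison might be computed. The coefficients below are the commonly cited generic light-duty values (MOVES uses its own source-type-specific parameters), and the bin edges are assumptions modelled on the description above.

```python
import numpy as np

def vsp_light_duty(speed_ms, accel_ms2, grade=0.0):
    """Vehicle-specific power (kW/tonne) with commonly cited generic
    light-duty coefficients; treat the constants as illustrative assumptions."""
    v = np.asarray(speed_ms, dtype=float)
    a = np.asarray(accel_ms2, dtype=float)
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

def vsp_distribution(speed_ms, accel_ms2, bin_edges=np.arange(-20, 21, 1.0)):
    """Fraction of operating time in each 1 kW/tonne VSP bin, i.e. the kind of
    facility- and speed-specific distribution compared in the study."""
    vsp = vsp_light_duty(speed_ms, accel_ms2)
    hist, _ = np.histogram(np.clip(vsp, bin_edges[0], bin_edges[-1]), bins=bin_edges)
    return hist / hist.sum()

def distribution_rmse(p, q):
    """Root mean square error between two VSP distributions, the measure used
    to quantify temporal/spatial differences."""
    p, q = np.asarray(p), np.asarray(q)
    return np.sqrt(np.mean((p - q) ** 2))
```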

  3. Estimation of effective connectivity using multi-layer perceptron artificial neural network.

    Science.gov (United States)

    Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman

    2018-02-01

    Studies on interactions between brain regions estimate effective connectivity, (usually) based on the causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mapping and to learn from training examples without the need of detailed knowledge of the underlying system. At any time instant, the past samples of data are placed in the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, the measure of "Causality coefficient" is defined based on the network structure, the connecting weights and the parameters of the hidden-layer activation function. Simulation analysis demonstrates that the method, called "CREANN" (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method shows robustness with respect to the noise level of the data. Furthermore, the estimations are not significantly influenced by the model order (considered time-lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can show changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate the causal relationship among brain signals.
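
    A simplified, Granger-style surrogate for the idea behind CREANN: an MLP predicts a target signal from lagged samples, and the directed influence is scored by how much including the other signal's past reduces prediction error. This is not the weight-based causality coefficient defined in the paper; the lag order, network size and train/test split are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_matrix(signals, lags):
    """Stack the past `lags` samples of each signal as regression features."""
    signals = np.asarray(signals)                  # shape (n_samples, n_channels)
    n = len(signals)
    X = np.column_stack([signals[lags - k - 1:n - k - 1] for k in range(lags)])
    y = signals[lags:]
    return X, y

def ann_granger_score(x, y, lags=5, hidden=(16,), seed=0):
    """Directed influence x -> y: error reduction when x's past is included."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    full, target = lagged_matrix(np.c_[y, x], lags)
    restricted, _ = lagged_matrix(y[:, None], lags)
    y_t = target[:, 0]
    split = int(0.7 * len(y_t))                    # simple temporal hold-out
    mlp_full = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=seed)
    mlp_rest = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=seed)
    mlp_full.fit(full[:split], y_t[:split])
    mlp_rest.fit(restricted[:split], y_t[:split])
    err_full = np.mean((y_t[split:] - mlp_full.predict(full[split:])) ** 2)
    err_rest = np.mean((y_t[split:] - mlp_rest.predict(restricted[split:])) ** 2)
    return np.log(err_rest / err_full)             # > 0 suggests x helps predict y
```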

  4. Estimation of National Colorectal-Cancer Incidence Using Claims Databases

    International Nuclear Information System (INIS)

    Quantin, C.; Benzenine, E.; Hagi, M.; Auverlot, B.; Cottenet, J.; Binquet, M.; Compain, D.

    2012-01-01

    The aim of the study was to assess the accuracy of the colorectal-cancer incidence estimated from administrative data. Methods. We selected potential incident colorectal-cancer cases in 2004-2005 French administrative data, using two alternative algorithms. The first was based only on diagnostic and procedure codes, whereas the second considered the past history of the patient. Results of both methods were assessed against two corresponding local cancer registries, acting as “gold standards.” We then constructed a multivariable regression model to estimate the corrected total number of incident colorectal-cancer cases from the whole national administrative database. Results. The first algorithm provided an estimated local incidence very close to that given by the regional registries (646 versus 645 incident cases) and had good sensitivity and positive predictive values (about 75% for both). The second algorithm overestimated the incidence by about 50% and had a poor positive predictive value of about 60%. The estimation of national incidence obtained by the first algorithm differed from that observed in 14 registries by only 2.34%. Conclusion. This study shows the usefulness of administrative databases for countries with no national cancer registry and suggests a method for correcting the estimates provided by these data.

  5. Clustering of Multi-Temporal Fully Polarimetric L-Band SAR Data for Agricultural Land Cover Mapping

    Science.gov (United States)

    Tamiminia, H.; Homayouni, S.; Safari, A.

    2015-12-01

    Recently, the unique capabilities of Polarimetric Synthetic Aperture Radar (PolSAR) sensors make them an important and efficient tool for natural resources and environmental applications, such as land cover and crop classification. The aim of this paper is to classify multi-temporal full polarimetric SAR data using a kernel-based fuzzy C-means clustering method over an agricultural region. This method starts by transforming the input data into a higher dimensional space using kernel functions and then clustering them in the feature space. The feature space, due to its inherent properties, has the ability to take into account the nonlinear and complex nature of polarimetric data. Several SAR polarimetric features were extracted using target decomposition algorithms. Features from the Cloude-Pottier, Freeman-Durden and Yamaguchi algorithms were used as inputs for the clustering. This method was applied to multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Canada, during June and July 2012. The results demonstrate the efficiency of this approach with respect to classical methods. In addition, using multi-temporal data in the clustering process helped to investigate the phenological cycle of plants and significantly improved the performance of agricultural land cover mapping.
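
    A compact kernel fuzzy C-means sketch with an RBF kernel, applied to rows of stacked polarimetric decomposition features. The kernel, its bandwidth, the fuzzifier m and the initialisation are assumptions, and the update rules follow the commonly used kernelised FCM formulation rather than any implementation detail from the study.

```python
import numpy as np

def kernel_fuzzy_cmeans(X, n_clusters=4, m=2.0, gamma=0.5, n_iter=100, seed=0):
    """Kernel fuzzy C-means with an RBF kernel (cluster centres kept in input space).

    X holds one row per pixel of polarimetric decomposition features
    (e.g., Cloude-Pottier H/A/alpha, Freeman-Durden and Yamaguchi powers,
    possibly stacked over acquisition dates).
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # RBF kernel between samples and centres; kernelised squared distance
        # d^2 = 2 * (1 - K) for a normalised RBF kernel.
        k = np.exp(-gamma * ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1))
        d2 = 2.0 * (1.0 - k) + 1e-12
        u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)          # memberships, shape (n, c)
        w = (u ** m) * k                            # weights for the centre update
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centres
```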

  6. High Spatio-Temporal Resolution Bathymetry Estimation and Morphology

    Science.gov (United States)

    Bergsma, E. W. J.; Conley, D. C.; Davidson, M. A.; O'Hare, T. J.

    2015-12-01

    In recent years, bathymetry estimates using video images have become increasingly accurate. With the cBathy code (Holman et al., 2013) fully operational, bathymetry results with 0.5 m accuracy have been regularly obtained at Duck, USA. cBathy is based on observations of the dominant frequencies and wavelengths of surface wave motions and estimates the depth (and hence allows inference of bathymetry profiles) based on linear wave theory. Despite the good performance at Duck, large discrepancies were found related to tidal elevation and camera height (Bergsma et al., 2014) and at the camera boundaries. A tide-dependent floating pixel and a camera-boundary solution have been proposed to overcome these issues (Bergsma et al., under review). The video-data collection is set to estimate depths hourly on a grid with a resolution of the order of 10 x 25 metres. Here, the application of cBathy at Porthtowan in the South-West of England is presented. Hourly depth estimates are combined and analysed over a period of 1.5 years (2013-2014). In this work the focus is on the sub-tidal region, where the best cBathy results are achieved. The morphology of the sub-tidal bar is tracked with high spatio-temporal resolution on short and longer time scales. Furthermore, the impact of storms and the resulting reset (sudden and large changes in bathymetry) of the sub-tidal area is clearly captured by the depth estimations. This application shows that the high spatio-temporal resolution of cBathy makes it a powerful tool for coastal research and coastal zone management.
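
    The depth-inversion step at the heart of such video-based estimators can be written directly from the linear dispersion relation; the sketch below solves w^2 = g k tanh(k h) for depth given an observed frequency and wavenumber, and represents only this single step, not the full cBathy analysis.

```python
import numpy as np

def depth_from_dispersion(frequency_hz, wavenumber_rad_m, g=9.81):
    """Invert the linear dispersion relation, w^2 = g k tanh(k h), for depth h.

    The frequency and wavenumber would come from the dominant wave motions
    observed in the video imagery.
    """
    w = 2.0 * np.pi * np.atleast_1d(np.asarray(frequency_hz, dtype=float))
    k = np.atleast_1d(np.asarray(wavenumber_rad_m, dtype=float))
    ratio = w ** 2 / (g * k)
    # tanh(k h) = ratio has a finite-depth solution only for 0 < ratio < 1;
    # ratio >= 1 corresponds to deep water, where depth cannot be resolved.
    h = np.full_like(ratio, np.nan)
    valid = (ratio > 0.0) & (ratio < 1.0)
    h[valid] = np.arctanh(ratio[valid]) / k[valid]
    return h

# Example: a 0.1 Hz swell with an observed 60 m wavelength (k = 2*pi/60)
# implies a depth of roughly 4 m.
print(depth_from_dispersion(0.1, 2 * np.pi / 60.0))
```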

  7. Temporal and Fine-Grained Pedestrian Action Recognition on Driving Recorder Database.

    Science.gov (United States)

    Kataoka, Hirokatsu; Satoh, Yutaka; Aoki, Yoshimitsu; Oikawa, Shoko; Matsui, Yasuhiro

    2018-02-20

    The paper presents an emerging issue of fine-grained pedestrian action recognition that induces an advanced pre-crash safety system to estimate a pedestrian intention in advance. The fine-grained pedestrian actions include visually slight differences (e.g., walking straight and crossing), which are difficult to distinguish from each other. It is believed that fine-grained action recognition enables pedestrian intention estimation for helpful advanced driver-assistance systems (ADAS). The following difficulties have been studied to achieve fine-grained and accurate pedestrian action recognition: (i) In order to analyze the fine-grained motion of a pedestrian appearance in the vehicle-mounted drive recorder, a method to describe subtle changes in motion characteristics occurring in a short time is necessary; (ii) even when the background moves greatly due to the driving of the vehicle, it is necessary to detect changes in subtle motion of the pedestrian; (iii) the collection of large-scale fine-grained actions is very difficult, and therefore the focus should be on a relatively small database. We find out how to learn an effective recognition model with only a small-scale database. Here, we have thoroughly evaluated several types of configurations to explore an effective approach to fine-grained pedestrian action recognition without a large-scale database. Moreover, two different datasets have been collected in order to raise the issue. Finally, our proposal attained 91.01% on the National Traffic Science and Environment Laboratory database (NTSEL) and 53.23% on the near-miss driving recorder database (NDRDB). The paper has improved +8.28% and +6.53% over baseline two-stream fusion convnets.

  8. Temporal and Fine-Grained Pedestrian Action Recognition on Driving Recorder Database

    Directory of Open Access Journals (Sweden)

    Hirokatsu Kataoka

    2018-02-01

    Full Text Available The paper presents an emerging issue of fine-grained pedestrian action recognition that induces an advanced pre-crash safety system to estimate a pedestrian intention in advance. The fine-grained pedestrian actions include visually slight differences (e.g., walking straight and crossing), which are difficult to distinguish from each other. It is believed that fine-grained action recognition enables pedestrian intention estimation for helpful advanced driver-assistance systems (ADAS). The following difficulties have been studied to achieve fine-grained and accurate pedestrian action recognition: (i) In order to analyze the fine-grained motion of a pedestrian appearance in the vehicle-mounted drive recorder, a method to describe subtle changes in motion characteristics occurring in a short time is necessary; (ii) even when the background moves greatly due to the driving of the vehicle, it is necessary to detect changes in subtle motion of the pedestrian; (iii) the collection of large-scale fine-grained actions is very difficult, and therefore the focus should be on a relatively small database. We find out how to learn an effective recognition model with only a small-scale database. Here, we have thoroughly evaluated several types of configurations to explore an effective approach to fine-grained pedestrian action recognition without a large-scale database. Moreover, two different datasets have been collected in order to raise the issue. Finally, our proposal attained 91.01% on the National Traffic Science and Environment Laboratory database (NTSEL) and 53.23% on the near-miss driving recorder database (NDRDB). The paper has improved +8.28% and +6.53% over baseline two-stream fusion convnets.

  9. Mapping Multi-Cropped Land Use to Estimate Water Demand Using the California Pesticide Reporting Database

    Science.gov (United States)

    Henson, W.; Baillie, M. N.; Martin, D.

    2017-12-01

    Detailed and dynamic land-use data is one of the biggest data deficiencies facing food and water security issues. Better land-use data results in improved integrated hydrologic models that are needed to look at the feedback between land and water use, specifically for adequately representing changes and dynamics in rainfall-runoff, urban and agricultural water demands, and surface fluxes of water (e.g., evapotranspiration, runoff, and infiltration). Currently, land-use data typically are compiled from annual (e.g., Crop Scape) or multi-year composites if mapped at all. While this approach provides information about interannual land-use practices, it does not capture the dynamic changes in highly developed agricultural lands prevalent in California agriculture such as (1) dynamic land-use changes from high frequency multi-crop rotations and (2) uncertainty in sub-annual crop distribution, planting times, and cropped areas. California has collected spatially distributed data for agricultural pesticide use since 1974 through the California Pesticide Information Portal (CalPIP). A method leveraging the CalPIP database has been developed to provide vital information about dynamic agricultural land use (e.g., crop distribution and planting times) and water demand issues in Salinas Valley, California, along the central coast. This 7 billion dollar/year agricultural area produces up to 50% of U.S. lettuce and broccoli. Therefore, effective and sustainable water resource development in the area must balance the needs of this essential industry, other beneficial uses, and the environment. This new tool provides a way to provide more dynamic crop data in hydrologic models. While the current application focuses on the Salinas Valley, the methods are extensible to all of California and other states with similar pesticide reporting. The improvements in representing variability in crop patterns and associated water demands increase our understanding of land-use change and

  10. Large-Area, High-Resolution Tree Cover Mapping with Multi-Temporal SPOT5 Imagery, New South Wales, Australia

    Directory of Open Access Journals (Sweden)

    Adrian Fisher

    2016-06-01

    Full Text Available Tree cover maps are used for many purposes, such as vegetation mapping, habitat connectivity and fragmentation studies. Small remnant patches of native vegetation are recognised as ecologically important, yet they are underestimated in remote sensing products derived from Landsat. High spatial resolution sensors are capable of mapping small patches of trees, but their use in large-area mapping has been limited. In this study, multi-temporal Satellite pour l’Observation de la Terre 5 (SPOT5) High Resolution Geometrical data was pan-sharpened to 5 m resolution and used to map tree cover for the Australian state of New South Wales (NSW), an area of over 800,000 km2. Complete coverages of SPOT5 panchromatic and multispectral data over NSW were acquired during four consecutive summers (2008–2011) for a total of 1256 images. After pre-processing, the imagery was used to model foliage projective cover (FPC), a measure of tree canopy density commonly used in Australia. The multi-temporal imagery, FPC models and 26,579 training pixels were used in a binomial logistic regression model to estimate the probability of each pixel containing trees. The probability images were classified into a binary map of tree cover using local thresholds, and then visually edited to reduce errors. The final tree map was then attributed with the mean FPC value from the multi-temporal imagery. Validation of the binary map based on visually assessed high resolution reference imagery revealed an overall accuracy of 88% (±0.51% standard error), while comparison against airborne lidar derived data also resulted in an overall accuracy of 88%. A preliminary assessment of the FPC map by comparing against 76 field measurements showed a very good agreement (r2 = 0.90) with a root mean square error of 8.57%, although this may not be representative due to the opportunistic sampling design. The map represents a regionally consistent and locally relevant record of tree cover for NSW, and
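
    A minimal sketch of the probability-of-tree mapping step with a binomial logistic regression; the predictor layout, synthetic training data and the 0.5 threshold are assumptions standing in for the multi-temporal SPOT5 features, 26,579 training pixels and locally tuned thresholds described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: rows are labelled pixels, columns are the
# multi-temporal predictors (e.g., pan-sharpened bands from several summers
# plus the modelled FPC); labels are 1 = tree, 0 = not tree.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 9))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Probability-of-tree values for new pixels; a threshold (0.5 here, but
# tuned locally in the study) converts them to a binary tree-cover map.
X_new = rng.normal(size=(10, 9))
p_tree = model.predict_proba(X_new)[:, 1]
tree_map = p_tree > 0.5
```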

  11. Multi-temporal Soil Erosion Modelling over the Mt Kenya Region with Multi-Sensor Earth Observation Data

    Science.gov (United States)

    Symeonakis, Elias; Higginbottom, Thomas

    2015-04-01

    Accelerated soil erosion is the principal cause of soil degradation across the world. In Africa, it is seen as a serious problem creating negative impacts on agricultural production, infrastructure and water quality. Regarding the Mt Kenya region specifically, soil erosion is a serious threat mainly due to unplanned and unsustainable practices linked to tourism, agriculture and rapid population growth. The soil types roughly correspond to different altitudinal zones and are generally very fertile due to their volcanic origin. Some of them have been created by eroding glaciers while others are due to millions of years of fluvial erosion. The soils on the mountain are easily eroded once exposed: when vegetation is removed, whether by animals or humans, the soil quickly erodes down to bedrock, as tourists wear away paths and local people clear large swaths of forested land for agriculture, mostly illegally. It is imperative, therefore, that a soil erosion monitoring system for the Mt Kenya region is in place in order to understand the magnitude of, and be able to respond to, the increasing number of demands on this renewable resource. In this paper, we employ a simple regional-scale soil erosion modelling framework based on the Thornes model and suggest an operational methodology for quantifying and monitoring water runoff and soil erosion using multi-sensor and multi-temporal remote sensing data in a GIS framework. We compare the estimates of this study with general data on the severity of soil erosion over Kenya and with measured rates of soil loss at different locations over the area of study. The results show that the measured and estimated rates of erosion are generally similar and within the same order of magnitude. They also show that, over recent years, erosion rates have been increasing in large parts of the region at an alarming rate, and that mitigation measures are needed to reverse the negative effects of uncontrolled socio-economic practices.
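
    A sketch of a Thornes-type erosion estimate as commonly parameterised in regional remote-sensing studies; the coefficient k, the exponents and the vegetation decay constant below are literature-derived assumptions and would need to match the calibration actually adopted for the Mt Kenya work.

```python
import numpy as np

def thornes_erosion(runoff_mm, slope, veg_cover_pct, k=0.05):
    """Thornes-type soil erosion estimate.

    E = k * OF^2 * s^1.67 * exp(-0.07 * V), with OF the overland flow (mm),
    s the slope gradient, and V the vegetation cover in percent. The
    coefficient k, the exponents and the decay constant are commonly cited
    values, used here as assumptions only.
    """
    of = np.asarray(runoff_mm, dtype=float)
    s = np.asarray(slope, dtype=float)
    v = np.asarray(veg_cover_pct, dtype=float)
    return k * of ** 2 * s ** 1.67 * np.exp(-0.07 * v)

# Per-pixel example: 25 mm of monthly overland flow, 20% slope, 30% cover.
print(thornes_erosion(25.0, 0.20, 30.0))
```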

  12. Multi Scale Multi Temporal Near Real Time Approach for Volcanic Eruptions monitoring, Test Case: Mt Etna eruption 2017

    Science.gov (United States)

    Buongiorno, M. F.; Silvestri, M.; Musacchio, M.

    2017-12-01

    In this work a complete processing chain, from the detection of the beginning of an eruption to the estimation of lava flow temperature on active volcanoes using remote sensing data, is presented, showing the results for the Mt. Etna eruption of March 2017. The early detection of a new eruption is based on the capabilities of geostationary, very low spatial resolution satellites (3 x 3 km at nadir); the hot spot/lava flow evolution is derived from S2 polar medium/high spatial resolution data (20 x 20 m), while the surface temperature is estimated from polar medium/low spatial resolution data such as L8, ASTER and S3 (from 90 m up to 1 km). This approach merges two outcomes: results derived from activities performed for monitoring purposes within INGV R&D, and results obtained by the ESA-funded Geohazards Exploitation Platform (GEP) project, aimed at the development of a shared platform for providing services based on EO data. Because of the variety of phenomena to be analyzed, a multi-temporal multi-scale approach has been used to implement suitable and robust algorithms for the different sensors. With the exception of Sentinel 2 (MSI) data, for which the algorithm used is based on NIR-SWIR bands, we exploit the MIR-TIR channels of L8, ASTER, S3 and SEVIRI for automatically generating the surface thermal state analysis. The developed procedure produces time series data and allows information to be extracted from each co-registered pixel, highlighting variations of temperature within specific areas. The final goal is to implement an easy tool which enables scientists and users to extract valuable information from satellite time series at different scales produced by ESA and EUMETSAT in the frame of Europe's Copernicus program and other Earth observation satellite programs such as LANDSAT (USGS) and GOES (NOAA).

  13. Shoreline change assessment using multi-temporal satellite images: a case study of Lake Sapanca, NW Turkey.

    Science.gov (United States)

    Duru, Umit

    2017-08-01

    The research summarized here determines historical shoreline changes along Lake Sapanca by using Remote Sensing (RS) and Geographical Information Systems (GIS). Six multi-temporal satellite images of the Landsat Multispectral Scanner (L1-5 MSS), Enhanced Thematic Mapper Plus (L7 ETM+), and Operational Land Imager (L8 OLI) sensors, covering the period between 17 June 1975 and 15 July 2016, were used to monitor shoreline positions and estimate change rates along the coastal zone. After pre-processing routines, the Normalized Difference Water Index (NDWI), Modified Normalized Difference Water Index (MNDWI), and supervised classification techniques were utilized to extract six different shorelines. Digital Shoreline Analysis System (DSAS), a toolbox that enables transect-based computations of shoreline displacement, was used to compute historical shoreline change rates. The average rate of shoreline change for the entire coast was 2.7 m/year of progradation with an uncertainty of 0.2 m/year. While the greater part of the lake shoreline remained stable, the study concluded that the easterly, westerly and deltaic coasts have been more vulnerable to shoreline displacements over the last four decades. The study also reveals that anthropogenic activities, more specifically over-extraction of freshwater from the lake, cyclic variation in rainfall, and deposition of sediment transported by the surrounding creeks dominantly control spatiotemporal shoreline changes in the region. Monitoring shoreline changes using multi-temporal satellite images is a significant component of coastal decision-making and management.
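    For readers unfamiliar with the indices named above, the sketch below illustrates how NDWI/MNDWI water masks and a simple end-point rate of shoreline change, similar in spirit to what DSAS computes per transect, can be derived. The band values, the zero threshold and the sign convention are assumptions for illustration; this is not the study's actual processing chain.

      import numpy as np

      def ndwi(green, nir):
          # McFeeters NDWI: positive values generally indicate open water
          return (green - nir) / (green + nir + 1e-9)

      def mndwi(green, swir):
          # Modified NDWI (Xu): SWIR replaces NIR to better suppress built-up areas
          return (green - swir) / (green + swir + 1e-9)

      def water_mask(index, threshold=0.0):
          return index > threshold

      def end_point_rate(pos_old_m, pos_new_m, years_apart):
          # DSAS-style End Point Rate along one transect (m/year);
          # positive = progradation, negative = retreat (sign convention assumed)
          return (pos_new_m - pos_old_m) / years_apart

      # Illustrative reflectance values and transect positions only
      g, n, s = np.array([0.12]), np.array([0.04]), np.array([0.02])
      print(water_mask(ndwi(g, n)), water_mask(mndwi(g, s)))
      print(end_point_rate(105.0, 215.0, 41.0))  # roughly 2.7 m/year, as in the abstract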

  14. Fatigue Damage Estimation and Data-based Control for Wind Turbines

    DEFF Research Database (Denmark)

    Barradas Berglind, Jose de Jesus; Wisniewski, Rafal; Soltani, Mohsen

    2015-01-01

    based on hysteresis operators, which can be used in control loops. The authors propose a data-based model predictive control (MPC) strategy that incorporates an online fatigue estimation method through the objective function, where the ultimate goal in mind is to reduce the fatigue damage of the wind......The focus of this work is on fatigue estimation and data-based controller design for wind turbines. The main purpose is to include a model of the fatigue damage of the wind turbine components in the controller design and synthesis process. This study addresses an online fatigue estimation method...... turbine components. The outcome is an adaptive or self-tuning MPC strategy for wind turbine fatigue damage reduction, which relies on parameter identification on previous measurement data. The results of the proposed strategy are compared with a baseline model predictive controller....

  15. Estimating spatio-temporal dynamics of size-structured populations

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Thygesen, Uffe Høgsbro; Andersen, Ken Haste

    2014-01-01

    with simple stock dynamics, to estimate simultaneously how size distributions and spatial distributions develop in time. We demonstrate the method for a cod population sampled by trawl surveys. Particular attention is paid to correlation between size classes within each trawl haul due to clustering...... of individuals with similar size. The model estimates growth, mortality and reproduction, after which any aspect of size-structure, spatio-temporal population dynamics, as well as the sampling process can be probed. This is illustrated by two applications: 1) tracking the spatial movements of a single cohort...

  16. A New Hybrid Spatio-temporal Model for Estimating Daily Multi-year PM2.5 Concentrations Across Northeastern USA Using High Resolution Aerosol Optical Depth Data

    Science.gov (United States)

    Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel

    2014-01-01

    The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter PM2.5 for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross validated models to predict daily PM2.5 at a 1 x 1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1 x 1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R2 = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R2 = 0.87, R2 = 0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions
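    As a rough illustration of the day-specific mixed-model structure described above (day-level random intercepts with fixed and random AOD and temperature slopes), the following statsmodels sketch fits a comparable model on a hypothetical calibration table. The column names and the data file are assumptions, and this is not the authors' code; the gap-filling and 200 m residual stages are omitted.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical calibration table: one row per monitor-day with columns
      # pm25 (ug/m3), aod (unitless, 1 km MAIAC), temp (deg C), day (date string)
      df = pd.read_csv("pm25_aod_calibration.csv")

      # Day-specific random intercept plus random AOD and temperature slopes,
      # alongside the fixed AOD and temperature effects.
      model = smf.mixedlm("pm25 ~ aod + temp", data=df,
                          groups=df["day"], re_formula="~aod + temp")
      result = model.fit()
      print(result.summary())

      # Stage-one predictions where AOD is observed; predicting grid cells with
      # missing AOD (the paper's generalized additive mixed models) is not shown.
      df["pm25_hat"] = result.fittedvalues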

  17. Quantifying auditory temporal stability in a large database of recorded music.

    Science.gov (United States)

    Ellis, Robert J; Duan, Zhiyan; Wang, Ye

    2014-01-01

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature"), none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS)--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  18. Predictive 3D search algorithm for multi-frame motion estimation

    NARCIS (Netherlands)

    Lim, Hong Yin; Kassim, A.A.; With, de P.H.N.

    2008-01-01

    Multi-frame motion estimation introduced in recent video standards such as H.264/AVC, helps to improve the rate-distortion performance and hence the video quality. This, however, comes at the expense of having a much higher computational complexity. In multi-frame motion estimation, there exists

  19. Improving spatio-temporal model estimation of satellite-derived PM2.5 concentrations: Implications for public health

    Science.gov (United States)

    Barik, M. G.; Al-Hamdan, M. Z.; Crosson, W. L.; Yang, C. A.; Coffield, S. R.

    2017-12-01

    Satellite-derived environmental data, available in a range of spatio-temporal scales, are contributing to the growing use of health impact assessments of air pollution in the public health sector. Models developed using correlation of Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth (AOD) with ground measurements of fine particulate matter less than 2.5 microns (PM2.5) are widely applied to measure PM2.5 spatial and temporal variability. In the public health sector, associations of PM2.5 with respiratory and cardiovascular diseases are often investigated to quantify air quality impacts on these health concerns. In order to improve the predictability of PM2.5 estimation using correlation models, we have incorporated meteorological variables, higher-resolution AOD products and instantaneous PM2.5 observations into statistical estimation models. Our results showed that incorporation of high-resolution (1-km) Multi-Angle Implementation of Atmospheric Correction (MAIAC)-generated MODIS AOD, meteorological variables and instantaneous PM2.5 observations improved model performance in various parts of California (CA), USA, where single-variable AOD-based models showed relatively weak performance. In this study, we further asked whether these improved models would actually be more successful for exploring associations of public health outcomes with estimated PM2.5. To answer this question, we geospatially investigated model-estimated PM2.5's relationship with respiratory and cardiovascular diseases such as asthma, high blood pressure, coronary heart disease, heart attack and stroke in CA using health data from the Centers for Disease Control and Prevention (CDC)'s Wide-ranging Online Data for Epidemiologic Research (WONDER) and the Behavioral Risk Factor Surveillance System (BRFSS). PM2.5 estimates from these improved models have the potential to improve our understanding of associations between public health concerns and air quality.

  20. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  1. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species
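    To make the idea of temporally stratified estimation concrete, the sketch below applies the standard stratified expansion estimator of a seasonal total and its variance to hypothetical impingement counts. The strata, sample sizes and counts are invented for illustration and are not taken from the study.

      import numpy as np

      def stratified_total(counts_by_stratum, days_in_stratum):
          """Stratified estimate of total annual impingement and its standard error.
          counts_by_stratum : per stratum, an array of fish counts on sampled days
          days_in_stratum   : total number of days in each stratum (N_h)"""
          total, var = 0.0, 0.0
          for counts, N_h in zip(counts_by_stratum, days_in_stratum):
              counts = np.asarray(counts, dtype=float)
              n_h = counts.size
              total += N_h * counts.mean()
              # finite-population-corrected variance of the stratum total
              var += N_h**2 * (1 - n_h / N_h) * counts.var(ddof=1) / n_h
          return total, np.sqrt(var)

      # Heavier sampling in the cold-season stratum, when threadfin shad impingement peaks
      est, se = stratified_total(
          counts_by_stratum=[[350, 420, 280, 510, 390, 460], [12, 8, 15]],
          days_in_stratum=[120, 245])
      print(est, se)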

  2. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    Science.gov (United States)

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  3. SU-E-T-544: A Radiation Oncology-Specific Multi-Institutional Federated Database: Initial Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, K; Phillips, M; Fishburn, M; Evans, K; Banerian, S; Mayr, N [University of Washington, Seattle, WA (United States); Wong, J; McNutt, T; Moore, J; Robertson, S [Johns Hopkins University, Baltimore, MD (United States)

    2014-06-01

    Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored for the workflow of individual institutions. Federation of database queries--the ultimate goal of the project--was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole--from data collection and entry to providing responses to research queries of the federated database--was viable. The resolution of inter-institutional use of patient data for research is still not completed. Conclusions: The migration from unstructured data mainly in the form of notes and documents to searchable, structured data is difficult. Making the transition requires cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices

  4. SU-E-T-544: A Radiation Oncology-Specific Multi-Institutional Federated Database: Initial Implementation

    International Nuclear Information System (INIS)

    Hendrickson, K; Phillips, M; Fishburn, M; Evans, K; Banerian, S; Mayr, N; Wong, J; McNutt, T; Moore, J; Robertson, S

    2014-01-01

    Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored for the workflow of individual institutions. Federation of database queries--the ultimate goal of the project--was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole--from data collection and entry to providing responses to research queries of the federated database--was viable. The resolution of inter-institutional use of patient data for research is still not completed. Conclusions: The migration from unstructured data mainly in the form of notes and documents to searchable, structured data is difficult. Making the transition requires cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices

  5. Development of a virtual private database for a multi-institutional internet-based radiation oncology database overcoming differences in protocols

    International Nuclear Information System (INIS)

    Harauchi, Hajime; Kondo, Takashi; Kumasaki, Yu

    2002-01-01

    A multi-institutional Radiation Oncology Greater Area Database (ROGAD) was started in 1991 under the direction of the Japanese Society for Therapeutic Radiology and Oncology (JASTRO). Use of ROGAD was intended to allow the results of data analysis to be reflected in treatment strategy and treatment planning for individual cases, to provide quality assurance, to maximize the efficacy of radiotherapy, to allow assessment of new technologies or new modalities, and to optimize medical decision making. ROGAD collected 13,448 radiotherapy treatment cases from 325 facilities during the period from 1992 to 2001. In 2000, questionnaires were sent to 725 radiotherapy facilities throughout Japan to further assess the status of radiation oncology databases. Workers at 179 facilities replied that "the protocol of my facility is different from the ROGAD protocol and I must send data according to the ROGAD protocol". We therefore developed the Virtual Private Database System (VPDS), which operates as if each oncologist had a database owned solely by his or her own facility, while in fact running on ROGAD. VPDS realizes integration of different plural databases, regardless of differences in entry methods, protocols, definitions and interpretations of the contents of clinical data elements between facilities. (author)

  6. Thermodynamic database of multi-component Mg alloys and its application to solidification and heat treatment

    Directory of Open Access Journals (Sweden)

    Guanglong Xu

    2016-12-01

    An overview of a thermodynamic database of multi-component Mg alloys is given in this work. This thermodynamic database includes thermodynamic descriptions for 145 binary systems and 48 ternary systems in the 23-component (Mg–Ag–Al–Ca–Ce–Cu–Fe–Gd–K–La–Li–Mn–Na–Nd–Ni–Pr–Si–Sn–Sr–Th–Y–Zn–Zr) system. First, the major computational and experimental tools used to establish the thermodynamic database of Mg alloys are briefly described. Subsequently, among the investigated binary and ternary systems, representative systems are shown to demonstrate the major features of the database. Finally, application of the thermodynamic database to solidification simulation and the selection of heat treatment schedules is described.

  7. Geomorphological change detection using object-based feature extraction from multi-temporal LIDAR data

    NARCIS (Netherlands)

    Seijmonsbergen, A.C.; Anders, N.S.; Bouten, W.; Feitosa, R.Q.; da Costa, G.A.O.P.; de Almeida, C.M.; Fonseca, L.M.G.; Kux, H.J.H.

    2012-01-01

    Multi-temporal LiDAR DTMs are used for the development and testing of a method for geomorphological change analysis in western Austria. Our test area is located on a mountain slope in the Gargellen Valley in western Austria. Six geomorphological features were mapped by using stratified Object-Based

  8. Embedded Vehicle Speed Estimation System Using an Asynchronous Temporal Contrast Vision Sensor

    Directory of Open Access Journals (Sweden)

    D. Bauer

    2007-01-01

    This article presents an embedded multilane traffic data acquisition system based on an asynchronous temporal contrast vision sensor, and algorithms for vehicle speed estimation developed to make efficient use of the asynchronous high-precision timing information delivered by this sensor. The vision sensor features high temporal resolution with a latency of less than 100 μs, a wide dynamic range of 120 dB in illumination, and zero-redundancy, asynchronous data output. For data collection, processing and interfacing, a low-cost digital signal processor is used. The speed of the detected vehicles is calculated from the vision sensor's asynchronous temporal contrast event data. We present three different algorithms for velocity estimation and evaluate their accuracy by means of calibrated reference measurements. The speed estimation error of all algorithms has near-zero mean and a standard deviation better than 3% for both traffic flow directions. The results and the accuracy limitations as well as the combined use of the algorithms in the system are discussed.
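    A common way to turn asynchronous event timing into speed, and the general spirit of the algorithms compared above, is to time-stamp a vehicle's crossing of two virtual detection lines a known distance apart. The sketch below illustrates only that idea with invented numbers; it is not one of the article's three algorithms.

      def speed_from_crossings(t_line1_s, t_line2_s, line_spacing_m):
          """Estimate vehicle speed from the times (in seconds) at which the event
          stream crosses two virtual detection lines spaced line_spacing_m apart."""
          dt = t_line2_s - t_line1_s
          if dt <= 0:
              raise ValueError("second crossing must occur after the first")
          speed_mps = line_spacing_m / dt
          return speed_mps * 3.6  # convert m/s to km/h

      # Illustrative example: 6 m between lines, crossed 0.27 s apart -> about 80 km/h
      print(speed_from_crossings(t_line1_s=12.431, t_line2_s=12.701, line_spacing_m=6.0))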

  9. Segmentation of myocardial perfusion MR sequences with multi-band Active Appearance Models driven by spatial and temporal features

    NARCIS (Netherlands)

    Baka, N.; Milles, J.; Hendriks, E.A.; Suinesiaputra, A.; Jerosh Herold, M.; Reiber, J.H.C.; Lelieveldt, B.P.F.

    2008-01-01

    This work investigates knowledge driven segmentation of cardiac MR perfusion sequences. We build upon previous work on multi-band AAMs to integrate into the segmentation both spatial priors about myocardial shape as well as temporal priors about characteristic perfusion patterns. Different temporal

  10. Multi-annual forward estimate - gas 2017

    International Nuclear Information System (INIS)

    2017-01-01

    GRDF, GRTgaz, SPEGNN and TIGF gas transport and distribution operators have the responsibility to publish a reference document about the multi-annual forward estimate of gas consumption evolution and renewable gas production. This document is the second joint forecast report published by the 4 French gas operators. It presents, first, the situation, hypotheses, analysis and perspectives of the 4 main sectoral gas markets (residential, tertiary, industrial, mobility), then, the centralized power generation and cogeneration, next, the production of renewable gas (different sectors, hypotheses, analysis and perspectives), and finally, a multi-sectorial vision according to 3 different scenarios

  11. Multi-annual forward estimate - gas 2016

    International Nuclear Information System (INIS)

    2016-01-01

    GRDF, GRTgaz, SPEGNN and TIGF gas transport and distribution operators have the responsibility to publish a reference document about the multi-annual forward estimate of gas consumption evolution and renewable gas production. This document is the first joint forecast report published by the 4 French gas operators. It presents, first, the situation, hypotheses, analysis and perspectives of the 4 main sectoral gas markets (residential, tertiary, industrial, mobility), then, the centralized power generation and cogeneration, next, the production of renewable gas (different sectors, hypotheses, analysis and perspectives), and finally, a multi-sectorial vision according to 3 different scenarios

  12. Shoreline change after 12 years of tsunami in Banda Aceh, Indonesia: a multi-resolution, multi-temporal satellite data and GIS approach

    Science.gov (United States)

    Sugianto, S.; Heriansyah; Darusman; Rusdi, M.; Karim, A.

    2018-04-01

    The Indian Ocean tsunami of 26 December 2004 caused severe damage to some shorelines in Banda Aceh City, Indonesia. The impact can be traced back using remote sensing data combined with GIS. The approach incorporates image processing to analyze the extent of shoreline changes with multi-temporal data 12 years after the tsunami. This study demonstrates the use of multi-resolution and multi-temporal satellite images from QuickBird and IKONOS to demarcate the Banda Aceh shoreline before and after the tsunami. The research demonstrates a significant change to the shoreline in the form of abrasion between 2004 and 2005, ranging from a few meters to hundreds of meters. Between 2004 and 2011 the shoreline did not return to its pre-tsunami position, which is considered a post-tsunami impact; the abrasion ranges from 18.3 to 194.93 meters. Further, the 2009-2011 period shows slow shoreline change in Banda Aceh that is considered unrelated to the tsunami, e.g. abrasion caused by ocean waves eroding the coast and, in specific areas, accretion caused by sediment carried by river flow into the sea near the shoreline of the study area.

  13. Data Rods: High Speed, Time-Series Analysis of Massive Cryospheric Data Sets Using Object-Oriented Database Methods

    Science.gov (United States)

    Liang, Y.; Gallaher, D. W.; Grant, G.; Lv, Q.

    2011-12-01

    Change over time is the central driver of climate change detection. The goal is to diagnose the underlying causes, and make projections into the future. In an effort to optimize this process we have developed the Data Rod model, an object-oriented approach that provides the ability to query grid cell changes and their relationships to neighboring grid cells through time. The time series data is organized in time-centric structures called "data rods." A single data rod can be pictured as the multi-spectral data history at one grid cell: a vertical column of data through time. This resolves the long-standing problem of managing time-series data and opens new possibilities for temporal data analysis. This structure enables rapid time-centric analysis at any grid cell across multiple sensors and satellite platforms. Collections of data rods can be spatially and temporally filtered, statistically analyzed, and aggregated for use with pattern matching algorithms. Likewise, individual image pixels can be extracted to generate multi-spectral imagery at any spatial and temporal location. The Data Rods project has created a series of prototype databases to store and analyze massive datasets containing multi-modality remote sensing data. Using object-oriented technology, this method overcomes the operational limitations of traditional relational databases. To demonstrate the speed and efficiency of time-centric analysis using the Data Rods model, we have developed a sea ice detection algorithm. This application determines the concentration of sea ice in a small spatial region across a long temporal window. If performed using traditional analytical techniques, this task would typically require extensive data downloads and spatial filtering. Using Data Rods databases, the exact spatio-temporal data set is immediately available. No extraneous data is downloaded, and all selected data querying occurs transparently on the server side. Moreover, fundamental statistical

  14. On the expected value and variance for an estimator of the spatio-temporal product density function

    DEFF Research Database (Denmark)

    Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge

    Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function......, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed form expressions derived under the Poisson case. A detailed simulation study is presented...... to compare our close expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...

  15. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Manoela Ojeda

    2014-01-01

    Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentage of error (MAPE) were calculated. Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with a MAPE for the stroke number of 10.8% and 13.4% and ICC of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
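    The record does not include the detection algorithm itself. As a hedged illustration of how temporal push parameters can be pulled from an upper-arm accelerometer signal, the sketch below counts strokes as peaks in the low-pass-filtered acceleration magnitude and derives push frequency; the sampling rate, filter cutoff and minimum peak spacing are assumptions, not the study's settings.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def push_parameters(acc_xyz, fs_hz=100.0):
          """Estimate stroke count and push frequency from a tri-axial accelerometer
          worn on the upper arm. acc_xyz: (N, 3) array of accelerations in g."""
          mag = np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
          # low-pass filter to keep the slow propulsion cycle (5 Hz cutoff assumed)
          b, a = butter(4, 5.0 / (fs_hz / 2.0), btype="low")
          smooth = filtfilt(b, a, mag)
          # one peak per push; minimum spacing of 0.5 s between strokes assumed
          peaks, _ = find_peaks(smooth, distance=int(0.5 * fs_hz), prominence=0.05)
          duration_s = len(mag) / fs_hz
          stroke_count = len(peaks)
          push_freq_hz = stroke_count / duration_s
          return stroke_count, push_freq_hz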

  16. Estimating Traveler Populations at Airport and Cruise Terminals for Population Distribution and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Jochem, Warren C [ORNL; Sims, Kelly M [ORNL; Bright, Eddie A [ORNL; Urban, Marie L [ORNL; Rose, Amy N [ORNL; Coleman, Phil R [ORNL; Bhaduri, Budhendra L [ORNL

    2013-01-01

    In recent years, uses of high-resolution population distribution databases are increasing steadily for environmental, socioeconomic, public health, and disaster-related research and operations. With the development of daytime population distribution, temporal resolution of such databases has been improved. However, the lack of incorporation of transitional population, namely business and leisure travelers, leaves a significant population unaccounted for within the critical infrastructure networks, such as at transportation hubs. This paper presents two general methodologies for estimating passenger populations in airport and cruise port terminals at a high temporal resolution which can be incorporated into existing population distribution models. The methodologies are geographically scalable and are based on, and demonstrate how, two different transportation hubs with disparate temporal population dynamics can be modeled utilizing publicly available databases including novel data sources of flight activity from the Internet which are updated in near-real time. The airport population estimation model shows great potential for rapid implementation for a large collection of airports on a national scale, and the results suggest reasonable accuracy in the estimated passenger traffic. By incorporating population dynamics at high temporal resolutions into population distribution models, we hope to improve the estimates of populations exposed to or at risk to disasters, thereby improving emergency planning and response, and leading to more informed policy decisions.

  17. Depth-area-duration characteristics of storm rainfall in Texas using Multi-Sensor Precipitation Estimates

    Science.gov (United States)

    McEnery, J. A.; Jitkajornwanich, K.

    2012-12-01

    This presentation will describe the methodology and overall system development by which a benchmark dataset of precipitation information has been used to characterize the depth-area-duration relations in heavy rain storms occurring over regions of Texas. Over the past two years project investigators along with the National Weather Service (NWS) West Gulf River Forecast Center (WGRFC) have developed and operated a gateway data system to ingest, store, and disseminate NWS multi-sensor precipitation estimates (MPE). As a pilot project of the Integrated Water Resources Science and Services (IWRSS) initiative, this testbed uses a Structured Query Language (SQL) server to maintain a full archive of current and historic MPE values within the WGRFC service area. These time series values are made available for public access as web services in the standard WaterML format. Having this volume of information maintained in a comprehensive database now allows the use of relational analysis capabilities within SQL to leverage these multi-sensor precipitation values and produce a valuable derivative product. The area of focus for this study is North Texas and will utilize values that originated from the West Gulf River Forecast Center (WGRFC); one of three River Forecast Centers currently represented in the holdings of this data system. Over the past two decades, NEXRAD radar has dramatically improved the ability to record rainfall. The resulting hourly MPE values, distributed over an approximate 4 km by 4 km grid, are considered by the NWS to be the "best estimate" of rainfall. The data server provides an accepted standard interface for internet access to the largest time-series dataset of NEXRAD based MPE values ever assembled. An automated script has been written to search and extract storms over the 18 year period of record from the contents of this massive historical precipitation database. Not only can it extract site-specific storms, but also duration-specific storms and

  18. Digital baseline estimation method for multi-channel pulse height analyzing

    International Nuclear Information System (INIS)

    Xiao Wuyun; Wei Yixiang; Ai Xianyun

    2005-01-01

    The basic features of digital baseline estimation for multi-channel pulse height analysis are introduced. The weight-function of minimum-noise baseline filter is deduced with functional variational calculus. The frequency response of this filter is also deduced with Fourier transformation, and the influence of parameters on amplitude frequency response characteristics is discussed. With MATLAB software, the noise voltage signal from the charge sensitive preamplifier is simulated, and the processing effect of minimum-noise digital baseline estimation is verified. According to the results of this research, digital baseline estimation method can estimate baseline optimally, and it is very suitable to be used in digital multi-channel pulse height analysis. (authors)

  19. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
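    As a rough, simplified sketch of this kind of estimator, and not the paper's exact derivation, the code below grid-searches a fundamental frequency and, for each candidate, fits a harmonic model per channel by least squares, combining channels through their log residual energies so that each channel's own noise level is effectively profiled out. The grid range, harmonic count and test signal are assumptions for illustration.

      import numpy as np

      def multichannel_pitch(channels, fs, f0_grid, n_harmonics=5):
          """channels: list of equal-length 1-D arrays, fs: sample rate in Hz,
          f0_grid: candidate fundamentals in Hz. Returns the candidate minimizing a
          concentrated negative log-likelihood under per-channel white noise."""
          N = len(channels[0])
          n = np.arange(N)
          best_f0, best_cost = None, np.inf
          for f0 in f0_grid:
              # harmonic (cos/sin) design matrix shared by all channels
              Z = np.column_stack(
                  [f(2 * np.pi * f0 * l * n / fs)
                   for l in range(1, n_harmonics + 1) for f in (np.cos, np.sin)])
              cost = 0.0
              for x in channels:
                  amp, *_ = np.linalg.lstsq(Z, x, rcond=None)
                  rss = np.sum((x - Z @ amp) ** 2)
                  cost += N * np.log(rss)  # channel-specific noise variance profiled out
              if cost < best_cost:
                  best_f0, best_cost = f0, cost
          return best_f0

      # Two channels sharing a 220 Hz fundamental with different amplitudes and noise
      fs, N = 8000, 2048
      t = np.arange(N) / fs
      ch1 = np.sin(2 * np.pi * 220 * t) + 0.05 * np.random.randn(N)
      ch2 = 0.3 * np.sin(2 * np.pi * 220 * t + 0.7) + 0.5 * np.sin(2 * np.pi * 440 * t) \
            + 0.2 * np.random.randn(N)
      print(multichannel_pitch([ch1, ch2], fs, np.arange(150.0, 400.0, 1.0)))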

  20. Estimating uncertainty and its temporal variation related to global climate models in quantifying climate change impacts on hydrology

    Science.gov (United States)

    Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua

    2018-01-01

    Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered as the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology by using multi-GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under RCP4.5 and RCP8.5 emission scenarios were adapted to adequately represent this uncertainty envelope, fifty-one 30-year future periods moving from 2021 to 2100 with 1-year interval were produced to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including mean and extremes. The periodicity of climatic and hydrological changes and their uncertainty were analyzed using wavelet analysis, while the trend was analyzed using Mann-Kendall trend test and regression analysis. The results showed that both future climate change (precipitation and temperature) and hydrological response predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change of mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the median value of multi-models, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100 for RCP4.5. The uncertainty under a high emission scenario (RCP8.5) was much larger than that under a relatively low emission scenario (RCP4.5). Almost all climatic and hydrological regimes and their uncertainty did not show significant periodicity at the P = .05 significance

  1. Multi-objective optimization in quantum parameter estimation

    Science.gov (United States)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  2. Applying Stochastic Metaheuristics to the Problem of Data Management in a Multi-Tenant Database Cluster

    Directory of Open Access Journals (Sweden)

    E. A. Boytsov

    2014-01-01

    A multi-tenant database cluster is a concept of a data-storage subsystem for cloud applications with a multi-tenant architecture. The cluster is a set of relational database servers with a single entry point, combined into one unit by a cluster controller. The system is intended for use by applications developed according to the Software as a Service (SaaS) paradigm and allows tenants to be placed on database servers so as to provide their isolation, data backup and the most effective usage of available computational power. One of the most important problems with such a system is an effective distribution of data across servers, which affects the degree of load on individual cluster nodes and fault tolerance. This paper considers a data-management approach based on the usage of a load-balancing quality measure function. This function is used during initial placement of new tenants and also during placement optimization steps. Standard metaheuristic optimization schemes such as simulated annealing and tabu search are used to find a better tenant placement.
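    As a generic illustration of the metaheuristic placement idea, and not the paper's actual quality measure or cluster controller, the sketch below applies simulated annealing to assign tenants with assumed load estimates to servers while minimizing load imbalance (imbalance is measured here simply as the spread between the busiest and idlest server).

      import math
      import random

      def server_loads(assignment, tenant_load, n_servers):
          loads = [0.0] * n_servers
          for tenant, server in enumerate(assignment):
              loads[server] += tenant_load[tenant]
          return loads

      def imbalance(assignment, tenant_load, n_servers):
          # placeholder quality measure: busiest minus idlest server load
          loads = server_loads(assignment, tenant_load, n_servers)
          return max(loads) - min(loads)

      def anneal_placement(tenant_load, n_servers, steps=20000, t0=1.0, cooling=0.9995):
          assignment = [random.randrange(n_servers) for _ in tenant_load]
          cost, temp = imbalance(assignment, tenant_load, n_servers), t0
          for _ in range(steps):
              tenant = random.randrange(len(tenant_load))
              old = assignment[tenant]
              assignment[tenant] = random.randrange(n_servers)   # propose moving one tenant
              new_cost = imbalance(assignment, tenant_load, n_servers)
              # accept improvements always, worsenings with a temperature-dependent probability
              if new_cost > cost and random.random() >= math.exp((cost - new_cost) / temp):
                  assignment[tenant] = old                       # reject the move
              else:
                  cost = new_cost
              temp *= cooling
          return assignment, cost

      loads = [random.uniform(1, 10) for _ in range(50)]         # invented tenant loads
      placement, spread = anneal_placement(loads, n_servers=4)
      print(spread)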

  3. Development of radiation oncology learning system combined with multi-institutional radiotherapy database (ROGAD)

    International Nuclear Information System (INIS)

    Takemura, Akihiro; Iinuma, Masahiro; Kou, Hiroko; Harauchi, Hajime; Inamura, Kiyonari

    1999-01-01

    We have been constructing and operating the multi-institutional radiotherapy database ROGAD (Radiation Oncology Greater Area Database) since 1992. One of its purposes is 'to optimize individual radiotherapy plans'. We developed the 'Radiation oncology learning system combined with ROGAD', which conforms to that purpose. Several medical doctors evaluated our system. According to those evaluations, we are now confident that our system is able to contribute to the improvement of radiotherapy results. Our final target is to generate a good cyclic relationship among three components: radiotherapy results according to the 'Radiation oncology learning system combined with ROGAD'; the growth of ROGAD; and the radiation oncology learning system. (author)

  4. Food composition database development for between country comparisons

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2006-01-01

    Nutritional assessment by diet analysis is a two-stepped process consisting of evaluation of food consumption, and conversion of food into nutrient intake by using a food composition database, which lists the mean nutritional values for a given food portion. Most reports in the literature focus on minimizing errors in estimation of food consumption but the selection of a specific food composition table used in nutrient estimation is also a source of errors. We are conducting a large prospective study internationally and need to compare diet, assessed by food frequency questionnaires, in a comparable manner between different countries. We have prepared a multi-country food composition database for nutrient estimation in all the countries participating in our study. The nutrient database is primarily based on the USDA food composition database, modified appropriately with reference to local food composition tables, and supplemented with recipes of locally eaten mixed dishes. By doing so we have ensured that the units of measurement, method of selection of foods for testing, and assays used for nutrient estimation are consistent and as current as possible, and yet have taken into account some local variations. Using this common metric for nutrient assessment will reduce differential errors in nutrient estimation and improve the validity of between-country comparisons.

  5. Food composition database development for between country comparisons.

    Science.gov (United States)

    Merchant, Anwar T; Dehghan, Mahshid

    2006-01-19

    Nutritional assessment by diet analysis is a two-stepped process consisting of evaluation of food consumption, and conversion of food into nutrient intake by using a food composition database, which lists the mean nutritional values for a given food portion. Most reports in the literature focus on minimizing errors in estimation of food consumption but the selection of a specific food composition table used in nutrient estimation is also a source of errors. We are conducting a large prospective study internationally and need to compare diet, assessed by food frequency questionnaires, in a comparable manner between different countries. We have prepared a multi-country food composition database for nutrient estimation in all the countries participating in our study. The nutrient database is primarily based on the USDA food composition database, modified appropriately with reference to local food composition tables, and supplemented with recipes of locally eaten mixed dishes. By doing so we have ensured that the units of measurement, method of selection of foods for testing, and assays used for nutrient estimation are consistent and as current as possible, and yet have taken into account some local variations. Using this common metric for nutrient assessment will reduce differential errors in nutrient estimation and improve the validity of between-country comparisons.

  6. Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System

    Directory of Open Access Journals (Sweden)

    Cheng Wang

    2016-12-01

    Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To the best of our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm makes full use of the fusion of the footstep sound signals from both feet and includes three stages: footstep detection, heel-strike event and toe-on event detection, and calculation of gait temporal parameters. Experimental results show that with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average 0.955 F1-measure for footstep detection, an average 94.52% accuracy rate for heel-strike detection and a 94.25% accuracy rate for toe-on detection. Using these detection results, nine temporal-related gait parameters are calculated, and these parameters are consistent with their corresponding normal gait temporal parameters and labeled data calculation results. The results verify the effectiveness of our proposed system and algorithm for temporal gait parameter estimation.

  7. Estimation of Paddy Rice Variables with a Modified Water Cloud Model and Improved Polarimetric Decomposition Using Multi-Temporal RADARSAT-2 Images

    Directory of Open Access Journals (Sweden)

    Zhi Yang

    2016-10-01

    Rice growth monitoring is very important as rice is one of the staple crops of the world. Rice variables as quantitative indicators of rice growth are critical for farming management and yield estimation, and synthetic aperture radar (SAR) has great advantages for monitoring rice variables due to its all-weather observation capability. In this study, eight temporal RADARSAT-2 full-polarimetric SAR images were acquired during the rice growth cycle and a modified water cloud model (MWCM) was proposed, in which the heterogeneity of the rice canopy in the horizontal direction and its phenological changes were considered, and the double-bounce scattering between the rice canopy and the underlying surface was also considered for the first time. Then, three scattering components from an improved polarimetric decomposition were coupled with the MWCM, instead of the backscattering coefficients. Using a genetic algorithm, eight rice variables were estimated, such as the leaf area index (LAI), rice height (h), and the fresh and dry biomass of ears (Fe and De). The accuracy validation showed that the MWCM was suitable for the estimation of rice variables during the whole growth season. The validation results showed that the MWCM could predict the temporal behaviors of the rice variables well during the growth cycle (R2 > 0.8). Compared with the original water cloud model (WCM), the relative errors of the rice variables with the MWCM were much smaller, especially in the vegetation phase (approximately 15% smaller). Finally, it was discussed that the MWCM could, theoretically, be used for extensive applications since the empirical coefficients in the MWCM were determined in general cases, but more applications of the MWCM are necessary in future work.
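    For reference, the classical water cloud model that the MWCM extends expresses canopy backscatter as a vegetation volume term plus an attenuated soil term. The sketch below implements only that baseline WCM form; the coefficients A and B are placeholders to be fitted, the canopy descriptor and example values are invented, and the double-bounce and horizontal-heterogeneity terms of the MWCM are not shown.

      import numpy as np

      def water_cloud_sigma0(V, sigma0_soil, theta_deg, A, B):
          """Classical water cloud model (linear power units, not dB).
          V           : canopy descriptor (e.g. LAI or vegetation water content)
          sigma0_soil : bare-soil backscatter coefficient (linear)
          theta_deg   : incidence angle in degrees
          A, B        : empirical vegetation coefficients (to be fitted)"""
          theta = np.radians(theta_deg)
          tau2 = np.exp(-2.0 * B * V / np.cos(theta))        # two-way canopy attenuation
          sigma_veg = A * V * np.cos(theta) * (1.0 - tau2)   # canopy volume scattering term
          return sigma_veg + tau2 * sigma0_soil

      # Illustrative values only: LAI of 3, soil backscatter of -12 dB, 30 deg incidence
      sigma0 = water_cloud_sigma0(3.0, 10 ** (-12 / 10), 30.0, A=0.02, B=0.1)
      print(10 * np.log10(sigma0))  # total backscatter in dB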

  8. A dedicated database system for handling multi-level data in systems biology.

    Science.gov (United States)

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and hereby facilitate data integration, modeling and analysis in systems biology within a sole database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) Detecting the pheromone pathway in protein interaction networks; and 2) Finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a sole database environment for systems biology research.

  9. Geospatial database of estimates of groundwater discharge to streams in the Upper Colorado River Basin

    Science.gov (United States)

    Garcia, Adriana; Masbruch, Melissa D.; Susong, David D.

    2014-01-01

    The U.S. Geological Survey, as part of the Department of the Interior’s WaterSMART (Sustain and Manage America’s Resources for Tomorrow) initiative, compiled published estimates of groundwater discharge to streams in the Upper Colorado River Basin as a geospatial database. For the purpose of this report, groundwater discharge to streams is the baseflow portion of streamflow that includes contributions of groundwater from various flow paths. Reported estimates of groundwater discharge were assigned as attributes to stream reaches derived from the high-resolution National Hydrography Dataset. A total of 235 estimates of groundwater discharge to streams were compiled and included in the dataset. Feature class attributes of the geospatial database include groundwater discharge (acre-feet per year), method of estimation, citation abbreviation, defined reach, and 8-digit hydrologic unit code(s). Baseflow index (BFI) estimates of groundwater discharge were calculated using an existing streamflow characteristics dataset and were included as an attribute in the geospatial database. A comparison of the BFI estimates to the compiled estimates of groundwater discharge found that the BFI estimates were greater than the reported groundwater discharge estimates.
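    As a small, hedged illustration of the baseflow-index comparison mentioned above (the unit conversion and averaging period are assumptions, and this is not the USGS workflow), a BFI-based estimate of groundwater discharge for a reach can be formed by scaling mean annual streamflow by the baseflow index:

      def bfi_groundwater_discharge(mean_flow_cfs, bfi):
          """Approximate groundwater discharge (acre-feet per year) for a reach as
          BFI x mean annual streamflow; 1 cfs sustained for a year is roughly 724 acre-feet."""
          return bfi * mean_flow_cfs * 724.0

      # Illustrative reach: mean flow of 150 cfs and a baseflow index of 0.6
      print(bfi_groundwater_discharge(150.0, 0.6))  # about 65,000 acre-feet per year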

  10. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard , Olivier; Delannay , Fabrice; Ricordel , Vincent; Barba , Dominique

    2007-01-01

    Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate for it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  11. Application of Deep Learning of Multi-Temporal SENTINEL-1 Images for the Classification of Coastal Vegetation Zone of the Danube Delta

    Science.gov (United States)

    Niculescu, S.; Ienco, D.; Hanganu, J.

    2018-04-01

    Land cover is a fundamental variable for regional planning, as well as for the study and understanding of the environment. This work proposes a multi-temporal approach relying on a fusion of multi-sensor radar data and information collected by the latest sensor (Sentinel-1), with a view to obtaining better results than traditional image processing techniques. The Danube Delta is the site for this work. The spatial approach relies on new spatial analysis technologies and methodologies: deep learning of multi-temporal Sentinel-1 data. We propose a deep learning network for image classification which exploits the multi-temporal characteristic of Sentinel-1 data. The model we employ is a Gated Recurrent Unit (GRU) network, a recurrent neural network that explicitly takes the time dimension into account via a gated mechanism to perform the final prediction. The main quality of the GRU network is its ability to consider only the important part of the information coming from the temporal data, discarding the irrelevant information via a forgetting mechanism. We propose to use such a network structure to classify a series of Sentinel-1 images (20 Sentinel-1 images acquired between 9.10.2014 and 01.04.2016). The results are compared with the results of a Random Forest classification.
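    To make the gated recurrent approach concrete, here is a minimal PyTorch sketch of a GRU classifier over a per-pixel Sentinel-1 time series. The sequence length of 20 matches the number of images mentioned above, but the number of input channels, the number of classes and all hyper-parameters are assumptions; this is not the authors' network.

      import torch
      import torch.nn as nn

      class GRUClassifier(nn.Module):
          def __init__(self, n_features=2, hidden=64, n_classes=8):
              # n_features: bands per date (e.g. VV and VH backscatter) - assumed
              super().__init__()
              self.gru = nn.GRU(input_size=n_features, hidden_size=hidden,
                                batch_first=True)
              self.head = nn.Linear(hidden, n_classes)

          def forward(self, x):
              # x: (batch, time_steps, n_features); the gates decide what to keep over time
              _, h_n = self.gru(x)            # h_n: (1, batch, hidden), last hidden state
              return self.head(h_n.squeeze(0))

      model = GRUClassifier()
      batch = torch.randn(32, 20, 2)          # 32 pixels x 20 Sentinel-1 dates x 2 bands
      logits = model(batch)                   # (32, n_classes)
      loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 8, (32,)))
      loss.backward()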

  12. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.

  13. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the wide development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in the computer vision field. Most of the existing methods work under a registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as those containing viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive the complex scene accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge consistent models. Finally, for all small-area blocks, we calculate pixel by pixel the degree of membership to the multiple background models. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. Extensive experimental results on public aerial videos show that, owing to the multi-background model estimation and the per-pixel analysis of membership to the multiple models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.
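
    A minimal sketch of the per-segment background-model step (fitting an affine motion model to the optical flow inside a segment and checking whether two models are consistent enough to merge) is given below; the array shapes, frame size and merge tolerance are assumptions, and the colour segmentation and Graph Cuts labelling of the paper are not reproduced.

```python
# Sketch: fit a per-segment affine motion model from dense optical flow, then check
# whether two segment models are consistent enough to be merged.
import numpy as np

def fit_affine(points, flow_vectors):
    """Least-squares affine model A (2x3) with [u, v] ~= A @ [x, y, 1]."""
    X = np.hstack([points, np.ones((len(points), 1))])    # (N, 3)
    A, *_ = np.linalg.lstsq(X, flow_vectors, rcond=None)  # (3, 2)
    return A.T                                             # (2, 3)

def models_consistent(A1, A2, tol=0.5):
    """Merge criterion: the two models predict similar flow over the frame corners."""
    probe = np.array([[0, 0, 1], [640, 0, 1], [0, 480, 1], [640, 480, 1]], float)
    return np.max(np.abs(probe @ A1.T - probe @ A2.T)) < tol

# Example with synthetic data: one segment with its own dominant motion.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 640, size=(500, 2))
true_A = np.array([[1e-3, 0.0, 2.0], [0.0, 1e-3, -1.0]])  # slow zoom plus translation
flow = pts @ true_A[:, :2].T + true_A[:, 2] + rng.normal(0, 0.05, (500, 2))
A_hat = fit_affine(pts, flow)
print(np.round(A_hat, 3))     # close to true_A; residual flow would flag moving objects
```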

  14. A dedicated database system for handling multi-level data in systems biology

    OpenAIRE

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Background Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging...

  15. Multi-Temporal Satellite Imagery for Urban Expansion Assessment at Sharjah City /UAE

    International Nuclear Information System (INIS)

    Al-Ruzouq, R; Shanableh, A

    2014-01-01

    Change detection is the process of identifying differences in land cover over time. As human and natural forces continue to alter the landscape, it is important to develop monitoring methods to assess and quantify these changes. Recent advances in satellite imagery, in terms of improved spatial and temporal resolutions, allow efficient identification of change patterns and the prediction of areas of growth. Sharjah is the third largest and most populous city in the United Arab Emirates (UAE). It is located along the northern coast of the Persian Gulf on the Arabian Peninsula. After the discovery and export of oil over the last four decades, the UAE has experienced very rapid growth in industry, economy and population. The main purpose of this study is to detect urban development in Sharjah city by detecting and registering linear features in multi-temporal Landsat images. Linear features were chosen for image registration because they can be reliably extracted from imagery with significantly different geometric and radiometric properties. Edges derived from the registered images are used as the basis for change detection. Image registration and pixel-by-pixel subtraction were implemented using multi-temporal Landsat images of Sharjah city, with straight-line segments used both for accurate co-registration and as the main element of a reliable change detection procedure. Results illustrate that the highest rate of growth represented by linear features (buildings and roads) occurred during 1976–1987 and accounts for 36.24% of the total urban features inside Sharjah city. Moreover, the results show that from 1976 to 2010 the cumulative urban expansion inside Sharjah city reached 71.9%
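
    A rough sketch of the edge-based differencing idea on two already co-registered acquisitions follows; the file names, Canny thresholds and morphology kernel are hypothetical and only illustrate the workflow, not the paper's straight-line-segment matching.

```python
# Sketch of edge-based change detection on two co-registered acquisitions.
# File names and parameter values are illustrative assumptions.
import cv2
import numpy as np

img_1987 = cv2.imread("sharjah_1987.tif", cv2.IMREAD_GRAYSCALE)
img_2010 = cv2.imread("sharjah_2010.tif", cv2.IMREAD_GRAYSCALE)

edges_old = cv2.Canny(img_1987, 50, 150)     # linear features (roads, buildings) at t1
edges_new = cv2.Canny(img_2010, 50, 150)     # linear features at t2

# New urban features: edges present in the later image but absent in the earlier one.
new_edges = cv2.bitwise_and(edges_new, cv2.bitwise_not(edges_old))
new_edges = cv2.morphologyEx(new_edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

growth_fraction = new_edges.mean() / 255.0   # crude per-pixel measure of added edges
print(f"fraction of pixels flagged as new linear features: {growth_fraction:.4f}")
```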

  16. Development of radiation oncology learning system combined with multi-institutional radiotherapy database (ROGAD)

    Energy Technology Data Exchange (ETDEWEB)

    Takemura, Akihiro; Iinuma, Masahiro; Kou, Hiroko [Kanazawa Univ. (Japan). School of Medicine; Harauchi, Hajime; Inamura, Kiyonari

    1999-09-01

    We have constructed and have been operating a multi-institutional radiotherapy database, ROGAD (Radiation Oncology Greater Area Database), since 1992. One of its purposes is 'to optimize individual radiotherapy plans'. We developed a 'Radiation oncology learning system combined with ROGAD' which conforms to that purpose. Several medical doctors evaluated our system. According to those evaluations, we are now confident that our system is able to contribute to the improvement of radiotherapy results. Our final target is to generate a good cyclic relationship among three components: radiotherapy results obtained according to the 'Radiation oncology learning system combined with ROGAD'; the growth of ROGAD; and the radiation oncology learning system itself. (author)

  17. Comparison of Various Databases for Estimation of Dietary Polyphenol Intake in the Population of Polish Adults

    Directory of Open Access Journals (Sweden)

    Anna M. Witkowska

    2015-11-01

    Full Text Available The primary aim of the study was to estimate the consumption of polyphenols in a population of 6661 subjects aged between 20 and 74 years representing a cross-section of Polish society, and the second objective was to compare the intakes of flavonoids calculated on the basis of two commonly used databases. Daily food consumption data were collected in 2003–2005 using a single 24-hour dietary recall. Intake of total polyphenols was estimated using the online Phenol-Explorer database, and flavonoid intake was determined using the following data sources: the United States Department of Agriculture (USDA) database, combining its flavonoid and isoflavone databases, and the Phenol-Explorer database. Total polyphenol intake calculated with the Phenol-Explorer database was 989 mg/day, with major contributions from phenolic acids (556 mg/day) and flavonoids (403.5 mg/day). The flavonoid intake calculated on the basis of the USDA databases was 525 mg/day. This study found that tea is the primary source of polyphenols and flavonoids for the studied population, contributing mainly flavanols, while coffee is the most important contributor of phenolic acids, mostly hydroxycinnamic acids. Our study also demonstrated that flavonoid intakes estimated according to various databases may differ substantially. Further work should be undertaken to expand polyphenol databases to better reflect the polyphenol content of foods.

  18. Backscatter Analysis Using Multi-Temporal SENTINEL-1 SAR Data for Crop Growth of Maize in Konya Basin, Turkey

    Science.gov (United States)

    Abdikan, S.; Sekertekin, A.; Ustunern, M.; Balik Sanli, F.; Nasirzadehdizaji, R.

    2018-04-01

    Temporal monitoring of crop types is essential for the sustainable management of agricultural activities at both national and global levels. As a practical and efficient tool, remote sensing is widely used in such applications. In this study, Sentinel-1 Synthetic Aperture Radar (SAR) imagery was utilized to investigate the performance of the sensor's backscatter for crop monitoring. Multi-temporal C-band VV- and VH-polarized SAR images were acquired together with in-situ measurements conducted in the Konya basin, central Anatolia, Turkey. During the measurements, the height of maize plants was collected and the relationship between backscatter values and plant height was analysed. Maize growth development was described under the Biologische Bundesanstalt, Bundessortenamt und CHemische Industrie (BBCH) scale; under the BBCH stages, the test site was classified in general as leaf development, stem elongation, heading and flowering. The correlation coefficients indicated high correlation for both polarizations during the early stages of the plant, and lower values for both polarizations during the late stages. As a last step, multi-temporal coverage of the crop fields was analysed to map seasonal land use. To this aim, object-based image classification was applied following image segmentation. Land use maps with about 80% accuracy were produced in this experiment. As preliminary results, it is concluded that Sentinel-1 data provide beneficial information about plant growth. Dual-polarized Sentinel-1 data have high potential for multi-temporal analyses for agriculture monitoring and reliable mapping.

  19. Multi-temporal RADARSAT-1 and ERS backscattering signatures of coastal wetlands in southeastern Louisiana

    Science.gov (United States)

    Kwoun, Oh-Ig; Lu, Z.

    2009-01-01

    Using multi-temporal European Remote-sensing Satellites (ERS-1/-2) and Canadian Radar Satellite (RADARSAT-1) synthetic aperture radar (SAR) data over the Louisiana coastal zone, we characterize seasonal variations of radar backscattering according to vegetation type. Our main findings are as follows. First, ERS-1/-2 and RADARSAT-1 require careful radiometric calibration to perform multi-temporal backscattering analysis for wetland mapping. We use SAR backscattering signals from cities for the relative calibration. Second, using seasonally averaged backscattering coefficients from ERS-1/-2 and RADARSAT-1, we can differentiate most forests (bottomland and swamp forests) and marshes (freshwater, intermediate, brackish, and saline marshes) in coastal wetlands. Student's t-test results support the usefulness of season-averaged backscatter data for classification. Third, combining SAR backscattering coefficients with an optical-sensor-based normalized difference vegetation index can provide further insight into vegetation type and enhance the separation between forests and marshes. Our study demonstrates that SAR can provide the information necessary to characterize coastal wetlands and monitor their changes.

  20. Clinical application of entire gastrointestinal barium meal combined with multi-temporal abdominal films in patients with intestinal neuronal dysplasia type B.

    Science.gov (United States)

    Li, Xiao-Yun; Li, Xiao-Gang; Du, Ming-Guo; Chen, Zhi-Dan; Tao, Zheng-Gui; Liao, Xiao-Feng

    2013-01-01

    To report and evaluate the application of entire gastrointestinal barium meal combined with multi-temporal abdominal films in the diagnosis of patients with intestinal neuronal dysplasia type B (IND type B). Thirty-six patients with symptoms of long-standing constipation were enrolled in this study. The study took place at the Department of General Surgery, Xiangyang Central Hospital, Hubei Province, China from July 2007 to October 2012. All of them had already undergone barium enema and anorectal manometry tests and were suspected to have IND type B, but the diagnosis was not confirmed by mucous membrane acetylcholinesterase determination. All underwent the entire gastrointestinal barium meal combined with multi-temporal abdominal films. The data were collected and then analyzed retrospectively. After the entire gastrointestinal barium meal combined with multi-temporal abdominal films, 30 out of 36 cases in this group were diagnosed with intestinal neuronal diseases and were then treated with appropriate surgery. The postoperative pathological diagnosis was IND type B. The other 6 patients in this group still could not be diagnosed explicitly after the test; thus, they received conservative treatment. Entire gastrointestinal barium meal combined with multi-temporal abdominal films has the advantage of being able to test gastrointestinal transfer capabilities and to find physiological and pathological changes simultaneously. It could provide important evidence for the diagnosis of patients with intestinal neuronal dysplasia type B.

  1. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    Science.gov (United States)

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of the data, 2) the database model and architecture, describing how data are presented to end users and how new data are integrated, 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that form part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.

  2. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

    This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted MRI and FLAIR MRI, and ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y such as volume or average curvature. The outcome of the surgery can be any surgery assessment such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication largely fits the surgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.

  3. Evaluation of TRMM Multi-satellite Precipitation Analysis (TMPA performance in the Central Andes region and its dependency on spatial and temporal resolution

    Directory of Open Access Journals (Sweden)

    M. L. M. Scheel

    2011-08-01

    Full Text Available Climate time series are of major importance for baseline studies for climate change impact and adaptation projects. However, in mountain regions and in developing countries, for instance, there are significant gaps in ground-based climate records in space and time. Specifically, in the Peruvian Andes, spatially and temporally coherent precipitation information is a prerequisite for ongoing climate change adaptation projects in the fields of water resources, disasters and food security. The present work aims at evaluating the ability of the Tropical Rainfall Measurement Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) to estimate precipitation rates at daily 0.25° × 0.25° scale in the Central Andes, and the dependency of the estimate performance on changing spatial and temporal resolution. Comparisons of the TMPA product with gauge measurements in the regions of Cuzco, Peru, and La Paz, Bolivia, were carried out and analysed statistically. Large biases are identified in both investigation areas in the estimation of daily precipitation amounts. The occurrence of strong precipitation events was well assessed, but their intensities were underestimated. TMPA estimates for La Paz show a high false alarm ratio.

    The dependency of TMPA estimate quality on changing resolution was analysed by comparing 1-, 7-, 15- and 30-day sums for Cuzco, Peru. The correlation of TMPA estimates with ground data increases strongly and almost linearly with temporal aggregation. Spatial aggregation to 0.5°, 0.75° and 1° grid-box-averaged precipitation, and its comparison to gauge data for the same areas, revealed no significant change in correlation coefficients or estimate performance.

    In order to profit from the TMPA combination product on a daily basis, a procedure to blend it with daily precipitation gauge measurements is proposed.

    Different sources of errors and uncertainties introduced by the sensors, sensor
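
    The temporal-aggregation comparison reported above can be sketched as follows; the CSV file and column names are hypothetical placeholders for a paired daily satellite/gauge series.

```python
# Sketch of the temporal-aggregation comparison: correlate daily satellite estimates
# with gauge data, then repeat after summing both series over longer windows.
import pandas as pd

df = pd.read_csv("cuzco_daily.csv", parse_dates=["date"], index_col="date")
# assumed columns: 'tmpa_mm' (satellite estimate) and 'gauge_mm' (station measurement)

for window in (1, 7, 15, 30):
    agg = df[["tmpa_mm", "gauge_mm"]].resample(f"{window}D").sum()
    r = agg["tmpa_mm"].corr(agg["gauge_mm"])
    print(f"{window:>2}-day sums: r = {r:.2f}")
# Correlation typically rises with temporal aggregation, as reported above.
```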

  4. Tropical land use land cover mapping in Pará (Brazil) using discriminative Markov random fields and multi-temporal TerraSAR-X data

    Science.gov (United States)

    Hagensieker, Ron; Roscher, Ribana; Rosentreter, Johannes; Jakimow, Benjamin; Waske, Björn

    2017-12-01

    Remote sensing satellite data offer the unique possibility of mapping land use land cover transformations by providing spatially explicit information. However, detection of short-term processes and land use patterns of high spatial-temporal variability is a challenging task. We present a novel framework using multi-temporal TerraSAR-X data and machine learning techniques, namely discriminative Markov random fields with spatio-temporal priors and import vector machines, in order to advance the mapping of land cover characterized by short-term changes. Our study region covers a current deforestation frontier in the Brazilian state of Pará, with land cover dominated by primary forests, different types of pasture land and secondary vegetation, and land use dominated by short-term processes such as slash-and-burn activities. The data set comprises multi-temporal TerraSAR-X imagery acquired over the course of the 2014 dry season, as well as optical data (RapidEye, Landsat) for reference. Results show that land use land cover is reliably mapped, with spatially adjusted overall accuracies of up to 79% in a five-class setting, yet limitations remain for the differentiation of different pasture types. The proposed method is applicable to multi-temporal data sets and constitutes a feasible approach to map land use land cover in regions that are affected by high-frequency temporal changes.

  5. The MIND PALACE: A Multi-Spectral Imaging and Spectroscopy Database for Planetary Science

    Science.gov (United States)

    Eshelman, E.; Doloboff, I.; Hara, E. K.; Uckert, K.; Sapers, H. M.; Abbey, W.; Beegle, L. W.; Bhartia, R.

    2017-12-01

    The Multi-Instrument Database (MIND) is the web-based home to a well-characterized set of analytical data collected by a suite of deep-UV fluorescence/Raman instruments built at the Jet Propulsion Laboratory (JPL). Samples derive from a growing body of planetary surface analogs, mineral and microbial standards, meteorites, spacecraft materials, and other astrobiologically relevant materials. In addition to deep-UV spectroscopy, datasets stored in MIND are obtained from a variety of analytical techniques obtained over multiple spatial and spectral scales including electron microscopy, optical microscopy, infrared spectroscopy, X-ray fluorescence, and direct fluorescence imaging. Multivariate statistical analysis techniques, primarily Principal Component Analysis (PCA), are used to guide interpretation of these large multi-analytical spectral datasets. Spatial co-referencing of integrated spectral/visual maps is performed using QGIS (geographic information system software). Georeferencing techniques transform individual instrument data maps into a layered co-registered data cube for analysis across spectral and spatial scales. The body of data in MIND is intended to serve as a permanent, reliable, and expanding database of deep-UV spectroscopy datasets generated by this unique suite of JPL-based instruments on samples of broad planetary science interest.

  6. The method of separation for evolutionary spectral density estimation of multi-variate and multi-dimensional non-stationary stochastic processes

    KAUST Repository

    Schillinger, Dominik

    2013-07-01

    The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.

  7. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    International Nuclear Information System (INIS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-01-01

    We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it 'multi-tier'. The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing will be outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described, and the application characteristics of GUMS and VOMS that enable effective clustering will be explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.

  8. Predicting Volume and Biomass Change from Multi-Temporal Lidar Sampling and Remeasured Field Inventory Data in Panther Creek Watershed, Oregon, USA

    Directory of Open Access Journals (Sweden)

    Krishna P. Poudel

    2018-01-01

    Full Text Available Using lidar for large-scale forest management can improve operational and management decisions. Using multi-temporal lidar sampling and remeasured field inventory data collected from 78 plots in the Panther Creek Watershed, Oregon, USA, we evaluated the performance of different fixed and mixed models in estimating change in aboveground biomass (ΔAGB) and cubic volume including top and stump (ΔCVTS) over a five-year period. Actual values of CVTS and AGB were obtained using newly fitted volume and biomass equations or the equations used by the Pacific Northwest unit of the Forest Inventory and Analysis program. Estimates of change based on fixed and mixed-effect linear models were more accurate than change estimates based on differences in LIDAR-based estimates. This may have been due to the compounding of errors in LIDAR-based estimates over the two time periods. Models used to predict volume and biomass at a given time were, however, more precise than the models used to predict change. Models used to estimate ΔCVTS were not as accurate as the models employed to estimate ΔAGB. Final models had cross-validation root mean squared errors as low as 40.90% for ΔAGB and 54.36% for ΔCVTS.
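
    The two change-estimation strategies compared above (differencing two date-specific lidar predictions versus modelling change directly from the change in lidar metrics) can be sketched as below; the plot file and metric names are hypothetical stand-ins for the study's inventory data.

```python
# Sketch of the two change-estimation strategies: (a) difference two independent
# lidar-based AGB predictions at t1 and t2, versus (b) model change directly from
# the change in lidar metrics. Column names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

plots = pd.read_csv("panther_creek_plots.csv")
metrics = ["mean_height", "p95_height", "canopy_cover"]          # lidar predictors

# (a) indirect: fit AGB at each date, then difference the predictions
m1 = LinearRegression().fit(plots[[f"{m}_t1" for m in metrics]], plots["agb_t1"])
m2 = LinearRegression().fit(plots[[f"{m}_t2" for m in metrics]], plots["agb_t2"])
indirect = m2.predict(plots[[f"{m}_t2" for m in metrics]]) - \
           m1.predict(plots[[f"{m}_t1" for m in metrics]])

# (b) direct: regress observed change on the change in lidar metrics
d_metrics = plots[[f"{m}_t2" for m in metrics]].to_numpy() - \
            plots[[f"{m}_t1" for m in metrics]].to_numpy()
direct_model = LinearRegression().fit(d_metrics, plots["agb_t2"] - plots["agb_t1"])
direct = direct_model.predict(d_metrics)
# Cross-validated errors of approach (b) tended to be lower in the study summarized above.
```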

  9. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class)

  10. Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2018-01-01

    , exploiting both spatial and temporal information of the face to assess pain level, and second, incorporating multiple visual modalities to capture complementary face information related to pain. Most works in the literature focus on merely exploiting spatial information on chromatic (RGB) video data......PAIN)' database, for RGBDT pain level recognition in sequences. We provide first baseline results, including 5-level pain recognition, by analyzing independent visual modalities and their fusion with CNN and LSTM models. From the experimental evaluation we observe that fusion of modalities helps to enhance recognition performance of pain levels in comparison to isolated ones. In particular, the combination of RGB, D, and T in an early fusion fashion achieved the best recognition rate....

  11. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom

    2015-01-01

    In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method based on subspace techniques, which exploits the invariance property in the time domain, is first used to estimate the pitch frequencies of the multiple harmonic signals. Based on the estimated pitch frequencies, DOA estimates are then obtained with the ESPRIT method by using the shift invariance structure in the spatial domain. Compared to the existing state-of-the-art algorithms, the proposed method, which is based on ESPRIT without 2-D searching, is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed...
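
    As a self-contained illustration of the temporal-domain step, the sketch below applies standard ESPRIT to recover the frequencies of a noisy harmonic signal; the subspace dimension and signal parameters are illustrative, and the spatial DOA stage of the paper is not reproduced.

```python
# Minimal ESPRIT sketch (temporal shift invariance) for estimating the frequencies of
# a sum of complex sinusoids, as used here to recover harmonic (pitch) frequencies.
import numpy as np

def esprit_freqs(x, n_sinusoids, m=40):
    """Estimate normalized frequencies of n_sinusoids complex exponentials in x."""
    N = len(x)
    # Hankel data matrix whose column space spans the signal subspace
    H = np.array([x[i:i + m] for i in range(N - m + 1)]).T        # (m, N-m+1)
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :n_sinusoids]                                        # signal subspace
    # Shift invariance: Us[1:] ~= Us[:-1] @ Phi
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    return np.sort(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi))

# Example: a harmonic signal with fundamental 0.05 and 3 harmonics, in noise.
rng = np.random.default_rng(0)
n = np.arange(400)
x = sum(np.exp(2j * np.pi * 0.05 * (k + 1) * n) for k in range(3))
x = x + 0.1 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))
print(np.round(esprit_freqs(x, 3), 4))     # approximately [0.05, 0.10, 0.15]
```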

  12. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.

  13. Data-based depth estimation of an incoming autonomous underwater vehicle.

    Science.gov (United States)

    Yang, T C; Xu, Wen

    2016-10-01

    The data-based method for estimating the depth of a moving source is demonstrated experimentally for an incoming autonomous underwater vehicle traveling toward a vertical line array (VLA) of receivers at constant speed/depth. The method assumes no information on the sound-speed and bottom profile. Performing a wavenumber analysis of a narrowband signal for each hydrophone, the energy of the (modal) spectral peaks as a function of the receiver depth is used to estimate the depth of the source, traveling within the depth span of the VLA. This paper reviews the theory, discusses practical implementation issues, and presents the data analysis results.

  14. Data management and data analysis techniques in pharmacoepidemiological studies using a pre-planned multi-database approach

    DEFF Research Database (Denmark)

    Bazelier, Marloes T.; Eriksson, Irene; de Vries, Frank

    2015-01-01

    pharmacoepidemiological multi-database studies published from 2007 onwards that combined data for a pre-planned common analysis or quantitative synthesis. Information was retrieved about study characteristics, methods used for individual-level analyses and meta-analyses, data management and motivations for performing...... meta-analysis (27%), while a semi-aggregate approach was applied in three studies (14%). Information on central programming or heterogeneity assessment was missing in approximately half of the publications. Most studies were motivated by improving power (86%). CONCLUSIONS: Pharmacoepidemiological multi...

  15. Bridge Collapse Revealed By Multi-Temporal SAR Interferometry

    Science.gov (United States)

    Sousa, Joaquim; Bastos, Luisa

    2013-12-01

    On the night of March 4, 2001, the Hintze Ribeiro centennial Bridge, made of steel and concrete, collapsed in Entre-os-Rios (Northern Portugal), killing 59 people, including those in a bus and three cars that were attempting to reach the other side of the Douro River. It remains the most serious road accident in Portuguese history. In this work we do not intend to corroborate or contradict the official version of the causes of the accident, but only to demonstrate the potential of Multi-Temporal Interferometric (MTI-InSAR) techniques for the detection and monitoring of deformations in structures such as bridges, helping to prevent new catastrophic events. Based on the analysis of 57 ERS-1/2 images covering the period from December 1992 to the occurrence of the fatality, we were able to detect significant movements (up to 20 mm/yr) in the section of the bridge that fell into the Douro River, obvious signs of the bridge's instability.

  16. Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR

    Science.gov (United States)

    Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.

    2015-01-01

    Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
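
    The uncertainty-propagation loop described above can be sketched as follows; all numbers (grid size, bulk density, carbon fraction, LiDAR error) are illustrative placeholders rather than the study's values, and random surfaces stand in for the real elevation rasters.

```python
# Sketch of Monte Carlo propagation of LiDAR elevation error into a carbon-loss estimate:
# perturb pre- and post-fire elevations, recompute burned depth and carbon loss, repeat.
import numpy as np

rng = np.random.default_rng(42)
n_cells = 100_000                 # grid cells inside the burn perimeter (illustrative)
cell_area = 5.0 * 5.0             # m^2 per cell (assumed)
bulk_density = 0.12               # g/cm^3 of peat (assumed); x1000 converts to kg/m^3
carbon_fraction = 0.5             # carbon per unit dry mass (assumed)
lidar_sigma = 0.15                # vertical error of each LiDAR surface, m (assumed)

pre = rng.normal(10.0, 0.5, n_cells)                    # stand-in pre-fire elevations
post = pre - np.abs(rng.normal(0.47, 0.2, n_cells))     # stand-in post-fire elevations

losses = []
for _ in range(1000):
    d = (pre + rng.normal(0, lidar_sigma, n_cells)) - \
        (post + rng.normal(0, lidar_sigma, n_cells))                 # perturbed depth (m)
    mass_c = d * cell_area * bulk_density * 1000 * carbon_fraction   # kg C per cell
    losses.append(mass_c.sum() / 1e9)                                # Tg C
print(f"carbon loss: {np.mean(losses):.3f} +/- {np.std(losses):.5f} Tg C")
```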

  17. Contribution of multi-temporal remote sensing images to characterize landslide slip surface ‒ Application to the La Clapière landslide (France

    Directory of Open Access Journals (Sweden)

    B. Casson

    2005-01-01

    Full Text Available Landslide activity is partly controlled by the geometry of the slip surface. This activity is expressed at the surface by displacements and topographic variations. Consequently, multi-temporal remote sensing images can be used to characterize the geometry of a landslide slip surface and its spatial and temporal evolution. Differential Digital Elevation Models (DEMs) are obtained by subtracting two DEMs of different years. A method of multi-temporal image correlation allows displacement maps to be generated that can be interpreted in terms of velocity and direction of movement. These data are then used to qualitatively characterize the geometry of the slip surface of the La Clapière landslide (French Southern Alps). The distribution of displacement vectors and of topographic variations is consistent with a curved slip surface, characterizing a preferentially rotational behaviour of this landslide. On the other hand, a spatial and temporal evolution of the geometry of the slip surface is pointed out. Indeed, a propagation of the slip surface under the Iglière bar, in the western part of the landslide, is suspected and can be linked to the acceleration of the landslide in 1987. This study shows the high potential of multi-temporal remote sensing images for slip surface characterization. Although this method cannot replace in situ investigations, it can help to optimally locate geophysical profiles or boreholes on unstable areas.

  18. Multi-GNSS phase delay estimation and PPP ambiguity resolution: GPS, BDS, GLONASS, Galileo

    Science.gov (United States)

    Li, Xingxing; Li, Xin; Yuan, Yongqiang; Zhang, Keke; Zhang, Xiaohong; Wickert, Jens

    2018-06-01

    This paper focuses on precise point positioning (PPP) ambiguity resolution (AR) using observations acquired from four systems: GPS, BDS, GLONASS, and Galileo (GCRE). A GCRE four-system uncalibrated phase delay (UPD) estimation model and a multi-GNSS undifferenced PPP AR method were developed in order to utilize the observations from all systems. For UPD estimation, GCRE-combined PPP solutions of globally distributed MGEX and IGS stations are performed to obtain four-system float ambiguities, and the UPDs of GCRE satellites can then be precisely estimated from these ambiguities. The quality of the UPD products in terms of temporal stability and residual distributions is investigated for GPS, BDS, GLONASS, and Galileo satellites, respectively. The BDS satellite-induced code biases were corrected for GEO, IGSO, and MEO satellites before the UPD estimation. The UPD results of global and regional networks were also evaluated for Galileo and BDS, respectively. As a result of the frequency-division multiple-access strategy of GLONASS, the UPD estimation was performed using a network of homogeneous receivers including three commonly used GNSS receiver types (TRIMBLE NETR9, JAVAD TRE_G3TH DELTA, and LEICA). Data recorded at 140 MGEX and IGS stations over a 30-day period in January 2017 were used to validate the proposed GCRE UPD estimation and multi-GNSS dual-frequency PPP AR. Our results show that GCRE four-system PPP AR enables the fastest time to first fix (TTFF) solutions and the highest accuracy for all three coordinate components compared to the single- and dual-system cases. An average TTFF of 9.21 min with a 7° cutoff elevation angle can be achieved for GCRE PPP AR, which is much shorter than that of GPS (18.07 min), GR (12.10 min), GE (15.36 min) and GC (13.21 min). With an observation length of 10 min, the positioning accuracy of the GCRE fixed solution is 1.84, 1.11, and 1.53 cm, while the GPS-only result is 2.25, 1.29, and 9.73 cm for the east, north, and vertical components

  19. Utilization of accident databases and fuzzy sets to estimate frequency of HazMat transport accidents

    International Nuclear Information System (INIS)

    Qiao Yuanhua; Keren, Nir; Mannan, M. Sam

    2009-01-01

    Risk assessment and management of transportation of hazardous materials (HazMat) require the estimation of accident frequency. This paper presents a methodology to estimate hazardous materials transportation accident frequency by utilizing publicly available databases and expert knowledge. The estimation process addresses route-dependent and route-independent variables. Negative binomial regression is applied to an analysis of the Department of Public Safety (DPS) accident database to derive basic accident frequency as a function of route-dependent variables, while the effects of route-independent variables are modeled by fuzzy logic. The integrated methodology provides the basis for an overall transportation risk analysis, which can be used later to develop a decision support system.
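
    The route-dependent part of the approach can be sketched as a negative binomial regression of accident counts on route attributes; the CSV file, column names and exposure variable below are assumptions for illustration only, and the route-independent fuzzy-logic component is not reproduced.

```python
# Sketch: negative binomial regression of accident counts on route attributes,
# with exposure (e.g. truck-miles) entered as an offset. Column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm

routes = pd.read_csv("route_segments.csv")
# assumed columns: 'accidents', 'truck_miles', 'lanes', 'curvature', 'urban' (0/1)

X = sm.add_constant(routes[["lanes", "curvature", "urban"]])
model = sm.GLM(
    routes["accidents"],
    X,
    family=sm.families.NegativeBinomial(alpha=1.0),   # dispersion assumed, not fitted
    offset=np.log(routes["truck_miles"]),
).fit()
print(model.summary())
# Predicted base frequency for a segment = exp(X @ beta) * truck_miles; route-independent
# factors would then adjust this base rate via the paper's fuzzy-logic component.
```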

  20. An Unsupervised Method of Change Detection in Multi-Temporal PolSAR Data Using a Test Statistic and an Improved K&I Algorithm

    Directory of Open Access Journals (Sweden)

    Jinqi Zhao

    2017-12-01

    Full Text Available In recent years, multi-temporal imagery from spaceborne sensors has provided a fast and practical means for surveying and assessing changes in terrain surfaces. Owing to its all-weather imaging capability, polarimetric synthetic aperture radar (PolSAR) has become a key tool for change detection. Change detection methods include both unsupervised and supervised methods. Supervised change detection, which needs some human intervention, is generally ineffective and impractical. Due to this limitation, unsupervised methods are widely used in change detection. The traditional unsupervised methods only use part of the polarization information, and the required thresholding algorithms are independent of the multi-temporal data, which results in ineffective and inaccurate change detection maps. To solve these problems, a novel change detection method using a test statistic based on the likelihood ratio test and an improved Kittler and Illingworth (K&I) minimum-error thresholding algorithm is introduced in this paper. The test statistic is used to generate the comparison image (CI) of the multi-temporal PolSAR images, and the improved K&I algorithm, using a generalized Gaussian model, models the distribution of the CI. As a result of these advantages, we can obtain the change detection map using an optimum threshold. The efficiency of the proposed method is demonstrated by the use of multi-temporal PolSAR images acquired by RADARSAT-2 over Wuhan, China. The experimental results show that the proposed method is effective and highly accurate.
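
    A minimal sketch of minimum-error thresholding applied to a comparison image is given below; it uses the classical Gaussian Kittler-Illingworth criterion rather than the generalized Gaussian variant proposed in the paper, and the input data are synthetic.

```python
# Classical Kittler & Illingworth minimum-error thresholding on a comparison image
# (e.g. a likelihood-ratio test statistic). Synthetic data; Gaussian class model.
import numpy as np

def kittler_illingworth(ci, n_bins=256):
    """Return the threshold minimizing the K&I criterion for a 2-class Gaussian mixture."""
    hist, edges = np.histogram(ci.ravel(), bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_j = centers[0], np.inf
    for t in range(1, n_bins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        m1 = (p[:t] * centers[:t]).sum() / p1
        m2 = (p[t:] * centers[t:]).sum() / p2
        s1 = np.sqrt((p[:t] * (centers[:t] - m1) ** 2).sum() / p1) + 1e-9
        s2 = np.sqrt((p[t:] * (centers[t:] - m2) ** 2).sum() / p2) + 1e-9
        j = 1 + 2 * (p1 * np.log(s1) + p2 * np.log(s2)) - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, centers[t]
    return best_t

ci = np.concatenate([np.random.normal(0, 1, 90000), np.random.normal(5, 1.5, 10000)])
t = kittler_illingworth(ci)
change_mask = ci > t          # pixels above the threshold are flagged as changed
print(f"threshold ~ {t:.2f}, changed fraction ~ {change_mask.mean():.3f}")
```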

  1. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data

    Science.gov (United States)

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-01

    Because of the rapid economic growth in China, many regions are subject to severe particulate matter pollution. Thus, improving the methods used to determine the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China, were used as the experimental dataset. Based on the STSIS realizations, we assessed various types of mapping uncertainty, including single-location uncertainties over one day and over multiple days, and multi-location uncertainties over one day and over multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.

  2. Reclaimed mineland curve number response to temporal distribution of rainfall

    Science.gov (United States)

    Warner, R.C.; Agouridis, C.T.; Vingralek, P.T.; Fogle, A.W.

    2010-01-01

    The curve number (CN) method is a common technique to estimate runoff volume, and it is widely used in coal mining operations such as those in the Appalachian region of Kentucky. However, very little CN data are available for watersheds disturbed by surface mining and then reclaimed using traditional techniques. Furthermore, as the CN method does not readily account for variations in infiltration rates due to varying rainfall distributions, the selection of a single CN value to encompass all temporal rainfall distributions could lead engineers to substantially under- or over-size water detention structures used in mining operations or other land uses such as development. Using rainfall and runoff data from a surface coal mine located in the Cumberland Plateau of eastern Kentucky, CNs were computed for conventionally reclaimed lands. The effects of temporal rainfall distributions on CNs were also examined by classifying storms as intense, steady, multi-interval intense, or multi-interval steady. Results indicate that CNs for such reclaimed lands ranged from 62 to 94 with a mean value of 85. Temporal rainfall distributions were also shown to significantly affect CN values, with intense storms having significantly higher CNs than multi-interval storms. These results indicate that a period of recovery is present between rainfall bursts of a multi-interval storm that allows depressional storage and infiltration rates to rebound. © 2010 American Water Resources Association.
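
    For context, the standard SCS curve-number runoff relation that the study's CN values feed into is shown below; the storm depth used in the example is arbitrary and only illustrates how much the CN choice changes the estimated runoff.

```python
# Standard SCS/NRCS curve-number relation converting a rainfall depth into a runoff
# depth (English units: inches), illustrating the sensitivity of sizing decisions to CN.
def scs_runoff(p_inches: float, cn: float) -> float:
    """Runoff depth Q (inches) for rainfall P (inches) and curve number CN."""
    s = 1000.0 / cn - 10.0          # potential maximum retention
    ia = 0.2 * s                    # initial abstraction (conventional 0.2*S)
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# A 2-inch storm on reclaimed mineland: mean CN vs. the intense-storm end of the range.
print(round(scs_runoff(2.0, 85), 2))   # ~0.80 in
print(round(scs_runoff(2.0, 94), 2))   # ~1.40 in, nearly double the runoff
```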

  3. POTENTIAL OF MULTI-TEMPORAL OBLIQUE AIRBORNE IMAGERY FOR STRUCTURAL DAMAGE ASSESSMENT

    Directory of Open Access Journals (Sweden)

    A. Vetrivel

    2016-06-01

    Full Text Available Quick post-disaster actions demand automated, rapid and detailed building damage assessment. Among the available technologies, post-event oblique airborne images have already shown their potential for this task. However, existing methods usually compensate for the lack of pre-event information with aprioristic assumptions about building shapes and textures, which can lead to uncertainties and misdetections. Yet oblique images have already been captured over many cities of the world, and the exploitation of pre- and post-event data as inputs to damage assessment is readily feasible in urban areas. In this paper, we investigate the potential of multi-temporal oblique imagery for detailed damage assessment, focusing on two methodologies. The first method aims at detecting severe structural damage related to geometrical deformation by combining the complementary information provided by photogrammetric point clouds and oblique images. The developed method detected 87% of the damaged elements; the missed detections are due to varying noise levels within the point cloud, which hindered the recognition of some structural elements. We observed, in general, that façade regions are very noisy in the point clouds. To address this, we propose our second method, which aims to detect damage to building façades using the oriented oblique images. The results show that the proposed methodology can effectively differentiate among the three proposed categories (collapsed/highly damaged, lower levels of damage, and undamaged buildings) using a computationally lightweight approach. We describe the implementations of the above-mentioned methods in detail and present the promising results achieved using multi-temporal oblique imagery over the city of L'Aquila (Italy).

  4. A Streaming Algorithm for Online Estimation of Temporal and Spatial Extent of Delays

    Directory of Open Access Journals (Sweden)

    Kittipong Hiriotappa

    2017-01-01

    Full Text Available Knowing about traffic congestion and its impact on travel time in advance is vital for proactive travel planning as well as advanced traffic management. This paper proposes a streaming algorithm to estimate the temporal and spatial extent of delays online, which can be deployed with roadside sensors. First, the proposed algorithm uses streaming input from individual sensors to detect deviations from normal traffic patterns, referred to as anomalies, which are used as an early indication of delay occurrence. Then, a group of consecutive sensors that detect anomalies is used to estimate the temporal and spatial extent of the delay associated with the detected anomalies. Performance evaluations are conducted using a real-world data set collected by roadside sensors in Bangkok, Thailand, and the NGSIM data set collected in California, USA. Using the NGSIM data, it is shown qualitatively that the proposed algorithm can detect consecutive occurrences of shockwaves and estimate their associated delays. Then, using the data set from Thailand, it is shown quantitatively that the proposed algorithm can detect and estimate delays associated with both recurring congestion and incident-induced nonrecurring congestion. The proposed algorithm also outperforms the previously proposed streaming algorithm.
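
    A bare-bones sketch of the two stages (per-sensor anomaly flagging on a stream of speed readings, then grouping consecutive anomalous sensors into delay segments) follows; the EWMA factor, thresholds and initial values are assumptions, not the paper's parameters.

```python
# Sketch of the streaming idea: keep a running estimate of "normal" speed per sensor,
# flag strong negative deviations, then group consecutive flagged sensors into segments.
from dataclasses import dataclass

@dataclass
class SensorState:
    mean: float = 60.0      # running speed estimate (km/h), initial guess
    var: float = 100.0      # running variance estimate, initial guess
    anomalous: bool = False

def update(state: SensorState, speed: float, alpha: float = 0.05, k: float = 3.0) -> bool:
    """EWMA update; flag an anomaly when the reading sits k std devs below normal."""
    std = state.var ** 0.5
    state.anomalous = speed < state.mean - k * std
    if not state.anomalous:                      # only learn from non-anomalous samples
        state.mean += alpha * (speed - state.mean)
        state.var += alpha * ((speed - state.mean) ** 2 - state.var)
    return state.anomalous

def delayed_stretches(states):
    """Group consecutive anomalous sensors along the road into delay segments."""
    segments, start = [], None
    for i, s in enumerate(states):
        if s.anomalous and start is None:
            start = i
        elif not s.anomalous and start is not None:
            segments.append((start, i - 1)); start = None
    if start is not None:
        segments.append((start, len(states) - 1))
    return segments
```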

  5. Determination of the Impact of Urbanization on Agricultural Lands using Multi-temporal Satellite Sensor Images

    Science.gov (United States)

    Kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2015-12-01

    Throughout history, agricultural activities have been performed close to urban areas. The main reason behind this phenomenon is the need for fast marketing of agricultural production to urban residents and for financial provision. Thus, using areas near cities for agricultural activities brings the advantage of easy transportation of products and fast marketing. For decades, heavy migration to cities has directly and negatively affected natural grasslands, forests and agricultural lands. This pressure has caused agricultural lands to be converted into urban areas. Dense urbanization causes an increase in impervious surfaces, heat islands and many other problems in addition to the destruction of agricultural lands. Considering the negative impacts of urbanization on agricultural lands and natural resources, periodic monitoring of these changes becomes indisputably important. At this point, satellite images are known to be good data sources for land cover / use change monitoring, thanks to their fast data acquisition, large area coverage and temporal resolution. Classification of satellite images provides thematic land cover / use maps of the earth surface, and changes can be determined with GIS-based analysis of the multi-temporal maps. In this study, the effects of heavy urbanization on agricultural lands in Istanbul, a metropolitan city of Turkey, were investigated using multi-temporal Landsat TM satellite images acquired between 1984 and 2011. Images were geometrically registered to each other and classified using the supervised maximum likelihood classification algorithm. The resulting thematic maps were exported to a GIS environment, and the agricultural lands destroyed by urbanization were determined using spatial analysis.

  6. Multi-agent search for source localization in a turbulent medium

    International Nuclear Information System (INIS)

    Hajieghrary, Hadi; Hsieh, M. Ani; Schwartz, Ira B.

    2016-01-01

    We extend the gradient-less search strategy referred to as “infotaxis” to a distributed multi-agent system. “Infotaxis” is a search strategy that uses sporadic sensor measurements to determine the source location of materials dispersed in a turbulent medium. In this work, we leverage the spatio-temporal sensing capabilities of a team of mobile sensing agents to optimize the time spent finding and localizing the position of the source using a multi-agent collaborative search strategy. Our results suggest that the proposed multi-agent collaborative search strategy leverages the team's ability to obtain simultaneous measurements at different locations to speed up the search process. We present a multi-agent collaborative “infotaxis” strategy that uses the relative entropy of the system to synthesize a suitable search strategy for the team. The result is a collaborative information theoretic search strategy that results in control actions that maximize the information gained by the team, and improves estimates of the source position. - Highlights: • We extend the gradient-less infotaxis search strategy to a distributed multi-agent system. • Leveraging the spatio-temporal sensing capabilities of a team of mobile sensing agents speeds up the search process. • The resulting information theoretic search strategy maximizes the information gained and improves the estimate of the source position.

  7. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Shouguo Yang

    2015-12-01

    Full Text Available A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay samplings is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix, and by requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.

  8. Estimating Gross Primary Production in Cropland with High Spatial and Temporal Scale Remote Sensing Data

    Science.gov (United States)

    Lin, S.; Li, J.; Liu, Q.

    2018-04-01

    Satellite remote sensing data provide spatially continuous and temporally repetitive observations of land surfaces, and they have become increasingly important for monitoring vegetation photosynthetic dynamics over large regions. However, remote sensing data have limitations of spatial and temporal scale: for example, higher-spatial-resolution data such as Landsat have 30-m spatial resolution but a 16-day revisit period, while high-temporal-frequency data such as geostationary observations have a 30-minute imaging period but lower spatial resolution (> 1 km). The objective of this study is to investigate whether combining high spatial and high temporal resolution remote sensing data can improve the accuracy of gross primary production (GPP) estimation in cropland. For this analysis we used three years (2010 to 2012) of Landsat-based NDVI data, the MOD13 vegetation index product and Geostationary Operational Environmental Satellite (GOES) geostationary data as input parameters to estimate GPP in a small cropland region of Nebraska, US. We then validated the remote-sensing-based GPP against in-situ carbon flux measurements. Results showed that: 1) the overall correlation between the GOES visible band and in-situ measured photosynthetically active radiation (PAR) is about 50% (R2 = 0.52), and the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis data can explain 64% of the PAR variance (R2 = 0.64); 2) estimating GPP with Landsat 30-m spatial resolution data and ERA daily meteorology data has the highest accuracy (R2 = 0.85), higher than when the MODIS 1-km NDVI/EVI product is used as input; 3) using daily meteorology data as input for GPP estimation with high spatial resolution data is more relevant than 8-day and 16-day inputs. Generally speaking, using high spatial resolution and high frequency satellite-based remote sensing data can improve GPP estimation accuracy in cropland.

  9. Geo-Parcel Based Crop Identification by Integrating High Spatial-Temporal Resolution Imagery from Multi-Source Satellite Data

    Directory of Open Access Journals (Sweden)

    Yingpin Yang

    2017-12-01

    Full Text Available Geo-parcel based crop identification plays an important role in precision agriculture. It meets the needs of refined farmland management. This study presents an improved procedure for geo-parcel based crop identification that combines fine-resolution images and multi-source medium-resolution images. GF-2 images with a fine spatial resolution of 0.8 m provided the agricultural plot boundaries, and GF-1 (16 m) and Landsat 8 OLI data were used to construct the geo-parcel based enhanced vegetation index (EVI) time series. In this study, we propose a piecewise EVI time-series smoothing method to fit irregular time profiles, especially for crop rotation situations. Global EVI time series were divided into several temporal segments, from which phenological metrics could be derived. This method was applied to Lixian, where crop rotation, the practice of growing different types of crops in the same plot in sequenced seasons, is common. After the collection of phenological features and multi-temporal spectral information, Random Forest (RF) classification was performed to classify crop types, and the overall accuracy was 93.27%. Moreover, an analysis of feature significance showed that phenological features were of greater importance for distinguishing agricultural land cover than temporal spectral information. The identification results indicate that the integration of high spatial-temporal resolution imagery is promising for geo-parcel based crop identification and that the newly proposed smoothing method is effective.
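
    A sketch of the final per-parcel classification and feature-importance analysis is given below; the file and feature names are hypothetical stand-ins for the phenological metrics and per-date EVI values described above.

```python
# Sketch: Random Forest over per-parcel features combining phenological metrics derived
# from smoothed EVI segments with multi-temporal spectral values. Names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

parcels = pd.read_csv("lixian_parcels.csv")
pheno = ["sos_doy", "eos_doy", "peak_evi", "evi_amplitude"]      # phenological metrics
spectral = [c for c in parcels.columns if c.startswith("evi_t")] # per-date EVI values

X = parcels[pheno + spectral]
y = parcels["crop_type"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", round(accuracy_score(y_te, rf.predict(X_te)), 4))

# Feature-importance analysis analogous to the significance analysis reported above
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))
```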

  10. LCDs are better: psychophysical and photometric estimates of the temporal characteristics of CRT and LCD monitors.

    Science.gov (United States)

    Lagroix, Hayley E P; Yanko, Matthew R; Spalek, Thomas M

    2012-07-01

    Many cognitive and perceptual phenomena, such as iconic memory and temporal integration, require brief displays. A critical requirement is that the image not remain visible after its offset. It is commonly believed that liquid crystal displays (LCD) are unsuitable because of their poor temporal response characteristics relative to cathode-ray-tube (CRT) screens. Remarkably, no psychophysical estimates of visible persistence are available to verify this belief. A series of experiments in which white stimuli on a black background produced discernible persistence on CRT but not on LCD screens, during both dark- and light-adapted viewing, falsified this belief. Similar estimates using black stimuli on a white background produced no visible persistence on either screen. That said, photometric measurements are available that seem to confirm the poor temporal characteristics of LCD screens, but they were obtained before recent advances in LCD technology. Using current LCD screens, we obtained photometric estimates of rise time far shorter (1-6 ms) than earlier estimates (20-150 ms), and approaching those of CRTs (<1 ms). We conclude that LCDs are preferable to CRTs when visible persistence is a concern, except when black-on-white displays are used.

  11. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability

  12. Risk estimates for hip fracture from clinical and densitometric variables and impact of database selection in Lebanese subjects.

    Science.gov (United States)

    Badra, Mohammad; Mehio-Sibai, Abla; Zeki Al-Hazzouri, Adina; Abou Naja, Hala; Baliki, Ghassan; Salamoun, Mariana; Afeiche, Nadim; Baddoura, Omar; Bulos, Suhayl; Haidar, Rachid; Lakkis, Suhayl; Musharrafieh, Ramzi; Nsouli, Afif; Taha, Assaad; Tayim, Ahmad; El-Hajj Fuleihan, Ghada

    2009-01-01

    Bone mineral density (BMD) and fracture incidence vary greatly worldwide. The data, if any, on clinical and densitometric characteristics of patients with hip fractures from the Middle East are scarce. The objective of the study was to define risk estimates from clinical and densitometric variables and the impact of database selection on such estimates. Clinical and densitometric information were obtained in 60 hip fracture patients and 90 controls. Hip fracture subjects were 74 yr (9.4) old, were significantly taller, lighter, and more likely to be taking anxiolytics and sleeping pills than controls. National Health and Nutrition Examination Survey (NHANES) database selection resulted in a higher sensitivity and almost equal specificity in identifying patients with a hip fracture compared with the Lebanese database. The odds ratio (OR) and its confidence interval (CI) for hip fracture per standard deviation (SD) decrease in total hip BMD was 2.1 (1.45-3.05) with the NHANES database, and 2.11 (1.36-2.37) when adjusted for age and body mass index (BMI). Risk estimates were higher in male compared with female subjects. In Lebanese subjects, BMD- and BMI-derived hip fracture risk estimates are comparable to western standards. The study validates the universal use of the NHANES database, and the applicability of BMD- and BMI-derived fracture risk estimates in the World Health Organization (WHO) global fracture risk model, to the Lebanese.
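
    For illustration only (not the authors' analysis): the sketch below shows how an odds ratio per standard-deviation decrease in total hip BMD, adjusted for age and BMI, is typically obtained from case-control data with logistic regression. The input file and column names are hypothetical.

    ```python
    # Hypothetical case-control table with columns: fracture (0/1), total_hip_bmd, age, bmi.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("hip_fracture_cases_controls.csv")   # hypothetical file

    # Express BMD as SD decrements so that exp(coefficient) is the OR per SD decrease.
    df["bmd_sd_decrease"] = -(df["total_hip_bmd"] - df["total_hip_bmd"].mean()) / df["total_hip_bmd"].std()

    X = sm.add_constant(df[["bmd_sd_decrease", "age", "bmi"]])
    model = sm.Logit(df["fracture"], X).fit(disp=False)

    or_per_sd = np.exp(model.params["bmd_sd_decrease"])
    ci_low, ci_high = np.exp(model.conf_int().loc["bmd_sd_decrease"])
    print(f"OR per SD decrease in BMD: {or_per_sd:.2f} ({ci_low:.2f}-{ci_high:.2f})")
    ```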

  13. Estimation of Vegetable Crop Parameter by Multi-temporal UAV-Borne Images

    Directory of Open Access Journals (Sweden)

    Thomas Moeckel

    2018-05-01

    Full Text Available 3D point cloud analysis of imagery collected by unmanned aerial vehicles (UAV) has been shown to be a valuable tool for estimation of crop phenotypic traits, such as plant height, in several species. Spatial information about these phenotypic traits can be used to derive information about other important crop characteristics, like fresh biomass yield, which could not be derived directly from the point clouds. Previous approaches have often only considered single date measurements using a single point cloud derived metric for the respective trait. Furthermore, most of the studies focused on plant species with a homogenous canopy surface. The aim of this study was to assess the applicability of UAV imagery for capturing crop height information of three vegetable crops (eggplant, tomato, and cabbage) with a complex vegetation canopy surface during a complete crop growth cycle to infer biomass. Additionally, the effect of crop development stage on the relationship between estimated crop height and field measured crop height was examined. Our study was conducted in an experimental layout at the University of Agricultural Science in Bengaluru, India. For all the crops, the crop height and the biomass were measured at five dates during one crop growth cycle between February and May 2017 (average crop height was 42.5, 35.5, and 16.0 cm for eggplant, tomato, and cabbage). Using a structure from motion approach, a 3D point cloud was created for each crop and sampling date. In total, 14 crop height metrics were extracted from the point clouds. Machine learning methods were used to create prediction models for vegetable crop height. The study demonstrates that the monitoring of crop height using an UAV during an entire growing period results in detailed and precise estimates of crop height and biomass for all three crops (R2 ranging from 0.87 to 0.97, bias ranging from −0.66 to 0.45 cm). The effect of crop development stage on the predicted crop height was
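
    A rough sketch of the modelling step, assuming the point-cloud height metrics have already been extracted per plot and sampling date; the file and metric names are invented for illustration.

    ```python
    # Predict fresh biomass from point-cloud height metrics with a Random Forest regressor.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import cross_val_predict

    plots = pd.read_csv("uav_point_cloud_metrics.csv")     # hypothetical file
    metrics = ["h_max", "h_mean", "h_p90", "h_std"]        # assumed subset of the 14 metrics

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    pred = cross_val_predict(model, plots[metrics], plots["fresh_biomass"], cv=5)
    print("cross-validated R2:", round(r2_score(plots["fresh_biomass"], pred), 2))
    ```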

  14. Multi-view 3D Human Pose Estimation in Complex Environment

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrila, D.M.

    2012-01-01

    We introduce a framework for unconstrained 3D human upper body pose estimation from multiple camera views in a complex environment. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model texture adaptation. Single-frame pose recovery

  15. How well do the GCMs/RCMs capture the multi-scale temporal variability of precipitation in the Southwestern United States?

    Science.gov (United States)

    Jiang, Peng; Gautam, Mahesh R.; Zhu, Jianting; Yu, Zhongbo

    2013-02-01

    Multi-scale temporal variability of precipitation has an established relationship with floods and droughts. In this paper, we present diagnostics of the ability of 16 General Circulation Models (GCMs) from Bias Corrected and Downscaled (BCSD) World Climate Research Program's (WCRP's) Coupled Model Inter-comparison Project Phase 3 (CMIP3) projections and 10 Regional Climate Models (RCMs) that participated in the North American Regional Climate Change Assessment Program (NARCCAP) to represent multi-scale temporal variability determined from the observed station data. Four regions (Los Angeles, Las Vegas, Tucson, and Cimarron) in the Southwest United States are selected as they represent four different precipitation regions classified by a clustering method. We investigate how storm properties and seasonal, inter-annual, and decadal precipitation variabilities differed between GCMs/RCMs and observed records in these regions. We find that current GCMs/RCMs tend to simulate longer storm duration and lower storm intensity compared to those from observed records. Most GCMs/RCMs fail to produce the high-intensity summer storms caused by local convective heat transport associated with the summer monsoon. Both inter-annual and decadal bands are present in the GCM/RCM-simulated precipitation time series; however, these do not line up with the patterns of large-scale ocean oscillations such as El Nino/La Nina Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO). Our results show that the studied GCMs/RCMs can capture the long-term monthly mean, as the examined data are bias-corrected and downscaled, but fail to simulate the multi-scale precipitation variability including flood-generating extreme events, which suggests their inadequacy for studies on floods and droughts that are strongly associated with multi-scale temporal precipitation variability.

  16. The Amma-Sat Database

    Science.gov (United States)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    The African Monsoon Multidisciplinary Analysis project is a French initiative which aims at identifying and analysing in detail the multidisciplinary and multi-scale processes that lead to a better understanding of the physical mechanisms linked to the African Monsoon. The main components of the African Monsoon are: Atmospheric Dynamics, the Continental Water Cycle, Atmospheric Chemistry, and Oceanic and Continental Surface Conditions. Satellites contribute to various objectives of the project, both for process analysis and for large-scale, long-term studies: some series of satellites (METEOSAT, NOAA, ...) have been flown for more than 20 years, ensuring good-quality monitoring of some of the West African atmosphere and surface characteristics. Moreover, several recent missions and several projects will strongly improve and complement this survey. The AMMA project offers an opportunity to develop the exploitation of satellite data and to foster collaboration between specialist and non-specialist users. For this purpose, databases are being developed to collect all past and future satellite data related to the African Monsoon. It will then be possible to compare different types of data at different resolutions, and to validate satellite data with in situ measurements or numerical simulations. The main goal of the AMMA-SAT database is to offer easy access to satellite data to the AMMA scientific community. The database contains geophysical products estimated from operational or research algorithms and covering the different components of the AMMA project. Nevertheless, the choice has been made to group data by pertinent scales rather than by theme. For this purpose, five regions of interest were defined to extract the data: an area covering the Tropical Atlantic and Africa for large-scale studies, an area covering West Africa for mesoscale studies, and three local areas surrounding sites of in situ observations. Within each of these regions satellite data are projected on

  17. Data management and data analysis techniques in pharmacoepidemiological studies using a pre-planned multi-database approach: a systematic literature review.

    Science.gov (United States)

    Bazelier, Marloes T; Eriksson, Irene; de Vries, Frank; Schmidt, Marjanka K; Raitanen, Jani; Haukka, Jari; Starup-Linde, Jakob; De Bruin, Marie L; Andersen, Morten

    2015-09-01

    To identify pharmacoepidemiological multi-database studies and to describe data management and data analysis techniques used for combining data. Systematic literature searches were conducted in PubMed and Embase complemented by a manual literature search. We included pharmacoepidemiological multi-database studies published from 2007 onwards that combined data for a pre-planned common analysis or quantitative synthesis. Information was retrieved about study characteristics, methods used for individual-level analyses and meta-analyses, data management and motivations for performing the study. We found 3083 articles by the systematic searches and an additional 176 by the manual search. After full-text screening of 75 articles, 22 were selected for final inclusion. The number of databases used per study ranged from 2 to 17 (median = 4.0). Most studies used a cohort design (82%) instead of a case-control design (18%). Logistic regression was most often used for individual-level analyses (41%), followed by Cox regression (23%) and Poisson regression (14%). As meta-analysis method, a majority of the studies combined individual patient data (73%). Six studies performed an aggregate meta-analysis (27%), while a semi-aggregate approach was applied in three studies (14%). Information on central programming or heterogeneity assessment was missing in approximately half of the publications. Most studies were motivated by improving power (86%). Pharmacoepidemiological multi-database studies are a well-powered strategy to address safety issues and have increased in popularity. To be able to correctly interpret the results of these studies, it is important to systematically report on database management and analysis techniques, including central programming and heterogeneity testing. © 2015 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd.

  18. Fusion of multi-temporal Airborne Snow Observatory (ASO) lidar data for mountainous vegetation ecosystems studies.

    Science.gov (United States)

    Ferraz, A.; Painter, T. H.; Saatchi, S.; Bormann, K. J.

    2016-12-01

    The NASA Jet Propulsion Laboratory developed the Airborne Snow Observatory (ASO), a coupled scanning lidar system and imaging spectrometer, to quantify the spatial distribution of snow volume and dynamics over mountain watersheds (Painter et al., 2015). To do this, ASO flies weekly over mountainous areas during the snowfall and snowmelt seasons. Additional flights are carried out in snow-off conditions to calculate Digital Terrain Models (DTMs). In this study, we focus on the reliability of ASO lidar data to characterize the 3D forest vegetation structure. The density of a single point cloud acquisition is nearly 1 pt/m2, which is not optimal to properly characterize vegetation. However, ASO covers a given study site up to 14 times a year, which enables computing a high-resolution point cloud by merging single acquisitions. In this study, we present a method to automatically register ASO multi-temporal lidar 3D point clouds. Although flight specifications do not change between acquisition dates, lidar datasets might have significant planimetric shifts due to inaccuracies in platform trajectory estimation introduced by the GPS system and drifts of the IMU. There are a large number of methodologies that address the problem of 3D data registration (Gressin et al., 2013). Briefly, they look for common primitive features in both datasets such as building corners, structures like electric poles, DTM breaklines or deformations. However, they are not suited for our experiment. First, single-acquisition point clouds have a low density that makes the extraction of primitive features difficult. Second, the landscape significantly changes between flights due to snowfall and snowmelt. Therefore, we developed a method to automatically register point clouds using tree apexes as keypoints because they are features that are supposed to experience little change

  19. Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes

    Energy Technology Data Exchange (ETDEWEB)

    Qi, Junjian; Ju, Wenyun; Sun, Kai

    2016-01-01

    In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when no data on them are available.
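
    A toy illustration of the criticality check mentioned in this record: the largest eigenvalue of the offspring mean matrix of a multi-type branching process indicates whether outage propagation is sub-critical or super-critical. The matrix values below are made up.

    ```python
    import numpy as np

    # offspring_mean[i, j]: expected number of type-j "children" generated by one
    # type-i "parent" (types: line outage, load shed, isolated bus). Values are illustrative.
    offspring_mean = np.array([
        [0.60, 0.20, 0.10],
        [0.05, 0.40, 0.05],
        [0.10, 0.10, 0.30],
    ])

    largest_eig = max(abs(np.linalg.eigvals(offspring_mean)))
    regime = "sub-critical" if largest_eig < 1 else "super-critical"
    print(f"largest eigenvalue: {largest_eig:.3f} ({regime})")
    ```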

  20. A Robust and Multi-Weighted Approach to Estimating Topographically Correlated Tropospheric Delays in Radar Interferograms

    Directory of Open Access Journals (Sweden)

    Bangyan Zhu

    2016-07-01

    Full Text Available Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with those measured from GPS.
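
    A minimal sketch of the per-block estimation idea on synthetic data: the phase-versus-elevation slope is fitted with a robust (soft-L1) loss so that a localized deforming patch does not bias the tropospheric ratio. This is a generic robust fit, not the authors' exact multi-weighting scheme.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    elevation = rng.uniform(0, 2000, 500)                  # m, synthetic block
    true_ratio = 0.002                                     # rad per m (assumed)
    phase = true_ratio * elevation + 0.3 * rng.standard_normal(500)
    phase[:25] += 2.0                                      # simulated deforming patch (outliers)

    def residuals(params, z, phi):
        ratio, offset = params
        return ratio * z + offset - phi

    fit = least_squares(residuals, x0=[0.0, 0.0], loss="soft_l1", f_scale=0.5,
                        args=(elevation, phase))
    print("estimated phase-elevation ratio:", round(fit.x[0], 4), "rad/m")
    ```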

  1. Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data

    International Nuclear Information System (INIS)

    Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M

    2006-01-01

    The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
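
    A small sketch of the proposed covariance structure under assumed values: a diagonal spatial covariance (per-sensor variances) combined with a Toeplitz temporal covariance through a Kronecker product.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    n_sensors, n_samples = 5, 8
    sensor_var = np.linspace(1.0, 2.0, n_sensors)           # assumed per-sensor variances
    spatial_cov = np.diag(sensor_var)                        # diagonal spatial covariance

    lag_corr = 0.8 ** np.arange(n_samples)                   # assumed AR(1)-like autocorrelation
    temporal_cov = toeplitz(lag_corr)                        # Toeplitz temporal covariance

    # Full spatiotemporal covariance, with channels ordered sensor-major.
    spatiotemporal_cov = np.kron(spatial_cov, temporal_cov)
    print(spatiotemporal_cov.shape)                          # (40, 40)
    ```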

  2. Evaluating the impact of spatio-temporal smoothness constraints on the BOLD hemodynamic response function estimation: an analysis based on Tikhonov regularization

    International Nuclear Information System (INIS)

    Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A

    2009-01-01

    Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information while the regularization parameter was selected based on the generalized cross-validation function (GCV). Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step as usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of the Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
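
    A toy sketch of Tikhonov regularization with a second-derivative (temporal smoothness) penalty and GCV-based selection of the regularization parameter. The design matrix and time course are synthetic stand-ins, not fMRI data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 30
    X = rng.standard_normal((n, p))                          # stand-in design matrix
    true_hrf = np.exp(-0.5 * ((np.arange(p) - 6) / 2.0) ** 2)
    y = X @ true_hrf + 0.5 * rng.standard_normal(n)

    D = np.diff(np.eye(p), n=2, axis=0)                      # second-difference (smoothness) operator

    def gcv_score(lam):
        A = X.T @ X + lam * D.T @ D
        H = X @ np.linalg.solve(A, X.T)                      # hat matrix for this lambda
        resid = y - H @ y
        return (resid @ resid) / (n - np.trace(H)) ** 2

    lams = np.logspace(-3, 3, 25)
    best_lam = lams[np.argmin([gcv_score(l) for l in lams])]
    hrf_hat = np.linalg.solve(X.T @ X + best_lam * D.T @ D, X.T @ y)
    print("lambda selected by GCV:", best_lam)
    ```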

  3. A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

    and space. This paper presents a new estimator (STC-MLE), which incorporates the correlation property. It is an expansion of the maximum likelihood estimator (MLE) developed by Ferrara et al. With the MLE a cross-correlation analysis between consecutive RF-lines on complex form is carried out for a range...... of possible velocities. In the new estimator an additional similarity investigation for each evaluated velocity and the available velocity estimates in a temporal (between frames) and spatial (within frames) neighborhood is performed. An a priori probability density term in the distribution...... of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...

  4. Multi-temporal maps of the Montaguto earth flow in southern Italy from 1954 to 2010

    Science.gov (United States)

    Guerriero, Luigi; Revellino, Paola; Coe, Jeffrey A.; Focareta, Mariano; Grelle, Gerardo; Albanese, Vincenzo; Corazza, Angelo; Guadagno, Francesco M.

    2013-01-01

    Historical movement of the Montaguto earth flow in southern Italy has periodically destroyed residences and farmland, and damaged the Italian National Road SS90 and the Benevento-Foggia National Railway. This paper provides maps from an investigation into the evolution of the Montaguto earth flow from 1954 to 2010. We used aerial photos, topographic maps, LiDAR data, satellite images, and field observations to produce multi-temporal maps. The maps show the spatial and temporal distribution of back-tilted surfaces, flank ridges, and normal, thrust, and strike-slip faults. Springs, creeks, and ponds are also shown on the maps. The maps provide a basis for interpreting how basal and lateral boundary geometries influence earth-flow behavior and surface-water hydrology.

  5. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    Science.gov (United States)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on or are justified by competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors

  6. Monitoring Ground Deformation of Subway Area during the Construction Based on the Method of Multi-Temporal Coherent Targets Analysis

    Science.gov (United States)

    Zhang, L.; Wu, J.; Zhao, J.; Yuan, M.

    2018-04-01

    Multi-temporal coherent targets analysis is a high-precision and high-spatial-resolution monitoring method for urban surface deformation based on Differential Synthetic Aperture Radar (DInSAR), and has been successfully applied to measure land subsidence, landslides and strain accumulation caused by fault movement. In this paper, multi-temporal coherent targets analysis is used to study the settlement of a subway area during the period of subway construction. The eastern extension of Shanghai Metro Line 2 is taken as an example. It starts from Longyang Road and ends at Pudong Airport, running 29.9 kilometers from east to west, and it is a key transportation line to the Pudong Airport. 17 PALSAR images acquired between 2007 and 2010 are used to analyze and invert the settlement of the buildings near the subway based on multi-temporal coherent targets analysis. Three significant deformation areas are found near Line 2 between 2007 and 2010, with a maximum subsidence rate of up to 30 mm/y in LOS. The settlement near the Longyang Road station and Chuansha Town is caused by new construction and urban expansion. The coastal dikes suffer from heavy settlement, with a rate of up to -30 mm/y. In general, the area close to the subway line is relatively stable during the construction period.

  7. Multi-location model for the estimation of the horizontal daily diffuse fraction of solar radiation in Europe

    International Nuclear Information System (INIS)

    Bortolini, Marco; Gamberi, Mauro; Graziani, Alessandro; Manzini, Riccardo; Mora, Cristina

    2013-01-01

    Highlights: ► A multi-location model to estimate solar radiation components is proposed. ► Proposed model joins solar radiation data from several weather stations. ► Clearness index is correlated to the diffuse component through analytic functions. ► Third degree polynomial function best fits data for annual and seasonal scenarios. ► A quality control procedure and independent datasets strengthen model performance. - Abstract: Hourly and daily solar radiation data are crucial for the design of energy systems based on the solar source. Global irradiance, measured on the horizontal plane, is generally available from weather station databases. The direct and diffuse fractions are rarely measured and should be analytically calculated for many geographical locations. The aim of this paper is to present a multi-location model to estimate the expected profiles of the horizontal daily diffuse component of solar radiation. It focuses on the European (EU) geographical area, joining data from 44 weather stations located in 11 countries. Data are collected by the World Radiation Data Centre (WRDC) between 2004 and 2007. Different analytic functions, correlating the daily diffuse fraction of solar radiation to the clearness index, are calculated and compared to outline the analytic expressions of the best fitting curves. The effect of seasonality on solar irradiance is considered by developing summer and winter scenarios together with annual models. Similarities among the trends for the 4 years are further discussed. The most commonly adopted statistical indices are used as key performance factors. Finally, data from three locations not included in the dataset considered for model development allow the proposed approach to be tested against an independent dataset. Obtained results show the effectiveness of adopting a multi-location approach to estimate solar radiation components on the horizontal surface instead of developing several single-location models. This is due to the increase
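
    A minimal sketch of the curve-fitting step, assuming daily clearness-index and diffuse-fraction series are available as plain-text files (hypothetical names): a third-degree polynomial is fitted, as in the best-fitting curves reported above.

    ```python
    import numpy as np

    clearness_index = np.loadtxt("kt_daily.txt")         # hypothetical daily Kt values
    diffuse_fraction = np.loadtxt("kd_daily.txt")        # hypothetical daily Kd values

    coeffs = np.polyfit(clearness_index, diffuse_fraction, deg=3)
    poly = np.poly1d(coeffs)
    print("fitted coefficients (highest degree first):", coeffs)
    print("predicted diffuse fraction at Kt = 0.5:", poly(0.5))
    ```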

  8. ESTIMATING GROSS PRIMARY PRODUCTION IN CROPLAND WITH HIGH SPATIAL AND TEMPORAL SCALE REMOTE SENSING DATA

    Directory of Open Access Journals (Sweden)

    S. Lin

    2018-04-01

    Full Text Available Satellite remote sensing data provide spatially continuous and temporally repetitive observations of land surfaces, and they have become increasingly important for monitoring vegetation photosynthetic dynamics over large regions. However, remote sensing data are limited in spatial and temporal scale: higher spatial resolution data such as Landsat offer 30-m resolution but only a 16-day revisit period, while high temporal resolution data such as geostationary imagery provide a 30-minute imaging interval but coarser spatial resolution (> 1 km). The objective of this study is to investigate whether combining high spatial and high temporal resolution remote sensing data can improve the accuracy of gross primary production (GPP) estimation in cropland. For this analysis we used three years (2010 to 2012) of Landsat-based NDVI data, the MOD13 vegetation index product and Geostationary Operational Environmental Satellite (GOES) data as input parameters to estimate GPP in a small cropland region of Nebraska, US. We then validated the remote sensing based GPP against in-situ carbon flux measurements. Results showed that: 1) the overall correlation between the GOES visible band and in-situ measured photosynthetically active radiation (PAR) is about 50 % (R2 = 0.52), and the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis data can explain 64 % of the PAR variance (R2 = 0.64); 2) estimating GPP with Landsat 30-m spatial resolution data and ERA daily meteorology data has the highest accuracy (R2 = 0.85, RMSE < 3 gC/m2/day), performing better than using the MODIS 1-km NDVI/EVI product as input; 3) using daily meteorology data as input for GPP estimation with high spatial resolution data yields better results than 8-day and 16-day inputs. Generally speaking, using high spatial resolution and high frequency satellite remote sensing data can improve GPP estimation accuracy in cropland.
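
    As a hedged illustration of how such inputs are typically combined, the sketch below uses a light-use-efficiency style calculation (vegetation-index-derived fPAR multiplied by PAR and an efficiency term). The coefficients are placeholders; the exact model used in the study is not reproduced here.

    ```python
    import numpy as np

    ndvi = np.array([0.35, 0.55, 0.72, 0.80])         # Landsat-derived NDVI (example values)
    par = np.array([7.5, 9.0, 10.2, 11.0])            # daily PAR, MJ/m2/day (example values)

    # fPAR approximated as a linear function of NDVI (placeholder coefficients).
    fpar = np.clip(1.24 * ndvi - 0.168, 0.0, 1.0)

    epsilon_max = 2.2                                  # gC/MJ, assumed maximum LUE for cropland
    temperature_scalar = 0.9                           # assumed meteorological down-regulation

    gpp = epsilon_max * temperature_scalar * fpar * par    # gC/m2/day
    print("daily GPP estimates:", np.round(gpp, 2))
    ```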

  9. Improving the effectiveness of real-time flood forecasting through Predictive Uncertainty estimation: the multi-temporal approach

    Science.gov (United States)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Todini, Ezio

    2015-04-01

    The negative effects of severe flood events are usually countered through structural measures that, however, do not fully eliminate flood risk. Non-structural measures, such as real-time flood forecasting and warning, are also required. Accurate future stage/discharge predictions with an appropriate forecast lead-time are sought by decision-makers for implementing strategies to mitigate the adverse effects of floods. Traditionally, flood forecasting has been approached by using rainfall-runoff and/or flood routing modelling. Neither type of forecast, however, can be considered to perfectly represent future outcomes because of the lack of complete knowledge of the processes involved (Todini, 2004). Nonetheless, although aware that model forecasts do not perfectly represent future outcomes, decision makers de facto implicitly treat the forecast of water level/discharge/volume, etc. as "deterministic" and coinciding with what is going to occur. Recently the concept of Predictive Uncertainty (PU) was introduced in hydrology (Krzysztofowicz, 1999), and several uncertainty processors were developed (Todini, 2008). PU is defined as the probability of occurrence of the future realization of a predictand (water level/discharge/volume) conditional on: i) prior observations and knowledge, and ii) the available information on the future value, typically provided by one or more forecast models. Unfortunately, PU has frequently been interpreted as a measure of lack of accuracy rather than as the appropriate tool for taking the most appropriate decisions given a model's or several models' forecasts. With the aim of shedding light on the benefits of appropriately using PU, a multi-temporal approach based on the MCP approach (Todini, 2008; Coccia and Todini, 2011) is here applied to stage forecasts at sites along the Upper Tiber River. Specifically, the STAge Forecasting-Rating Curve Model Muskingum-based (STAFOM-RCM) (Barbetta et al., 2014) along with the Rating

  10. The Estimation of Gestational Age at Birth in Database Studies.

    Science.gov (United States)

    Eberg, Maria; Platt, Robert W; Filion, Kristian B

    2017-11-01

    Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.

  11. Multi-wavelength and multi-colour temporal and spatial optical solitons

    DEFF Research Database (Denmark)

    Kivshar, Y. S.; Sukhorukov, A. A.; Ostrovskaya, E. A.

    2000-01-01

    We present an overview of several novel types of multi-component envelope solitary waves that appear in fiber and waveguide nonlinear optics. In particular, we describe multi-channel solitary waves in bit-parallel-wavelength fiber transmission systems for high performance computer networks, multi-color parametric spatial solitary waves due to cascaded nonlinearities of quadratic materials, and quasiperiodic envelope solitons in Fibonacci optical superlattices....

  12. Analysis of Low Frequency Oscillation Using the Multi-Interval Parameter Estimation Method on a Rolling Blackout in the KEPCO System

    Directory of Open Access Journals (Sweden)

    Kwan-Shik Shim

    2017-04-01

    Full Text Available This paper describes a multiple time interval (“multi-interval”) parameter estimation method. The multi-interval parameter estimation method estimates a parameter from a new multi-interval prediction error polynomial that can simultaneously consider multiple time intervals. The root of the multi-interval prediction error polynomial includes the effect on each time interval, and the important mode can be estimated by solving one polynomial for multiple time intervals or signals. The algorithm of the multi-interval parameter estimation method proposed in this paper is applied to the test function and the data measured from a PMU (phasor measurement unit) installed in the KEPCO (Korea Electric Power Corporation) system. The results confirm that the proposed multi-interval parameter estimation method accurately and reliably estimates important parameters.

  13. Mapping paddy rice distribution using multi-temporal Landsat imagery in the Sanjiang Plain, northeast China

    Science.gov (United States)

    XIAO, Xiangming; DONG, Jinwei; QIN, Yuanwei; WANG, Zongming

    2016-01-01

    Information on paddy rice distribution is essential for food production and methane emission calculations. Phenology-based algorithms have been utilized in the mapping of paddy rice fields by identifying the unique flooding and seedling transplanting phases using multi-temporal moderate resolution (500 m to 1 km) images. In this study, we developed simple algorithms to identify paddy rice at a fine resolution at the regional scale using multi-temporal Landsat imagery. Sixteen Landsat images from 2010–2012 were used to generate the 30 m paddy rice map in the Sanjiang Plain, northeast China, one of the major paddy rice cultivation regions in China. Three vegetation indices, Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Land Surface Water Index (LSWI), were used to identify rice fields during the flooding/transplanting and ripening phases. The user and producer accuracies of paddy rice on the resultant Landsat-based paddy rice map were 90% and 94%, respectively. The Landsat-based paddy rice map was an improvement over the paddy rice layer on the National Land Cover Dataset, which was generated through visual interpretation and digitization of fine-resolution images. The agricultural census data substantially underreported paddy rice area, raising serious concerns about their use for studies on food security. PMID:27695637
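
    An illustrative sketch of the index computation and a flooding/transplanting test of the kind used in phenology-based rice mapping; the 0.05 margin and the reflectance values are assumptions, not the exact thresholds of this study.

    ```python
    import numpy as np

    def indices(blue, red, nir, swir):
        ndvi = (nir - red) / (nir + red)
        evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
        lswi = (nir - swir) / (nir + swir)
        return ndvi, evi, lswi

    # Synthetic surface-reflectance values for a handful of pixels.
    blue = np.array([0.04, 0.05, 0.03])
    red = np.array([0.06, 0.08, 0.05])
    nir = np.array([0.15, 0.30, 0.12])
    swir = np.array([0.18, 0.20, 0.10])

    ndvi, evi, lswi = indices(blue, red, nir, swir)
    # Assumed flooding/transplanting criterion: LSWI exceeds EVI or NDVI by a small margin.
    flooded = (lswi + 0.05 >= evi) | (lswi + 0.05 >= ndvi)
    print("flooding/transplanting signal per pixel:", flooded)
    ```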

  14. Estimation of Human Hip and Knee Multi-Joint Dynamics Using the LOPES Gait Trainer

    NARCIS (Netherlands)

    Koopman, Hubertus F.J.M.; van Asseldonk, Edwin H.F.; van der Kooij, Herman

    2016-01-01

    In this study, we present and evaluate a novel method to estimate multi-joint leg impedance, using a robotic gait training device. The method is based on multi-input–multi-output system identification techniques and is designed for continuous torque perturbations at the hip and knee joint

  15. How many plans are needed in an IMRT multi-objective plan database?

    International Nuclear Information System (INIS)

    Craft, David; Bortfeld, Thomas

    2008-01-01

    In multi-objective radiotherapy planning, we are interested in Pareto surfaces of dimensions 2 up to about 10 (for head and neck cases, the number of structures to trade off can be this large). A key question that has not been answered yet is: how many plans does it take to sufficiently represent a high-dimensional Pareto surface? In this paper, we present a method to answer this question, and we show that the number of points needed is modest: 75 plans always controlled the error to within 5%, and in all cases but one, N + 1 plans, where N is the number of objectives, was enough for <15% error. We introduce objective correlation matrices and principal component analysis (PCA) of the beamlet solutions as two methods to understand this. PCA reveals that the feasible beamlet solutions of a Pareto database lie in a narrow, small dimensional subregion of the full beamlet space, which helps explain why the number of plans needed to characterize the database is small
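
    A small sketch of the PCA idea, using a synthetic low-rank plan matrix as a stand-in for real beamlet solutions: the cumulative explained variance shows how few components are needed to describe the database.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_plans, n_beamlets = 75, 2000
    # Synthetic stand-in; real plans would come from the treatment planning system.
    plans = rng.random((n_plans, 5)) @ rng.random((5, n_beamlets))

    pca = PCA().fit(plans)
    cum = np.cumsum(pca.explained_variance_ratio_)
    print("components needed for 95% of the variance:", int(np.searchsorted(cum, 0.95) + 1))
    ```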

  16. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography.

    Science.gov (United States)

    Zamir, Ehud; Kong, George Y X; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-07-01

    We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance ("E") and (2) lateral photographic temporal limbus to cornea distance ("Z"). In the first chronological half of patients (Correlation Series), the E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. A strong linear correlation was found between EZR and ACD, R = -0.91, R2 = 0.81. Bland-Altman analysis showed good agreement between predicted ACD using this method and the optical biometric ACD. The mean error was -0.013 mm (range -0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Lateral digital photography and EZR calculation constitute a novel method to quantitatively estimate ACD, requiring minimal equipment and training. The EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations.
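
    A minimal sketch of the correlation-then-prediction workflow, with hypothetical EZR and ACD values in place of the study data: a line is fitted in one series and used to predict ACD from EZR in a second series.

    ```python
    import numpy as np

    # Hypothetical Correlation Series measurements.
    ezr_correlation = np.array([0.30, 0.35, 0.40, 0.45, 0.50])
    acd_correlation = np.array([3.4, 3.1, 2.9, 2.6, 2.3])        # mm

    slope, intercept = np.polyfit(ezr_correlation, acd_correlation, deg=1)

    # Hypothetical Prediction Series EZR values.
    ezr_prediction = np.array([0.33, 0.42, 0.48])
    predicted_acd = slope * ezr_prediction + intercept
    print("predicted ACD (mm):", np.round(predicted_acd, 2))
    ```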

  17. Multi-parameter vital sign database to assist in alarm optimization for general care units.

    Science.gov (United States)

    Welch, James; Kanter, Benjamin; Skora, Brooke; McCombie, Scott; Henry, Isaac; McCombie, Devin; Kennedy, Rosemary; Soller, Babs

    2016-12-01

    Continual vital sign assessment on the general care (medical-surgical) floor is expected to provide early indication of patient deterioration and increase the effectiveness of rapid response teams. However, there is concern that continual, multi-parameter vital sign monitoring will produce alarm fatigue. The objective of this study was the development of a methodology to help care teams optimize alarm settings. An on-body wireless monitoring system was used to continually assess heart rate, respiratory rate, SpO2 and noninvasive blood pressure in the general ward of ten hospitals between April 1, 2014 and January 19, 2015. These data, 94,575 h for 3430 patients, are contained in a large database, accessible with cloud computing tools. Simulation scenarios assessed the total alarm rate as a function of threshold and annunciation delay (s). The total alarm rate of ten alarms/patient/day predicted from the cloud-hosted database was the same as the total alarm rate for a 10-day evaluation (1550 h for 36 patients) in an independent hospital. Plots of vital sign distributions in the cloud-hosted database were similar to those in other large databases published by different authors. The cloud-hosted database can be used to run simulations for various alarm thresholds and annunciation delays to predict the total alarm burden experienced by nursing staff. This methodology might, in the future, be used to help reduce alarm fatigue without sacrificing the ability to continually monitor all vital signs.
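
    A sketch of one simulation scenario on synthetic data: alarms are counted for a single vital sign given a threshold and an annunciation delay, i.e. the limit must be violated continuously for the delay duration before an alarm fires.

    ```python
    import numpy as np

    def count_alarms(signal, sample_period_s, low, high, delay_s):
        needed = int(np.ceil(delay_s / sample_period_s))
        violating = (signal < low) | (signal > high)
        alarms, run = 0, 0
        for v in violating:
            run = run + 1 if v else 0
            if run == needed:          # one alarm per sustained violation
                alarms += 1
        return alarms

    rng = np.random.default_rng(0)
    spo2 = np.clip(94 + 3 * rng.standard_normal(2880), 70, 100)   # 30-s samples over one day
    print("alarms/day:", count_alarms(spo2, sample_period_s=30, low=90, high=101, delay_s=60))
    ```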

  18. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    Science.gov (United States)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
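
    A sketch of the fusion step in its simplest (uncorrelated) form: combining unbiased estimates with inverse-variance weights, which minimizes the variance of the combined estimator. The numbers are made up, and the cited framework also handles correlated estimators.

    ```python
    import numpy as np

    estimates = np.array([8.2e-4, 9.1e-4, 7.5e-4])    # competing unbiased failure-probability estimates
    variances = np.array([4.0e-8, 9.0e-8, 2.5e-8])    # their estimated variances

    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = np.sum(weights * estimates)
    fused_var = 1.0 / np.sum(1.0 / variances)
    print(f"fused estimate: {fused:.2e}, fused variance: {fused_var:.2e}")
    ```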

  19. A novel way to detect correlations on multi-time scales, with temporal evolution and for multi-variables

    Science.gov (United States)

    Yuan, Naiming; Xoplaki, Elena; Zhu, Congwen; Luterbacher, Juerg

    2016-06-01

    In this paper, two new methods, Temporal evolution of Detrended Cross-Correlation Analysis (TDCCA) and Temporal evolution of Detrended Partial-Cross-Correlation Analysis (TDPCCA), are proposed by generalizing DCCA and DPCCA. Applying TDCCA/TDPCCA, it is possible to study correlations on multi-time scales and over different periods. To illustrate their properties, we used two climatological examples: i) Global Sea Level (GSL) versus North Atlantic Oscillation (NAO); and ii) Summer Rainfall over Yangtze River (SRYR) versus previous winter Pacific Decadal Oscillation (PDO). We find significant correlations between GSL and NAO on time scales of 60 to 140 years, but the correlations are non-significant between 1865 and 1875. As for SRYR and PDO, significant correlations are found on time scales of 30 to 35 years, but the correlations are more pronounced during the recent 30 years. By combining TDCCA/TDPCCA and DCCA/DPCCA, we proposed a new correlation-detection system, which, compared to traditional methods, can objectively show how two time series are related (on which time scale, during which time period). These are important not only for the diagnosis of complex systems, but also for the better design of prediction models. Therefore, the new methods offer new opportunities for applications in natural sciences, such as ecology, economics, sociology and other research fields.

  20. Multi-dimensional two-phase flow measurements in a large-diameter pipe using wire-mesh sensor

    International Nuclear Information System (INIS)

    Kanai, Taizo; Furuya, Masahiro; Arai, Takahiro; Shirakawa, Kenetsu; Nishi, Yoshihisa; Ueda, Nobuyuki

    2011-01-01

    The authors developed a method of measurement to determine the multi-dimensionality of two-phase flow. A wire-mesh sensor (WMS) can acquire a void fraction distribution at a high temporal and spatial resolution and also estimate the velocity of a vertical rising flow by investigating the signal time delay between the upstream and downstream WMS. Previously, one-dimensional velocity was estimated by using the same point of each WMS at a temporal resolution of 1.0 - 5.0 s. The authors propose to extend this time series analysis to estimate the multi-dimensional velocity profile via cross-correlation analysis between a point of the upstream WMS and multiple points downstream. Bubbles behave in various ways according to size, which is used to classify them into certain groups via wavelet analysis before cross-correlation analysis. This method was verified with air-water straight and swirl flows within a large-diameter vertical pipe. A high-speed camera is used to set the parameters of the cross-correlation analysis. The results revealed that for the rising straight and swirl flows, large-scale bubbles tend to move to the center, while small bubbles are pushed to the outside or sucked into the space where the large bubbles existed. Moreover, it is found that this method can estimate the rotational component of velocity of the swirl flow as well as measure the multi-dimensional velocity vector at a high temporal resolution of 0.2 s. (author)
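
    A toy sketch of the underlying velocity-estimation principle on synthetic signals: the time delay that maximizes the cross-correlation between an upstream and a downstream measuring point is converted to a velocity using the (assumed) axial spacing of the two wire-mesh sensors.

    ```python
    import numpy as np

    fs = 1000.0                      # sampling rate, Hz (assumed)
    spacing_m = 0.04                 # axial distance between the two WMS planes (assumed)

    rng = np.random.default_rng(0)
    upstream = rng.standard_normal(5000)
    true_delay_samples = 20          # corresponds to 2 m/s for the values above
    downstream = np.roll(upstream, true_delay_samples) + 0.1 * rng.standard_normal(5000)

    corr = np.correlate(downstream, upstream, mode="full")
    lag = np.argmax(corr) - (len(upstream) - 1)
    velocity = spacing_m / (lag / fs)
    print(f"estimated rise velocity: {velocity:.2f} m/s")
    ```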

  1. Project Parameter Estimation on the Basis of an Erp Database

    Directory of Open Access Journals (Sweden)

    Relich Marcin

    2013-12-01

    Full Text Available Nowadays, more and more enterprises are using Enterprise Resource Planning (ERP) systems that can also be used to plan and control the development of new products. In order to obtain a project schedule, certain parameters (e.g. duration) have to be specified in an ERP system. These parameters can be defined by the employees according to their knowledge, or can be estimated on the basis of data from previously completed projects. This paper investigates using an ERP database to identify those variables that have a significant influence on the duration of a project phase. In the paper, a model of knowledge discovery from an ERP database is proposed. The presented method contains four stages of the knowledge discovery process: data selection, data transformation, data mining, and interpretation of patterns in the context of new product development. Among data mining techniques, a fuzzy neural system is chosen to seek relationships on the basis of data from completed projects stored in an ERP system.

  2. Backscatter Analysis Using Multi-Temporal and Multi-Frequency SAR Data in the Context of Flood Mapping at River Saale, Germany

    Directory of Open Access Journals (Sweden)

    Sandro Martinis

    2015-06-01

    Full Text Available In this study, an analysis of multi-temporal and multi-frequency Synthetic Aperture Radar data is performed to investigate the backscatter behavior of various semantic classes in the context of flood mapping in central Europe. The focus is mainly on partially submerged vegetation such as forests and agricultural fields. The test area is located at River Saale, Saxony-Anhalt, Germany, and is covered by a time series of 39 TerraSAR-X acquisitions collected between December 2009 and June 2013. The data set is supplemented by ALOS PALSAR L-band and RADARSAT-2 C-band data. The time series covers two inundations in January 2011 and June 2013, which allows evaluating backscatter variations between flood periods and normal water level conditions using different radar wavelengths. According to the results, there is potential in detecting flooding beneath vegetation at all microwave wavelengths, even in X-band for sparse vegetation or leaf-off forests.

  3. Do Indonesian Children's Experiences with Large Currency Units Facilitate Magnitude Estimation of Long Temporal Periods?

    Science.gov (United States)

    Cheek, Kim A.

    2017-08-01

    Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.

  4. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Sharawi, Mohammad S.; Alouini, Mohamed-Slim

    2017-01-01

    Conventional algorithms used for parameter estimation in colocated multiple-input-multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large size MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithm framework, especially for small size MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and providing better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratios.

  5. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain

    2017-01-09

    Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm, which uses a low number of snapshots and provides better parameter estimation for both small and large numbers of antenna elements. Moreover, simulations show that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  6. Integration of multi-temporal airborne and terrestrial laser scanning data for the analysis and modelling of proglacial geomorphodynamic processes

    Science.gov (United States)

    Briese, Christian; Glira, Philipp; Pfeifer, Norbert

    2013-04-01

    Ongoing and predicted climate change leads, in sensitive areas such as high-mountain proglacial regions, to significant geomorphodynamic processes (e.g. landslides). Within a short time period (even less than a year) these processes lead to a substantial change of the landscape. In order to study and analyse the recent changes in a proglacial environment, the multi-disciplinary research project PROSA (high-resolution measurements of morphodynamics in rapidly changing PROglacial Systems of the Alps) selected the study area of the Gepatschferner (Tyrol), the second largest glacier in Austria. One of the challenges within the project is the geometric integration (i.e. georeferencing) of multi-temporal topographic data sets in a continuously changing environment. Furthermore, one has to deal with data sets of multiple scales (large-area data sets vs. highly detailed local observations) that are on the one hand necessary to cover the complete proglacial area with the whole catchment and on the other hand guarantee a highly dense and accurate sampling of individual areas of interest (e.g. a certain highly affected slope). This contribution presents a comprehensive method for the georeferencing of multi-temporal airborne and terrestrial laser scanning data (ALS and TLS, respectively). It is studied by application to the data acquired within the project PROSA. In a first step, a stable coordinate frame that allows the analysis of the changing environment has to be defined. Subsequently, procedures for the transformation of the individual ALS and TLS data sets into this coordinate frame were developed. This includes the selection of appropriate reference areas as well as the development of special targets for the local TLS acquisition that can be used for the absolute georeferencing in the common coordinate frame. Due to the fact that different TLS instruments can be used (some larger distance sensors that allow covering larger areas vs. closer operating sensors that allow a
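
    The georeferencing step described above amounts to estimating the rigid-body transformation that maps points observed in a local TLS scan (e.g. the special reference targets) into the common coordinate frame defined by the ALS data. A minimal sketch of such a least-squares (Kabsch-style) fit from corresponding points is given below; the coordinates are invented and the actual PROSA workflow may differ.

        import numpy as np

        def rigid_transform(src, dst):
            """Least-squares rotation R and translation t such that dst ~ R @ src + t (Kabsch)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t

        # Hypothetical reference targets: local TLS coordinates vs. the common ALS frame.
        rng = np.random.default_rng(1)
        tls_pts = rng.uniform(0, 50, size=(6, 3))
        theta = np.deg2rad(12.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0],
                           [0, 0, 1]])
        als_pts = tls_pts @ R_true.T + np.array([632000.0, 5180000.0, 2100.0])  # invented offsets

        R, t = rigid_transform(tls_pts, als_pts)
        print(np.allclose(als_pts, tls_pts @ R.T + t))      # True: TLS scan is now georeferenced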

  7. BN-FLEMOps pluvial - A probabilistic multi-variable loss estimation model for pluvial floods

    Science.gov (United States)

    Roezer, V.; Kreibich, H.; Schroeter, K.; Doss-Gollin, J.; Lall, U.; Merz, B.

    2017-12-01

    Pluvial flood events, such as in Copenhagen (Denmark) in 2011, Beijing (China) in 2012 or Houston (USA) in 2016, have caused severe losses to urban dwellings in recent years. These floods are caused by storm events with high rainfall rates well above the design levels of urban drainage systems, which lead to inundation of streets and buildings. A projected increase in frequency and intensity of heavy rainfall events in many areas and ongoing urbanization may increase pluvial flood losses in the future. For an efficient risk assessment and adaptation to pluvial floods, a quantification of the flood risk is needed. Few loss models have been developed specifically for pluvial floods. These models usually use simple water-level- or rainfall-loss functions and come with very high uncertainties. To account for these uncertainties and improve the loss estimation, we present a probabilistic multi-variable loss estimation model for pluvial floods based on empirical data. The model was developed in a two-step process using a machine learning approach and a comprehensive database comprising 783 records of direct building and content damage of private households. The data were gathered through surveys after four different pluvial flood events in Germany between 2005 and 2014. In a first step, linear and non-linear machine learning algorithms, such as tree-based and penalized regression models, were used to identify the most important loss-influencing factors among a set of 55 candidate variables. These variables comprise hydrological and hydraulic aspects, early warning, precaution, building characteristics and the socio-economic status of the household. In a second step, the most important loss-influencing variables were used to derive a probabilistic multi-variable pluvial flood loss estimation model based on Bayesian networks. Two different networks were tested: a score-based network learned from the data and a network based on expert knowledge. Loss predictions are made
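
    The first, variable-screening step can be sketched as follows: a tree-based regressor is fitted to the loss records and its importance scores rank the candidate variables before the Bayesian network is learned. The table below is synthetic and the variable names are placeholders, not the BN-FLEMOps variable set.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical survey records: water depth (m), duration (h), precaution score, building area (m2).
        rng = np.random.default_rng(42)
        n = 783
        X = pd.DataFrame({
            "water_depth":   rng.gamma(2.0, 0.3, n),
            "duration":      rng.gamma(2.0, 6.0, n),
            "precaution":    rng.integers(0, 4, n),
            "building_area": rng.normal(140, 40, n),
        })
        # Synthetic relative loss in [0, 1], mainly driven by depth and duration.
        loss = np.clip(0.15 * X["water_depth"] + 0.004 * X["duration"]
                       - 0.03 * X["precaution"] + rng.normal(0, 0.05, n), 0, 1)

        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, loss)
        ranking = sorted(zip(X.columns, rf.feature_importances_), key=lambda p: -p[1])
        for name, imp in ranking:
            print(f"{name:14s} importance = {imp:.2f}")
        # The top-ranked variables would then enter the Bayesian-network loss model.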

  8. Pivot/Remote: a distributed database for remote data entry in multi-center clinical trials.

    Science.gov (United States)

    Higgins, S B; Jiang, K; Plummer, W D; Edens, T R; Stroud, M J; Swindell, B B; Wheeler, A P; Bernard, G R

    1995-01-01

    1. INTRODUCTION. Data collection is a critical component of multi-center clinical trials. Clinical trials conducted in intensive care units (ICU) are even more difficult because the acute nature of illnesses in ICU settings requires that masses of data be collected in a short time. More than a thousand data points are routinely collected for each study patient. The majority of clinical trials are still "paper-based," even if a remote data entry (RDE) system is utilized. The typical RDE system consists of a computer housed in the clinical center (CC) office and connected by modem to a centralized data coordinating center (DCC). Study data must first be recorded on a paper case report form (CRF), transcribed into the RDE system, and transmitted to the DCC. This approach requires additional monitoring since both the paper CRF and the study database must be verified. A paper-based RDE system cannot take full advantage of automatic data-checking routines. Much of the effort (and expense) of a clinical trial goes into ensuring that study data match the original patient data. 2. METHODS. We have developed an RDE system, Pivot/Remote, that eliminates the need for paper-based CRFs. It creates an innovative, distributed database. The database resides partially at the study CCs and partially at the DCC. Pivot/Remote is descended from technology introduced with Pivot [1]. Study data are collected at the bedside with laptop computers. A graphical user interface (GUI) allows the display of electronic CRFs that closely mimic the normal paper-based forms. Data entry time is the same as for paper CRFs. Pull-down menus, displaying the possible responses, simplify the process of entering data. Edit checks are performed on most data items. For example, entered dates must conform to the temporal logic imposed by the study, and data must fall within an acceptable range of values. Calculations, such as computing the subject's age or the APACHE II score, are made automatically as the data are entered. Data
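
    The edit checks mentioned above (temporal logic on dates, range checks, derived values such as age) can be sketched in a few lines; the field names and limits below are invented, not those of Pivot/Remote.

        from datetime import date

        def check_dates(enrollment: date, discharge: date) -> list[str]:
            """Temporal logic: discharge cannot precede enrollment, and neither may lie in the future."""
            errors = []
            if discharge < enrollment:
                errors.append("discharge date precedes enrollment date")
            if max(enrollment, discharge) > date.today():
                errors.append("date lies in the future")
            return errors

        def check_range(name: str, value: float, lo: float, hi: float) -> list[str]:
            """Range check with hypothetical limits."""
            return [] if lo <= value <= hi else [f"{name}={value} outside [{lo}, {hi}]"]

        def derived_age(birth: date, enrollment: date) -> int:
            """Derived value computed at entry time, analogous to an age field on the CRF."""
            return enrollment.year - birth.year - ((enrollment.month, enrollment.day) < (birth.month, birth.day))

        record = {"birth": date(1948, 5, 2), "enrolled": date(1994, 11, 3),
                  "discharged": date(1994, 11, 20), "heart_rate": 188}
        problems = (check_dates(record["enrolled"], record["discharged"])
                    + check_range("heart_rate", record["heart_rate"], 30, 180))
        print("age at enrollment:", derived_age(record["birth"], record["enrolled"]))
        print("edit-check problems:", problems)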

  9. Estimating the spatial and temporal distribution of species richness within Sequoia and Kings Canyon National Parks.

    Directory of Open Access Journals (Sweden)

    Steve Wathen

    Full Text Available Evidence for significant losses of species richness or biodiversity, even within protected natural areas, is mounting. Managers are increasingly being asked to monitor biodiversity, yet estimating biodiversity is often prohibitively expensive. As a cost-effective option, we estimated the spatial and temporal distribution of species richness for four taxonomic groups (birds, mammals, herpetofauna (reptiles and amphibians), and plants) within Sequoia and Kings Canyon National Parks using only existing biological studies undertaken within the Parks and the Parks' long-term wildlife observation database. We used a rarefaction approach to model species richness for the four taxonomic groups and analyzed those groups by habitat type, elevation zone, and time period. We then mapped the spatial distributions of species richness values for the four taxonomic groups, as well as total species richness, for the Parks. We also estimated changes in species richness for birds, mammals, and herpetofauna since 1980. The modeled patterns of species richness either peaked at mid elevations (mammals, plants, and total species richness) or declined consistently with increasing elevation (herpetofauna and birds). Plants reached maximum species richness values at much higher elevations than did vertebrate taxa, and non-flying mammals reached maximum species richness values at higher elevations than did birds. Alpine plant communities, including sagebrush, had higher species richness values than did the subalpine plant communities located below them in elevation. These results are supported by other papers published in the scientific literature. Perhaps reflecting climate change, birds and herpetofauna displayed declines in species richness since 1980 at low and middle elevations, and mammals displayed declines in species richness since 1980 at all elevations.
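
    Sample-based rarefaction, the core of the richness modelling described above, is the expected number of distinct species in repeated random subsamples of equal size drawn from the observation records, which makes richness comparable across habitat types or elevation zones with unequal sampling effort. A minimal sketch with an invented observation list:

        import numpy as np

        def rarefied_richness(observations, sample_size, n_draws=1000, seed=0):
            """Expected species count in random subsamples of `sample_size` observation records."""
            rng = np.random.default_rng(seed)
            obs = np.asarray(observations)
            counts = [len(np.unique(rng.choice(obs, size=sample_size, replace=False)))
                      for _ in range(n_draws)]
            return float(np.mean(counts))

        # Hypothetical wildlife-observation records (species per sighting) for one habitat/elevation bin.
        records = (["deer"] * 40 + ["marmot"] * 25 + ["pika"] * 10 +
                   ["fisher"] * 3 + ["wolverine"] * 1)
        for m in (20, 40, 60):
            print(f"expected richness in {m} records: {rarefied_richness(records, m):.2f}")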

  10. TH-EF-BRA-08: A Novel Technique for Estimating Volumetric Cine MRI (VC-MRI) From Multi-Slice Sparsely Sampled Cine Images Using Motion Modeling and Free Form Deformation

    International Nuclear Information System (INIS)

    Harris, W; Yin, F; Wang, C; Chang, Z; Cai, J; Zhang, Y; Ren, L

    2016-01-01

    Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion modeling and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple-slice sparsely-sampled on-board 2D-cine images located within the target are used to improve both the estimation accuracy and temporal resolution of VC-MRI. The on-board 2D-cine MRIs are acquired at 20–30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respirational changes from prior 4D-MRI to on-board volume. The accuracy was evaluated using the Volume Percent Difference (VPD) and Center-of-Mass Shift (COMS) of the estimated tumor volume. Effects of region-of-interest (ROI) selection, 2D-cine slice orientation, slice number and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53 mm among all scenarios based on estimation with ROI_MM-ROI_FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global-FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54 mm, 20.70±9.97%/2.34±0.92 mm, and 16.02±13.79%/0.60±0.82 mm, respectively. Reducing the number of cines to 8 enhanced the temporal resolution of VC-MRI by 25% while
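
    The acquisition scheme described (10% of the Cartesian k-space lines sampled, 85% of them at the centre of k-space) reduces to a simple sampling mask over the phase-encoding lines. The sketch below builds such a mask for a hypothetical 256-line grid; it only illustrates the sampling pattern, not the VC-MRI estimation itself.

        import numpy as np

        def sparse_cine_mask(n_lines=256, keep_frac=0.10, central_frac=0.85, seed=0):
            """Boolean mask over phase-encoding lines: ~10% sampled, 85% of those near k-space centre."""
            rng = np.random.default_rng(seed)
            n_keep = int(round(keep_frac * n_lines))
            n_central = int(round(central_frac * n_keep))
            n_outer = n_keep - n_central

            centre = n_lines // 2
            half = max(n_central // 2, 1)
            central_lines = np.arange(centre - half, centre - half + n_central)

            outer_pool = np.setdiff1d(np.arange(n_lines), central_lines)
            outer_lines = rng.choice(outer_pool, size=n_outer, replace=False)

            mask = np.zeros(n_lines, dtype=bool)
            mask[central_lines] = True
            mask[outer_lines] = True
            return mask

        mask = sparse_cine_mask()
        print(mask.sum(), "of", mask.size, "lines sampled")   # about 10% of the grid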

  11. TH-EF-BRA-08: A Novel Technique for Estimating Volumetric Cine MRI (VC-MRI) From Multi-Slice Sparsely Sampled Cine Images Using Motion Modeling and Free Form Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Harris, W; Yin, F; Wang, C; Chang, Z; Cai, J; Zhang, Y; Ren, L [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion modeling and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple-slice sparsely-sampled on-board 2D-cine images located within the target are used to improve both the estimation accuracy and temporal resolution of VC-MRI. The on-board 2D-cine MRIs are acquired at 20–30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respirational changes from prior 4D-MRI to on-board volume. The accuracy was evaluated using the Volume Percent Difference (VPD) and Center-of-Mass Shift (COMS) of the estimated tumor volume. Effects of region-of-interest (ROI) selection, 2D-cine slice orientation, slice number and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53 mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global-FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54 mm, 20.70±9.97%/2.34±0.92 mm, and 16.02±13.79%/0.60±0.82 mm, respectively. Reducing the number of cines to 8 enhanced the temporal resolution of VC-MRI by 25% while

  12. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    Science.gov (United States)

    Gabrielian, Armen

    1991-01-01

    Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  13. Spatio-temporal distribution of soil-transmitted helminth infections in Brazil.

    Science.gov (United States)

    Chammartin, Frédérique; Guimarães, Luiz H; Scholte, Ronaldo Gc; Bavia, Mara E; Utzinger, Jürg; Vounatsou, Penelope

    2014-09-18

    In Brazil, preventive chemotherapy targeting soil-transmitted helminthiasis is being scaled-up. Hence, spatially explicit estimates of infection risks providing information about the current situation are needed to guide interventions. Available high-resolution national model-based estimates either rely on analyses of data restricted to a given period of time, or on historical data collected over a longer period. While efforts have been made to take into account the spatial structure of the data in the modelling approach, little emphasis has been placed on the temporal dimension. We extracted georeferenced survey data on the prevalence of infection with soil-transmitted helminths (i.e. Ascaris lumbricoides, hookworm and Trichuris trichiura) in Brazil from the Global Neglected Tropical Diseases (GNTD) database. Selection of the most important predictors of infection risk was carried out using a Bayesian geostatistical approach and temporal models that address non-linearity and correlation of the explanatory variables. The spatial process was estimated through a predictive process approximation. Spatio-temporal models were built on the selected predictors with integrated nested Laplace approximation using stochastic partial differential equations. Our models revealed that, over the past 20 years, the risk of soil-transmitted helminth infection has decreased in Brazil, mainly because of the reduction of A. lumbricoides and hookworm infections. From 2010 onwards, we estimate that the infection prevalences with A. lumbricoides, hookworm and T. trichiura are 3.6%, 1.7% and 1.4%, respectively. We also provide a map highlighting municipalities in need of preventive chemotherapy, based on a predicted soil-transmitted helminth infection risk in excess of 20%. The need for treatments in the school-aged population at the municipality level was estimated at 1.8 million doses of anthelminthic tablets per year. The analysis of the spatio-temporal aspect of the risk of infection
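
    The final step is a simple decision rule: municipalities whose predicted soil-transmitted helminth prevalence exceeds 20% are flagged for preventive chemotherapy, and the annual number of doses follows from the school-aged population. A minimal sketch with invented municipality figures, assuming one dose per school-aged child per year:

        import pandas as pd

        # Hypothetical predicted prevalences and school-aged populations per municipality.
        municipalities = pd.DataFrame({
            "municipality": ["A", "B", "C", "D"],
            "predicted_prevalence": [0.31, 0.08, 0.22, 0.15],
            "school_aged_pop": [12000, 54000, 8000, 20000],
        })

        THRESHOLD = 0.20                      # trigger for preventive chemotherapy used in the study
        flagged = municipalities[municipalities["predicted_prevalence"] > THRESHOLD].copy()
        flagged["annual_doses"] = flagged["school_aged_pop"]   # assumption: one dose per child per year
        print(flagged[["municipality", "annual_doses"]])
        print("total doses needed per year:", flagged["annual_doses"].sum())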

  14. Multi-gas and multi-source comparisons of six land use emission datasets and AFOLU estimates in the Fifth Assessment Report, for the tropics for 2000-2005

    Science.gov (United States)

    Roman-Cuesta, Rosa Maria; Herold, Martin; Rufino, Mariana C.; Rosenstock, Todd S.; Houghton, Richard A.; Rossi, Simone; Butterbach-Bahl, Klaus; Ogle, Stephen; Poulter, Benjamin; Verchot, Louis; Martius, Christopher; de Bruin, Sytze

    2016-10-01

    The Agriculture, Forestry and Other Land Use (AFOLU) sector contributes ca. 20-25 % of global anthropogenic emissions (2010), making it a key component of any climate change mitigation strategy. AFOLU estimates, however, remain highly uncertain, jeopardizing the mitigation effectiveness of this sector. Comparisons of global AFOLU emissions have shown divergences of up to 25 %, urging improved understanding of the reasons behind these differences. Here we compare a variety of AFOLU emission datasets and estimates given in the Fifth Assessment Report for the tropics (2000-2005) to identify plausible explanations for the differences in (i) aggregated gross AFOLU emissions, and (ii) disaggregated emissions by sources and gases (CO2, CH4, N2O). We also aim to (iii) identify countries with low agreement among AFOLU datasets to guide research efforts. The datasets are FAOSTAT (Food and Agriculture Organization of the United Nations, Statistics Division), EDGAR (Emissions Database for Global Atmospheric Research), the newly developed AFOLU "Hotspots", "Houghton", "Baccini", and EPA (US Environmental Protection Agency) datasets. Aggregated gross emissions were similar across databases for the AFOLU sector as a whole: 8.2 (5.5-12.2), 8.4, and 8.0 Pg CO2 eq. yr-1 (Hotspots, FAOSTAT, and EDGAR respectively); forests reached 6.0 (3.8-10), 5.9, 5.9, and 5.4 Pg CO2 eq. yr-1 (Hotspots, FAOSTAT, EDGAR, and Houghton); and the agricultural sectors accounted for 1.9 (1.5-2.5), 2.5, 2.1, and 2.0 Pg CO2 eq. yr-1 (Hotspots, FAOSTAT, EDGAR, and EPA). However, this agreement was lost when disaggregating the emissions by sources, continents, and gases, particularly for the forest sector, with fire leading the differences. Agricultural emissions were more homogeneous, especially from livestock, while those from croplands were the most diverse. CO2 showed the largest differences among the datasets. Cropland soils and enteric fermentation led to the smaller N2O and CH4 differences. Disagreements

  15. Performance Analysis of Satellite Missions for Multi-Temporal SAR Interferometry.

    Science.gov (United States)

    Bovenga, Fabio; Belmonte, Antonella; Refice, Alberto; Pasquariello, Guido; Nutricato, Raffaele; Nitti, Davide O; Chiaradia, Maria T

    2018-04-27

    Multi-temporal InSAR (MTI) applications pose challenges related to the availability of coherent scattering from the ground surface, the complexity of the ground deformations, atmospheric artifacts, and visibility problems related to ground elevation. Nowadays, several satellite missions are available providing interferometric SAR data at different wavelengths, spatial resolutions, and revisit times. A new and interesting opportunity is provided by Sentinel-1, which has a spatial resolution comparable to that of previous ESA C-band sensors and revisit times reduced to as little as 6 days. Considering these different SAR space-borne missions, the present work discusses current and future opportunities of MTI applications in terms of ground instability monitoring. Issues related to coherent target detection, mean velocity precision, and product geo-location are addressed through a simple theoretical model assuming backscattering mechanisms related to point scatterers. The paper also presents an example of a multi-sensor ground instability investigation over Lesina Marina, a village in Southern Italy lying over a gypsum diapir, where a hydration process, involving the underlying anhydride, causes a smooth uplift and the formation of scattered sinkholes. More than 20 years of MTI SAR data have been processed, coming from both legacy ERS and ENVISAT missions, and latest-generation RADARSAT-2, COSMO-SkyMed, and Sentinel-1A sensors. Results confirm the presence of a rather steady uplift process, with limited to null variations throughout the whole monitored time-period.

  16. Performance Analysis of Satellite Missions for Multi-Temporal SAR Interferometry

    Directory of Open Access Journals (Sweden)

    Fabio Bovenga

    2018-04-01

    Full Text Available Multi-temporal InSAR (MTI) applications pose challenges related to the availability of coherent scattering from the ground surface, the complexity of the ground deformations, atmospheric artifacts, and visibility problems related to ground elevation. Nowadays, several satellite missions are available providing interferometric SAR data at different wavelengths, spatial resolutions, and revisit times. A new and interesting opportunity is provided by Sentinel-1, which has a spatial resolution comparable to that of previous ESA C-band sensors and revisit times reduced to as little as 6 days. Considering these different SAR space-borne missions, the present work discusses current and future opportunities of MTI applications in terms of ground instability monitoring. Issues related to coherent target detection, mean velocity precision, and product geo-location are addressed through a simple theoretical model assuming backscattering mechanisms related to point scatterers. The paper also presents an example of a multi-sensor ground instability investigation over Lesina Marina, a village in Southern Italy lying over a gypsum diapir, where a hydration process, involving the underlying anhydride, causes a smooth uplift and the formation of scattered sinkholes. More than 20 years of MTI SAR data have been processed, coming from both legacy ERS and ENVISAT missions, and latest-generation RADARSAT-2, COSMO-SkyMed, and Sentinel-1A sensors. Results confirm the presence of a rather steady uplift process, with limited to null variations throughout the whole monitored time-period.

  17. Continuous Estimates of Surface Density and Annual Snow Accumulation with Multi-Channel Snow/Firn Penetrating Radar in the Percolation Zone, Western Greenland Ice Sheet

    Science.gov (United States)

    Meehan, T.; Marshall, H. P.; Bradford, J.; Hawley, R. L.; Osterberg, E. C.; McCarthy, F.; Lewis, G.; Graeter, K.

    2017-12-01

    A priority of ice sheet surface mass balance (SMB) prediction is ascertaining the surface density and annual snow accumulation. These forcing data can be supplied to firn compaction models and used to tune Regional Climate Models (RCM). RCMs do not accurately capture subtle changes in the snow accumulation gradient. Additionally, leading RCMs disagree among each other and with accumulation studies in regions of the Greenland Ice Sheet (GrIS) over large distances and temporal scales. RCMs tend to yield inconsistencies over GrIS because of sparse and outdated validation data in the reanalysis pool. Greenland Traverse for Accumulation and Climate Studies (GreenTrACS) implemented multi-channel 500 MHz radar in a multi-offset configuration throughout two traverse campaigns totaling greater than 3500 km along the western percolation zone of GrIS. The multi-channel radar has the capability of continuously estimating snow depth, average density, and annual snow accumulation, expressed at 95% confidence as ±0.15 m, ±17 kg m-3, and ±0.04 m w.e., respectively, by examination of the primary reflection return from the previous year's summer surface.
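
    Accumulation in metres water equivalent follows directly from the radar-derived snow depth and mean density (w.e. = depth × density / density of water). The sketch below applies this conversion and propagates the quoted uncertainties, assuming independent errors and treating the 95% bounds as roughly two standard deviations; the depth and density values are hypothetical.

        import math

        RHO_WATER = 1000.0          # kg m^-3

        def accumulation_we(depth_m, sigma_depth, density, sigma_density):
            """Snow water equivalent (m w.e.) and 1-sigma uncertainty from depth and mean density."""
            swe = depth_m * density / RHO_WATER
            # Independent-error propagation for a product: (s/swe)^2 = (sd/d)^2 + (srho/rho)^2
            sigma = swe * math.sqrt((sigma_depth / depth_m) ** 2 + (sigma_density / density) ** 2)
            return swe, sigma

        # Hypothetical annual layer picked from the previous summer surface.
        swe, sigma = accumulation_we(depth_m=1.20, sigma_depth=0.15 / 2,      # quoted 95% bounds
                                     density=380.0, sigma_density=17.0 / 2)   # treated as ~2-sigma
        print(f"accumulation = {swe:.3f} +/- {sigma:.3f} m w.e.")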

  18. Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data.

    Science.gov (United States)

    Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu

    2002-07-01

    Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences.
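
    The genealogy and coalescent machinery are far beyond a short example, but the core MCMC integration the abstract refers to is a Metropolis-Hastings random walk over the parameters. The toy sketch below samples a two-parameter posterior (log mutation rate, log population size) against a stand-in likelihood that only constrains their product, roughly what sequence diversity from a single time point does; it is illustrative only, not the coalescent likelihood used in the paper.

        import numpy as np

        def log_post(theta, data):
            """Toy log-posterior standing in for coalescent likelihood x priors;
            theta = (log mutation rate, log effective population size)."""
            mu, ne = np.exp(theta)
            # Pretend the data constrain only the product mu * Ne (roughly what diversity does).
            loglik = -0.5 * ((np.log(mu * ne) - data) / 0.1) ** 2
            logprior = -0.5 * (theta ** 2).sum() / 25.0          # broad priors on the log scale
            return loglik + logprior

        def metropolis(data, n_iter=20000, step=0.15, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.zeros(2)
            lp = log_post(theta, data)
            samples = np.empty((n_iter, 2))
            for i in range(n_iter):
                prop = theta + step * rng.standard_normal(2)
                lp_prop = log_post(prop, data)
                if np.log(rng.uniform()) < lp_prop - lp:          # accept/reject
                    theta, lp = prop, lp_prop
                samples[i] = theta
            return np.exp(samples)                                # back to the natural scale

        chain = metropolis(data=np.log(1e-3 * 1e4))               # hypothetical "observed" diversity
        print("posterior mean mu, Ne:", chain[5000:].mean(axis=0))

    In the real setting it is the temporally spaced sampling that breaks the confounding between mutation rate and population size visible in this toy posterior.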

  19. Using LUCAS topsoil database to estimate soil organic carbon content in local spectral libraries

    Science.gov (United States)

    Castaldi, Fabio; van Wesemael, Bas; Chabrillat, Sabine; Chartin, Caroline

    2017-04-01

    The quantification of the soil organic carbon (SOC) content over large areas is mandatory to obtain accurate soil characterization and classification, which can improve site specific management at local or regional scale exploiting the strong relationship between SOC and crop growth. The estimation of the SOC is not only important for agricultural purposes: in recent years, the increasing attention towards global warming highlighted the crucial role of the soil in the global carbon cycle. In this context, soil spectroscopy is a well consolidated and widespread method to estimate soil variables exploiting the interaction between chromophores and electromagnetic radiation. The importance of spectroscopy in soil science is reflected by the increasing number of large soil spectral libraries collected in the world. These large libraries contain soil samples derived from a consistent number of pedological regions and thus from different parent material and soil types; this heterogeneity entails, in turn, a large variability in terms of mineralogical and organic composition. In the light of the huge variability of the spectral responses to SOC content and composition, a rigorous classification process is necessary to subset large spectral libraries and to avoid the calibration of global models failing to predict local variation in SOC content. In this regard, this study proposes a method to subset the European LUCAS topsoil database into soil classes using a clustering analysis based on a large number of soil properties. The LUCAS database was chosen to apply a standardized multivariate calibration approach valid for large areas without the need for extensive field and laboratory work for calibration of local models. Seven soil classes were detected by the clustering analyses and the samples belonging to each class were used to calibrate specific partial least square regression (PLSR) models to estimate SOC content of three local libraries collected in Belgium (Loam belt

  20. Scene depth estimation using a moving camera

    International Nuclear Information System (INIS)

    Sune, Jean-Luc

    1995-01-01

    This thesis presents a solution to the depth-from-motion problem. The movement of the monocular observer is known. We have focused our research on a direct method which avoids the optical flow estimation required by classical approaches. The direct application of this method is not exploitable on its own: we need to define a validity domain to extract the set of image points where it is possible to obtain a correct depth value. We also use a multi-scale approach to improve the estimation of the derivatives. The depth estimate for a given scale is obtained by minimisation of an energy function established in the context of statistical regularization. A fusion operator, merging the various spatial and temporal scales, is used to estimate the final depth map. A correction-prediction scheme is used to integrate the temporal information from an image sequence. The predicted depth map is considered as an additional observation and integrated into the fusion process. At each time step, an error map is associated with the estimated depth map. (author) [fr]

  1. LAGOS-NE: a multi-scaled geospatial and temporal database of lake ecological context and water quality for thousands of US lakes

    Science.gov (United States)

    Bacon, Linda C; Beauchene, Michael; Bednar, Karen E; Bissell, Edward G; Boudreau, Claire K; Boyer, Marvin G; Bremigan, Mary T; Carpenter, Stephen R; Carr, Jamie W; Christel, Samuel T; Claucherty, Matt; Conroy, Joseph D; Downing, John A; Dukett, Jed; Filstrup, Christopher T; Funk, Clara; Gonzalez, Maria J; Green, Linda T; Gries, Corinna; Halfman, John D; Hamilton, Stephen K; Hanson, Paul C; Henry, Emily N; Herron, Elizabeth M; Hockings, Celeste; Jackson, James R; Jacobson-Hedin, Kari; Janus, Lorraine L; Jones, William W; Jones, John R; Keson, Caroline M; King, Katelyn B S; Kishbaugh, Scott A; Lathrop, Barbara; Latimore, Jo A; Lee, Yuehlin; Lottig, Noah R; Lynch, Jason A; Matthews, Leslie J; McDowell, William H; Moore, Karen E B; Neff, Brian P; Nelson, Sarah J; Oliver, Samantha K; Pace, Michael L; Pierson, Donald C; Poisson, Autumn C; Pollard, Amina I; Post, David M; Reyes, Paul O; Rosenberry, Donald O; Roy, Karen M; Rudstam, Lars G; Sarnelle, Orlando; Schuldt, Nancy J; Scott, Caren E; Smith, Nicole J; Spinelli, Nick R; Stachelek, Joseph J; Stanley, Emily H; Stoddard, John L; Stopyak, Scott B; Stow, Craig A; Tallant, Jason M; Thorpe, Anthony P; Vanni, Michael J; Wagner, Tyler; Watkins, Gretchen; Weathers, Kathleen C; Webster, Katherine E; White, Jeffrey D; Wilmes, Marcy K; Yuan, Shuai

    2017-01-01

    Abstract Understanding the factors that affect water quality and the ecological services provided by freshwater ecosystems is an urgent global environmental issue. Predicting how water quality will respond to global changes not only requires water quality data, but also information about the ecological context of individual water bodies across broad spatial extents. Because lake water quality is usually sampled in limited geographic regions, often for limited time periods, assessing the environmental controls of water quality requires compilation of many data sets across broad regions and across time into an integrated database. LAGOS-NE accomplishes this goal for lakes in the northeastern-most 17 US states. LAGOS-NE contains data for 51 101 lakes and reservoirs larger than 4 ha in 17 lake-rich US states. The database includes 3 data modules for: lake location and physical characteristics for all lakes; ecological context (i.e., the land use, geologic, climatic, and hydrologic setting of lakes) for all lakes; and in situ measurements of lake water quality for a subset of the lakes from the past 3 decades for approximately 2600–12 000 lakes depending on the variable. The database contains approximately 150 000 measures of total phosphorus, 200 000 measures of chlorophyll, and 900 000 measures of Secchi depth. The water quality data were compiled from 87 lake water quality data sets from federal, state, tribal, and non-profit agencies, university researchers, and citizen scientists. This database is one of the largest and most comprehensive databases of its type because it includes both in situ measurements and ecological context data. Because ecological context can be used to study a variety of other questions about lakes, streams, and wetlands, this database can also be used as the foundation for other studies of freshwaters at broad spatial and ecological scales. PMID:29053868
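
    Since the three data modules are linked by a common lake identifier, typical use of the database is a join across modules before analysis. The sketch below is purely illustrative: the table layouts and column names are hypothetical stand-ins, not the actual LAGOS-NE schema.

        import pandas as pd

        # Illustrative stand-ins for the three LAGOS-NE modules, keyed by a shared lake id.
        locus = pd.DataFrame({"lagoslakeid": [1, 2, 3],
                              "lake_area_ha": [12.0, 450.0, 7.5],
                              "state": ["MI", "NY", "WI"]})
        context = pd.DataFrame({"lagoslakeid": [1, 2, 3],
                                "pct_agriculture": [55.0, 10.0, 80.0]})
        waterquality = pd.DataFrame({"lagoslakeid": [1, 1, 2, 3],
                                     "sample_year": [2005, 2012, 2010, 2008],
                                     "tp_ugL": [30.0, 42.0, 8.0, 95.0]})   # total phosphorus

        # Join the modules, then relate mean TP to land use for lakes larger than 4 ha.
        lakes = (waterquality.groupby("lagoslakeid", as_index=False)["tp_ugL"].mean()
                 .merge(locus, on="lagoslakeid")
                 .merge(context, on="lagoslakeid"))
        print(lakes[lakes["lake_area_ha"] >= 4][["lagoslakeid", "tp_ugL", "pct_agriculture"]])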

  2. LAGOS-NE: a multi-scaled geospatial and temporal database of lake ecological context and water quality for thousands of US lakes.

    Science.gov (United States)

    Soranno, Patricia A; Bacon, Linda C; Beauchene, Michael; Bednar, Karen E; Bissell, Edward G; Boudreau, Claire K; Boyer, Marvin G; Bremigan, Mary T; Carpenter, Stephen R; Carr, Jamie W; Cheruvelil, Kendra S; Christel, Samuel T; Claucherty, Matt; Collins, Sarah M; Conroy, Joseph D; Downing, John A; Dukett, Jed; Fergus, C Emi; Filstrup, Christopher T; Funk, Clara; Gonzalez, Maria J; Green, Linda T; Gries, Corinna; Halfman, John D; Hamilton, Stephen K; Hanson, Paul C; Henry, Emily N; Herron, Elizabeth M; Hockings, Celeste; Jackson, James R; Jacobson-Hedin, Kari; Janus, Lorraine L; Jones, William W; Jones, John R; Keson, Caroline M; King, Katelyn B S; Kishbaugh, Scott A; Lapierre, Jean-Francois; Lathrop, Barbara; Latimore, Jo A; Lee, Yuehlin; Lottig, Noah R; Lynch, Jason A; Matthews, Leslie J; McDowell, William H; Moore, Karen E B; Neff, Brian P; Nelson, Sarah J; Oliver, Samantha K; Pace, Michael L; Pierson, Donald C; Poisson, Autumn C; Pollard, Amina I; Post, David M; Reyes, Paul O; Rosenberry, Donald O; Roy, Karen M; Rudstam, Lars G; Sarnelle, Orlando; Schuldt, Nancy J; Scott, Caren E; Skaff, Nicholas K; Smith, Nicole J; Spinelli, Nick R; Stachelek, Joseph J; Stanley, Emily H; Stoddard, John L; Stopyak, Scott B; Stow, Craig A; Tallant, Jason M; Tan, Pang-Ning; Thorpe, Anthony P; Vanni, Michael J; Wagner, Tyler; Watkins, Gretchen; Weathers, Kathleen C; Webster, Katherine E; White, Jeffrey D; Wilmes, Marcy K; Yuan, Shuai

    2017-12-01

    Understanding the factors that affect water quality and the ecological services provided by freshwater ecosystems is an urgent global environmental issue. Predicting how water quality will respond to global changes not only requires water quality data, but also information about the ecological context of individual water bodies across broad spatial extents. Because lake water quality is usually sampled in limited geographic regions, often for limited time periods, assessing the environmental controls of water quality requires compilation of many data sets across broad regions and across time into an integrated database. LAGOS-NE accomplishes this goal for lakes in the northeastern-most 17 US states. LAGOS-NE contains data for 51 101 lakes and reservoirs larger than 4 ha in 17 lake-rich US states. The database includes 3 data modules for: lake location and physical characteristics for all lakes; ecological context (i.e., the land use, geologic, climatic, and hydrologic setting of lakes) for all lakes; and in situ measurements of lake water quality for a subset of the lakes from the past 3 decades for approximately 2600-12 000 lakes depending on the variable. The database contains approximately 150 000 measures of total phosphorus, 200 000 measures of chlorophyll, and 900 000 measures of Secchi depth. The water quality data were compiled from 87 lake water quality data sets from federal, state, tribal, and non-profit agencies, university researchers, and citizen scientists. This database is one of the largest and most comprehensive databases of its type because it includes both in situ measurements and ecological context data. Because ecological context can be used to study a variety of other questions about lakes, streams, and wetlands, this database can also be used as the foundation for other studies of freshwaters at broad spatial and ecological scales.

  3. LAGOS-NE: a multi-scaled geospatial and temporal database of lake ecological context and water quality for thousands of US lakes

    Science.gov (United States)

    Soranno, Patricia A.; Bacon, Linda C.; Beauchene, Michael; Bednar, Karen E.; Bissell, Edward G.; Boudreau, Claire K.; Boyer, Marvin G.; Bremigan, Mary T.; Carpenter, Stephen R.; Carr, Jamie W.; Cheruvelil, Kendra S.; Christel, Samuel T.; Claucherty, Matt; Collins, Sarah M.; Conroy, Joseph D.; Downing, John A.; Dukett, Jed; Fergus, C. Emi; Filstrup, Christopher T.; Funk, Clara; Gonzalez, Maria J.; Green, Linda T.; Gries, Corinna; Halfman, John D.; Hamilton, Stephen K.; Hanson, Paul C.; Henry, Emily N.; Herron, Elizabeth M.; Hockings, Celeste; Jackson, James R.; Jacobson-Hedin, Kari; Janus, Lorraine L.; Jones, William W.; Jones, John R.; Keson, Caroline M.; King, Katelyn B.S.; Kishbaugh, Scott A.; Lapierre, Jean-Francois; Lathrop, Barbara; Latimore, Jo A.; Lee, Yuehlin; Lottig, Noah R.; Lynch, Jason A.; Matthews, Leslie J.; McDowell, William H.; Moore, Karen E.B.; Neff, Brian; Nelson, Sarah J.; Oliver, Samantha K.; Pace, Michael L.; Pierson, Donald C.; Poisson, Autumn C.; Pollard, Amina I.; Post, David M.; Reyes, Paul O.; Rosenberry, Donald; Roy, Karen M.; Rudstam, Lars G.; Sarnelle, Orlando; Schuldt, Nancy J.; Scott, Caren E.; Skaff, Nicholas K.; Smith, Nicole J.; Spinelli, Nick R.; Stachelek, Joseph J.; Stanley, Emily H.; Stoddard, John L.; Stopyak, Scott B.; Stow, Craig A.; Tallant, Jason M.; Tan, Pang-Ning; Thorpe, Anthony P.; Vanni, Michael J.; Wagner, Tyler; Watkins, Gretchen; Weathers, Kathleen C.; Webster, Katherine E.; White, Jeffrey D.; Wilmes, Marcy K.; Yuan, Shuai

    2017-01-01

    Understanding the factors that affect water quality and the ecological services provided by freshwater ecosystems is an urgent global environmental issue. Predicting how water quality will respond to global changes not only requires water quality data, but also information about the ecological context of individual water bodies across broad spatial extents. Because lake water quality is usually sampled in limited geographic regions, often for limited time periods, assessing the environmental controls of water quality requires compilation of many data sets across broad regions and across time into an integrated database. LAGOS-NE accomplishes this goal for lakes in the northeastern-most 17 US states. LAGOS-NE contains data for 51 101 lakes and reservoirs larger than 4 ha in 17 lake-rich US states. The database includes 3 data modules for: lake location and physical characteristics for all lakes; ecological context (i.e., the land use, geologic, climatic, and hydrologic setting of lakes) for all lakes; and in situ measurements of lake water quality for a subset of the lakes from the past 3 decades for approximately 2600–12 000 lakes depending on the variable. The database contains approximately 150 000 measures of total phosphorus, 200 000 measures of chlorophyll, and 900 000 measures of Secchi depth. The water quality data were compiled from 87 lake water quality data sets from federal, state, tribal, and non-profit agencies, university researchers, and citizen scientists. This database is one of the largest and most comprehensive databases of its type because it includes both in situ measurements and ecological context data. Because ecological context can be used to study a variety of other questions about lakes, streams, and wetlands, this database can also be used as the foundation for other studies of freshwaters at broad spatial and ecological scales.

  4. Multi-Level Interval Estimation for Locating damage in Structures by Using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Pan Danguang; Gao Yanhua; Song Junlei

    2010-01-01

    A new analysis technique, called the multi-level interval estimation method, is developed for locating damage in structures. In this method, artificial neural network (ANN) analysis is combined with statistical theory to estimate the range of the damage location. The ANN is a multilayer perceptron trained by back-propagation. Natural frequencies and the modal shape at a few selected points are used as input to identify the location and severity of damage. For large-scale structures with many elements, the multi-level interval estimation method is developed to reduce the estimated range of the damage location step by step. At every step, an estimation range of the damage location is obtained from the output of the ANN by using interval estimation. The next set of ANN training cases is selected from this estimation range after a linear transform, and the output of the new ANN yields a further reduced estimation range of the damage location. Two numerical example analyses on a 10-bar truss and a 100-bar truss are presented to demonstrate the effectiveness of the proposed method.
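
    A schematic version of the multi-level iteration — train an ANN on cases drawn from the current candidate range, predict the damage location, take an interval around the prediction as the new, narrower range, and repeat — is sketched below on a synthetic one-parameter "structure"; it illustrates the iteration only, not the 10-bar or 100-bar truss models.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def simulate_features(loc):
            """Toy forward model: 'modal' features as a smooth function of damage location in [0, 1]."""
            loc = np.atleast_1d(loc)
            noise = 0.01 * rng.standard_normal((loc.size, 3))
            return np.column_stack([np.sin(3 * loc), np.cos(5 * loc), loc ** 2]) + noise

        true_location = 0.63
        measured = simulate_features([true_location])

        lo, hi = 0.0, 1.0
        for level in range(3):                                    # multi-level refinement
            train_loc = rng.uniform(lo, hi, 400)                  # training cases from the current range
            valid_loc = rng.uniform(lo, hi, 100)
            net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
            net.fit(simulate_features(train_loc), train_loc)
            pred = float(net.predict(measured)[0])
            resid = net.predict(simulate_features(valid_loc)) - valid_loc
            half = 2.0 * resid.std()                              # ~95% interval half-width
            lo, hi = max(lo, pred - half), min(hi, pred + half)
            print(f"level {level}: damage location estimated in [{lo:.3f}, {hi:.3f}]")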

  5. Estimation of Alignment and Transverse Load in Multi-Bearing Rotor System

    OpenAIRE

    Tom J. Chalko; Dong-Xu Li

    1997-01-01

    The paper presents a method for estimating the alignment of a multi-bearing machine on the basis of measured eccentricities of the shaft in the machine bearings. The method uses a linear FEM model of the rotor and non-linear models of the machine bearings. In the presented example, non-linear models of hydrodynamic bearings are used, but it is shown that the method could easily be applied to other types of bearings. In addition to the alignment estimation, the method allows estimation of the unkno...

  6. Subsidence Monitoring in Seville (S Spain) Using Multi-Temporal InSAR

    Science.gov (United States)

    Ruiz-Armenteros, Antonio M.; Ruiz-Constan, Ana; Lamas-Fernandez, Francisco; Galindo-Zaldivar, Jesus; Sousa, Joaquim J.; Sanz de Galdeano, Carlos; Delgado, Manuel J.; Pedrera-Parias, Antonio; Martos-Rosillo, Sergio; Gil, Antonio J.; Caro-Cuenca, Miguel; Hanssen, Ramon F.

    2016-08-01

    Seville, with a metropolitan population of about 1.5 million, is the capital and largest city of Andalusia (S Spain). It is the 30th most populous municipality in the European Union and contains three UNESCO World Heritage Sites. The Seville harbour, located about 80 km from the Atlantic Ocean, is the only river port in Spain. The city is located on the plain of the Guadalquivir River. Using multi-temporal InSAR with ERS-1/2 and Envisat data, subsidence is detected over the period 1992-2010. The geometry of the subsiding areas suggests that it is conditioned by the fluvial dynamics of the Guadalquivir River and its tributaries. Facies distribution along the fluvial system (paleochannels, flood plains...), with different grain sizes and matrix proportions, may explain the relative subsidence between the different sectors.

  7. Data management and data analysis techniques in pharmacoepidemiological studies using a pre-planned multi-database approach : a systematic literature review

    NARCIS (Netherlands)

    Bazelier, Marloes T; Eriksson, Irene; de Vries, Frank; Schmidt, Marjanka K; Raitanen, Jani; Haukka, Jari; Starup-Linde, Jakob; De Bruin, Marie L; Andersen, Morten

    2015-01-01

    PURPOSE: To identify pharmacoepidemiological multi-database studies and to describe data management and data analysis techniques used for combining data. METHODS: Systematic literature searches were conducted in PubMed and Embase complemented by a manual literature search. We included

  8. Crop classification based on multi-temporal satellite remote sensing data for agro-advisory services

    Science.gov (United States)

    Karale, Yogita; Mohite, Jayant; Jagyasi, Bhushan

    2014-11-01

    In this paper, we envision the use of satellite images coupled with GIS to obtain location-specific crop type information in order to disseminate crop-specific advice to farmers. In our ongoing mKRISHI® project, accurate information about field-level crop type and acreage will help in agro-advisory services and in supply chain planning and management. The key contribution of this paper is field-level crop classification using multi-temporal images of Landsat-8 acquired from November 2013 to April 2014. The study area chosen is Vani, Maharashtra, India, from where field-level ground truth information for various crops such as grape, wheat, onion, soybean and tomato, along with fodder and fallow fields, has been collected using the mobile application. The ground truth information includes crop type, crop stage and GPS location for 104 farms in the study area, with an approximate total area of 42 hectares. The seven multi-temporal Landsat-8 images were used to compute the vegetation indices, namely the Normalized Difference Vegetation Index (NDVI), Simple Ratio (SR) and Difference Vegetation Index (DVI), for the study area. The vegetation index values of the pixels within a field were then averaged to obtain field-level vegetation indices. For each crop, binary classification has been carried out using a feed-forward neural network operating on the field-level vegetation indices. The classification accuracy for individual crops was in the range of 74.5% to 97.5%, and the overall classification accuracy was found to be 88.49%.
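
    The indices used are standard red/near-infrared band combinations, NDVI = (NIR − Red)/(NIR + Red), SR = NIR/Red and DVI = NIR − Red, and the field-level index values over the seven dates feed a feed-forward network. A compact sketch with made-up reflectances, using scikit-learn's MLPClassifier as a generic feed-forward classifier:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def vegetation_indices(red, nir):
            """NDVI, SR and DVI from red / near-infrared reflectance (arrays of matching shape)."""
            red, nir = np.asarray(red, float), np.asarray(nir, float)
            ndvi = (nir - red) / (nir + red)
            sr = nir / red
            dvi = nir - red
            return ndvi, sr, dvi

        # Hypothetical field-level mean reflectances on 7 Landsat-8 dates for 6 fields.
        rng = np.random.default_rng(0)
        red = rng.uniform(0.03, 0.15, size=(6, 7))
        nir = rng.uniform(0.20, 0.55, size=(6, 7))
        labels = np.array([1, 0, 1, 0, 1, 0])      # e.g. 1 = grape, 0 = not grape (one binary model per crop)

        ndvi, sr, dvi = vegetation_indices(red, nir)
        features = np.hstack([ndvi, sr, dvi])      # one row per field: 3 indices x 7 dates

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        clf.fit(features, labels)
        print("training accuracy:", clf.score(features, labels))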

  9. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, and distributed database systems.

  10. Multi-Dimensional Bitmap Indices for Optimising Data Access within Object Oriented Databases at CERN

    CERN Document Server

    Stockinger, K

    2001-01-01

    Efficient query processing in high-dimensional search spaces is an important requirement for many analysis tools. In the literature on index data structures one can find a wide range of methods for optimising database access. In particular, bitmap indices have recently gained substantial popularity in data warehouse applications with large amounts of read-mostly data. Bitmap indices are implemented in various commercial database products and are used for querying typical business applications. However, scientific data, which are mostly characterised by non-discrete attribute values, cannot be queried efficiently by the techniques currently supported. In this thesis we propose a novel access method based on bitmap indices that efficiently handles multi-dimensional queries against typical scientific data. The algorithm is called GenericRangeEval and is an extension of a bitmap index for discrete attribute values. By means of a cost model we study the performance of queries with various selectivities against uniform
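
    The basic mechanism behind bitmap range indices can be shown in a few lines: the continuous attribute is binned, one bitmap per bin records which rows fall into it, and a range query ORs the bitmaps of the touched bins, with only the boundary bins needing a check against the raw values. The class below is a toy equality-encoded index, not GenericRangeEval itself.

        import numpy as np

        class BitmapRangeIndex:
            """Toy equality-encoded bitmap index over a continuous attribute."""
            def __init__(self, values, n_bins=16):
                self.values = np.asarray(values, float)
                self.edges = np.linspace(self.values.min(), self.values.max(), n_bins + 1)
                bins = np.clip(np.digitize(self.values, self.edges) - 1, 0, n_bins - 1)
                self.bitmaps = [(bins == b) for b in range(n_bins)]   # one boolean bitmap per bin

            def query(self, lo, hi):
                """Rows with lo <= value <= hi: OR fully covered bins, check boundary bins exactly."""
                touched = [b for b in range(len(self.bitmaps))
                           if self.edges[b + 1] >= lo and self.edges[b] <= hi]
                hits = np.zeros_like(self.values, dtype=bool)
                for b in touched:
                    fully_inside = self.edges[b] >= lo and self.edges[b + 1] <= hi
                    if fully_inside:
                        hits |= self.bitmaps[b]
                    else:                                 # candidate check only on boundary bins
                        cand = self.bitmaps[b]
                        hits |= cand & (self.values >= lo) & (self.values <= hi)
                return np.flatnonzero(hits)

        rng = np.random.default_rng(0)
        energy = rng.normal(50.0, 15.0, 100000)           # hypothetical non-discrete attribute
        idx = BitmapRangeIndex(energy, n_bins=32)
        rows = idx.query(42.0, 58.0)
        print(len(rows), "rows match;", np.all((energy[rows] >= 42) & (energy[rows] <= 58)))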

  11. Human Pose Estimation and Activity Recognition from Multi-View Videos

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Tran, Cuong; Trivedi, Mohan

    2012-01-01

    …application areas such as human–computer interaction (HCI), assisted living, gesture-based interactive games, intelligent driver assistance systems, movies, 3D TV and animation, physical therapy, autonomous mental development, smart environments, sport motion analysis, video surveillance, and video annotation. Next, we review and categorize recent approaches which have been proposed to comply with these requirements. We report a comparison of the most promising methods for multi-view human action recognition using two publicly available datasets: the INRIA Xmas Motion Acquisition Sequences (IXMAS) Multi-View Human Action Dataset, and the i3DPost Multi-View Human Action and Interaction Dataset. To compare the proposed methods, we give a qualitative assessment of methods which cannot be compared quantitatively, and analyze some prominent 3D pose estimation techniques for application, where not only the performed action needs to be identified but a more

  12. Performance of the Multi-Radar Multi-Sensor System over the Lower Colorado River, Texas

    Science.gov (United States)

    Bayabil, H. K.; Sharif, H. O.; Fares, A.; Awal, R.; Risch, E.

    2017-12-01

    Recently observed increases in the intensity and frequency of climate extremes (e.g., floods, dam failure, and overtopping of river banks) necessitate the development of effective disaster prevention and mitigation strategies. Hydrologic models can be useful tools in predicting such events at different spatial and temporal scales. However, the accuracy and prediction capability of such models are often constrained by the availability of high-quality, representative hydro-meteorological data (e.g., precipitation) required to calibrate and validate them. Improved technologies and products, such as the Multi-Radar Multi-Sensor (MRMS) system, which allows the gathering and transmission of vast amounts of meteorological data, have been developed to meet such data needs. While MRMS data are available at high spatial and temporal resolutions (1 km and 15 min, respectively), their accuracy in estimating precipitation is yet to be fully investigated. Therefore, the main objective of this study is to evaluate the performance of the MRMS system in capturing precipitation over the Lower Colorado River, Texas, using observations from a dense rain gauge network, and to assess the effect of spatial and temporal aggregation scales on that performance. Point-scale comparisons were made at 215 gauging locations using rain gauges and MRMS data from May 2015. The effects of temporal aggregation scales (30, 45, 60, 75, 90, 105, and 120 min) and spatial aggregation scales (4 to 50 km) on the performance of the MRMS system were tested. Overall, the MRMS system (at 15 min temporal resolution) captured precipitation reasonably well, with an average R2 value of 0.65 and an RMSE of 0.5 mm. Spatial and temporal aggregation resulted in increases in R2 values; however, a reduction in RMSE was achieved only with increased spatial aggregation.
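
    The evaluation itself is mostly resampling plus two error statistics: the 15-min MRMS series is summed to coarser windows and compared with the co-located gauge series through R2 and RMSE. A minimal sketch with fabricated series for a single location:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)

        # Fabricated 15-min precipitation at one gauge location for ~10 days (mm per 15 min).
        times = pd.date_range("2015-05-01", periods=960, freq="15min")
        gauge = pd.Series(np.clip(rng.gamma(0.2, 1.0, times.size) - 0.5, 0, None), index=times)
        mrms = (gauge * rng.normal(1.0, 0.25, times.size)
                + rng.normal(0, 0.1, times.size)).clip(lower=0)

        def r2_rmse(obs, est):
            err = est - obs
            ss_res, ss_tot = float((err ** 2).sum()), float(((obs - obs.mean()) ** 2).sum())
            return 1 - ss_res / ss_tot, float(np.sqrt((err ** 2).mean()))

        for window in ("15min", "30min", "60min", "120min"):   # temporal aggregation scales
            g = gauge.resample(window).sum()
            m = mrms.resample(window).sum()
            r2, rmse = r2_rmse(g, m)
            print(f"{window:>7}: R2 = {r2:.2f}, RMSE = {rmse:.2f} mm")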

  13. Continuous Estimation of Human Multi-Joint Angles From sEMG Using a State-Space Model.

    Science.gov (United States)

    Ding, Qichuan; Han, Jianda; Zhao, Xingang

    2017-09-01

    Due to the couplings among joint-relative muscles, it is a challenge to accurately estimate continuous multi-joint movements from multi-channel sEMG signals. Traditional approaches usually build a nonlinear regression model, such as an artificial neural network, to predict the multi-joint movement variables using sEMG as inputs. However, the redundant sEMG data are not distinguished, and the prediction errors cannot be evaluated and corrected online. In this work, a correlation-based redundancy-segmentation method is proposed to segment the sEMG vector, which includes redundancy, into irredundant and redundant subvectors. A general state-space framework is then developed to build the motion model by regarding the irredundant subvector as input and the redundant one as measurement output. With the built state-space motion model, a closed-loop prediction-correction algorithm, the unscented Kalman filter (UKF), can be employed to estimate the multi-joint angles from sEMG, where the redundant sEMG data are used to reject model uncertainties. By fully exploiting the redundancy, the proposed method provides accurate and smooth estimation results. Comprehensive experiments were conducted on multi-joint movements of the upper limb. The maximum RMSE of the estimates obtained by the proposed method is 0.16±0.03, which is significantly less than the 0.25±0.06 and 0.27±0.07 (p < 0.05) obtained by common neural networks.
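
    The first ingredient, the correlation-based split of the sEMG channels into an irredundant (model input) part and a redundant (measurement) part, can be sketched as below on synthetic envelopes with an arbitrary correlation threshold; the kept channels would then drive the state-space motion model while the redundant ones serve as the measurement the UKF uses for online correction.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic sEMG envelopes: channels 0-2 independent, channels 3-4 nearly duplicate channel 0.
        t = np.linspace(0, 10, 2000)
        base = np.abs(np.sin(1.3 * t)) + 0.05 * rng.standard_normal(t.size)
        emg = np.vstack([base,
                         np.abs(np.cos(0.7 * t)) + 0.05 * rng.standard_normal(t.size),
                         np.abs(np.sin(2.1 * t + 1)) + 0.05 * rng.standard_normal(t.size),
                         0.9 * base + 0.05 * rng.standard_normal(t.size),
                         1.1 * base + 0.05 * rng.standard_normal(t.size)])

        def segment_redundancy(channels, threshold=0.9):
            """Greedy correlation-based split: a channel is 'redundant' if it is highly
            correlated with a channel already kept; the rest are 'irredundant'."""
            corr = np.corrcoef(channels)
            kept, redundant = [], []
            for i in range(channels.shape[0]):
                if any(abs(corr[i, j]) > threshold for j in kept):
                    redundant.append(i)
                else:
                    kept.append(i)
            return kept, redundant

        irredundant, redundant = segment_redundancy(emg)
        print("irredundant channels (model input):", irredundant)
        print("redundant channels (measurement for correction):", redundant)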

  14. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Science.gov (United States)

    Zhu, Aichun; Wang, Tian; Snoussi, Hichem

    2018-03-01

    This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  15. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Directory of Open Access Journals (Sweden)

    Aichun Zhu

    2018-03-01

    Full Text Available This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  16. Parameter estimation for a cohesive sediment transport model by assimilating satellite observations in the Hangzhou Bay: Temporal variations and spatial distributions

    Science.gov (United States)

    Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu

    2018-01-01

    Model parameters in suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of the sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from the Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern of the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained by the combination of the flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which is related to the grain size of the seabed sediments under different current velocities. In addition, the estimated inflow open boundary conditions reach their local maxima near low-water slack conditions, and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can provide guidance for improving the parameterization of cohesive sediment transport models.
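
    The following toy sketch illustrates the variational idea behind this kind of parameter estimation: choose the settling velocity and resuspension rate that minimize the misfit between a forward model and observed SSCs. A zero-dimensional settling/resuspension balance and scipy's quasi-Newton optimizer with numerical gradients stand in for the full 3-D transport model and its hand-coded adjoint; all values are synthetic.

    # Toy variational parameter estimation: fit two transport parameters by minimizing
    # the misfit between simulated and "observed" suspended sediment concentrations.
    import numpy as np
    from scipy.optimize import minimize

    dt, n_steps = 60.0, 200          # time step (s) and number of steps, illustrative
    h = 5.0                          # water depth (m), illustrative

    def simulate(params, c0=0.1):
        """Forward model: dC/dt = (resuspension - settling_velocity * C) / depth."""
        w_s, resus = params
        c = np.empty(n_steps)
        c[0] = c0
        for k in range(1, n_steps):
            c[k] = c[k - 1] + dt * (resus - w_s * c[k - 1]) / h
        return c

    true_params = np.array([5e-4, 1e-4])             # "truth" used to fabricate observations
    obs = simulate(true_params) + 0.005 * np.random.randn(n_steps)

    def cost(params):
        """Data-misfit cost function assimilated against the observations."""
        return np.sum((simulate(params) - obs) ** 2)

    result = minimize(cost, x0=np.array([1e-4, 5e-5]), method="L-BFGS-B",
                      bounds=[(1e-6, 1e-2), (1e-6, 1e-2)])
    print("estimated settling velocity and resuspension rate:", result.x)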

  17. Historical wetlands mapping and GIS processing for the Savannah River Site Database

    International Nuclear Information System (INIS)

    Christel, L.M.

    1994-08-01

    New policies regarding the "no net loss" of wetlands have presented resource managers and GIS analysts with a challenging ecological application. Historical aerial photography provides a temporal record of conditions over time, which is beneficial when appraising wetland gain and loss because man-made disturbances can have both short- and long-term effects on wetland communities. This is particularly true when assessing existing communities for restoration and reclamation of the ecological structure and function of the community prior to a disturbance. Remediation efforts can be optimized when definitive documentation of the original communities exists. The Geographic Information System (GIS) is a powerful tool for integrating these data sets and performing spatial and temporal analyses in support of ecological applications. On the Savannah River Site (SRS), temporal analysis of multispectral scanner data has shown where wetlands were impacted by reactor operation, such as thermal discharge into creeks and swamps, and where wetlands were removed due to the construction of facilities. The GIS database was used to determine how the distribution and composition of wetland classes have changed over time. Historic black-and-white aerial photography of SRS, as well as color infrared aerial photography as recent as 1989, was used to develop a more current land cover database. Six wetland classes were photointerpreted. The historical data layer was then used in spatial analyses to aid in deriving potentially viable and cost-effective management technique alternatives for remediation of wetlands influenced by past reactor operations, and it has provided acreage estimates of wetlands lost. Acreage values can be used to estimate the potential costs of wetland remediation. This application of temporal analysis using a GIS demonstrates the utility of documenting prior conditions before remediation actually commences and how to maximize cost-effective remediation efforts.

  18. Demonstration of SLUMIS: a clinical database and management information system for a multi organ transplant program.

    OpenAIRE

    Kurtz, M.; Bennett, T.; Garvin, P.; Manuel, F.; Williams, M.; Langreder, S.

    1991-01-01

    Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of a lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reportin...

  19. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component.

  20. Action Recognition by Joint Spatial-Temporal Motion Feature

    Directory of Open Access Journals (Sweden)

    Weihua Zhang

    2013-01-01

    Full Text Available This paper introduces a method for human action recognition based on optical flow motion feature extraction. Automatic spatial and temporal alignments are combined in order to encourage the temporal consistency of each action by an enhanced dynamic time warping (DTW) algorithm. At the same time, a fast method based on a coarse-to-fine DTW constraint to improve computational performance without reducing accuracy is introduced. The main contributions of this study include (1) a joint spatial-temporal multiresolution optical flow computation method which encodes more informative motion information than recently proposed methods, (2) an enhanced DTW method to improve the temporal consistency of motion in action recognition, and (3) a coarse-to-fine DTW constraint on motion feature pyramids to speed up recognition performance. Using this method, high recognition accuracy is achieved on different action databases, such as the Weizmann database and the KTH database.
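
    As a concrete reference point for the alignment step, the sketch below implements plain dynamic time warping between two feature sequences. It is the textbook O(NM) recurrence only; the enhanced DTW and the coarse-to-fine band constraint described in the abstract are not reproduced.

    # Plain dynamic time warping between two motion-feature sequences.
    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """seq_a: (N, d) and seq_b: (M, d) arrays of per-frame feature vectors."""
        n, m = len(seq_a), len(seq_b)
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                acc[i, j] = cost + min(acc[i - 1, j],      # insertion
                                       acc[i, j - 1],      # deletion
                                       acc[i - 1, j - 1])  # match
        return acc[n, m]

    # Example: align two short synthetic optical-flow descriptors of unequal length.
    a = np.random.randn(40, 8)
    b = np.vstack([a[::2], np.random.randn(3, 8)])   # temporally compressed variant
    print("DTW distance:", dtw_distance(a, b))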

  1. Forest parameter estimation using polarimetric SAR interferometry techniques at low frequencies

    International Nuclear Information System (INIS)

    Lee, Seung-Kuk

    2013-01-01

    Polarimetric Synthetic Aperture Radar Interferometry (Pol-InSAR) is an active radar remote sensing technique based on the coherent combination of both polarimetric and interferometric observables. The Pol-InSAR technique provided a step forward in quantitative forest parameter estimation. In the last decade, airborne SAR experiments evaluated the potential of Pol-InSAR techniques to estimate forest parameters (e.g., the forest height and biomass) with high accuracy over various local forest test sites. This dissertation addresses the actual status, potentials and limitations of Pol-InSAR inversion techniques for 3-D forest parameter estimations on a global scale using lower frequencies such as L- and P-band. The multi-baseline Pol-InSAR inversion technique is applied to optimize the performance with respect to the actual level of the vertical wave number and to mitigate the impact of temporal decorrelation on the Pol-InSAR forest parameter inversion. Temporal decorrelation is a critical issue for successful Pol-InSAR inversion in the case of repeat-pass Pol-InSAR data, as provided by conventional satellites or airborne SAR systems. Despite the limiting impact of temporal decorrelation in Pol-InSAR inversion, it remains a poorly understood factor in forest height inversion. Therefore, the main goal of this dissertation is to provide a quantitative estimation of the temporal decorrelation effects by using multi-baseline Pol-InSAR data. A new approach to quantify the different temporal decorrelation components is proposed and discussed. Temporal decorrelation coefficients are estimated for temporal baselines ranging from 10 minutes to 54 days and are converted to height inversion errors. In addition, the potential of Pol-InSAR forest parameter estimation techniques is addressed and projected onto future spaceborne system configurations and mission scenarios (Tandem-L and BIOMASS satellite missions at L- and P-band). The impact of the system parameters (e.g., bandwidth

  2. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    Science.gov (United States)

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a JAVA-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Languages (DDL) and Data Manipulation languages (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions etc; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges

  3. Estimation of canopy carotenoid content of winter wheat using multi-angle hyperspectral data

    Science.gov (United States)

    Kong, Weiping; Huang, Wenjiang; Liu, Jiangui; Chen, Pengfei; Qin, Qiming; Ye, Huichun; Peng, Dailiang; Dong, Yingying; Mortimer, A. Hugh

    2017-11-01

    Precise estimation of carotenoid (Car) content in crops, using remote sensing data, could be helpful for agricultural resources management. Conventional methods for Car content estimation were mostly based on reflectance data acquired from the nadir direction. However, reflectance acquired in this direction is highly influenced by canopy structure and soil background reflectance. Off-nadir observation is less impacted, and multi-angle viewing data are proven to contain additional information rarely exploited for crop Car content estimation. The objective of this study was to explore the potential of multi-angle observation data for winter wheat canopy Car content estimation. Canopy spectral reflectance was measured from nadir as well as from a series of off-nadir directions during different growing stages of winter wheat, with concurrent canopy Car content measurements. Correlation analyses were performed between Car content and the original and continuum-removed spectral reflectance. Spectral features and previously published indices were derived from data obtained at different viewing angles and were tested for Car content estimation. Results showed that spectral features and indices obtained from backscattering directions between 20° and 40° view zenith angle had a stronger correlation with Car content than those from the nadir direction, and the strongest correlation was observed at about the 30° backscattering direction. The spectral absorption depth at 500 nm derived from spectral data obtained from the 30° backscattering direction was found to greatly reduce the differences induced by plant cultivars. It was the most suitable for winter wheat canopy Car estimation, with a coefficient of determination of 0.79 and a root mean square error of 19.03 mg/m2. This work indicates the importance of taking the viewing geometry effect into account when using spectral features/indices and provides new insight into the application of multi-angle remote sensing for the estimation of crop

  4. Timeliness and Predictability in Real-Time Database Systems

    National Research Council Canada - National Science Library

    Son, Sang H

    1998-01-01

    The confluence of computers, communications, and databases is quickly creating a globally distributed database where many applications require real time access to both temporally accurate and multimedia data...

  5. Cardiovascular safety of vildagliptin in patients with type 2 diabetes: A European multi-database, non-interventional post-authorization safety study.

    Science.gov (United States)

    Williams, Rachael; de Vries, Frank; Kothny, Wolfgang; Serban, Carmen; Lopez-Leon, Sandra; Chu, Changan; Schlienger, Raymond

    2017-10-01

    The aim of this non-interventional, multi-database, analytical cohort study was to assess the cardiovascular (CV) safety of vildagliptin vs other non-insulin antidiabetic drugs (NIADs) using real-world data from 5 European electronic healthcare databases. Patients with type 2 diabetes aged ≥18 years on NIAD treatment were enrolled. Adjusted incidence rate ratios (IRRs) and 95% confidence intervals (CIs) for the outcomes of interest (myocardial infarction [MI], acute coronary syndrome [ACS], stroke, congestive heart failure [CHF], individually and as a composite) were estimated using negative binomial regression. Approximately 2.8% of the enrolled patients (n = 738 054) used vildagliptin at any time during the study, with an average follow-up time of 1.4 years, resulting in a cumulative current vildagliptin exposure of 28 330 person-years. The adjusted IRRs (vildagliptin [±other NIADs] vs other NIADs) were in the range of 0.61 to 0.97 (MI), 0.55 to 1.60 (ACS), 0.02 to 0.77 (stroke), 0.49 to 1.03 (CHF), and 0.22 to 1.02 (composite CV outcomes). The IRRs and their 95% CIs were close to 1, demonstrating no increased risk of adverse CV events, including the risk of CHF, with vildagliptin vs other NIADs in real-world conditions. © 2017 Crown copyright. Diabetes, Obesity and Metabolism © 2017 John Wiley & Sons Ltd.

  6. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio

    2015-11-10

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
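
    To illustrate what the estimated covariance parameters are ultimately used for, the sketch below performs simple kriging with a Matérn (smoothness 3/2) covariance on a small synthetic dataset. The multi-level contrasts and restricted maximum likelihood machinery of the paper are not reproduced; the covariance parameters are simply assumed known.

    # Simple-kriging predictor with a Matérn (nu = 3/2) covariance on irregular sites.
    import numpy as np
    from scipy.spatial.distance import cdist

    def matern32(dist, sigma2=1.0, rho=0.3):
        """Matérn covariance with smoothness 3/2, variance sigma2 and range rho."""
        s = np.sqrt(3.0) * dist / rho
        return sigma2 * (1.0 + s) * np.exp(-s)

    rng = np.random.default_rng(0)
    obs_xy = rng.uniform(0, 1, size=(200, 2))                 # irregular observation sites
    K = matern32(cdist(obs_xy, obs_xy)) + 1e-6 * np.eye(len(obs_xy))
    z = rng.multivariate_normal(np.zeros(len(obs_xy)), K)     # synthetic field values

    pred_xy = rng.uniform(0, 1, size=(10, 2))                 # prediction sites
    k_star = matern32(cdist(pred_xy, obs_xy))                 # cross-covariances
    weights = np.linalg.solve(K, z)
    z_hat = k_star @ weights                                  # simple-kriging predictions
    print(z_hat)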

  7. A New and Simple Method for Crosstalk Estimation in Homogeneous Trench-Assisted Multi-Core Fibers

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2014-01-01

    A new and simple method for inter-core crosstalk estimation in homogeneous trench-assisted multi-core fibers is presented. The crosstalk calculated by this method agrees well with experimental measurement data for two kinds of fabricated 12-core fibers.

  8. Optimal Weighting of Multi-Spacecraft Data to Estimate Gradients of Physical Fields

    Science.gov (United States)

    Chanteur, G. M.; Le Contel, O.; Sahraoui, F.; Retino, A.; Mirioni, L.

    2016-12-01

    Multi-spacecraft missions like the ESA mission CLUSTER and the NASA mission MMS are essential to improve our understanding of physical processes in space plasmas. Several methods were designed in the 1990s during the preparation phase of the CLUSTER mission to estimate gradients of physical fields from simultaneous multi-point measurements [1, 2]. Both CLUSTER and MMS involve four spacecraft with identical full scientific payloads including various sensors of electromagnetic fields and different types of particle detectors. In the standard methods described in [1, 2], which are presently in use, data from the four spacecraft have identical weights and the estimated gradients are most reliable when the tetrahedron formed by the four spacecraft is regular. There are three types of errors affecting the estimated gradients (see chapter 14 in [1]): (i) truncation errors, due to local non-linearity of spatial variations; (ii) physical errors, due to the instruments; and (iii) geometrical errors, due to uncertainties in the positions of the spacecraft. An assessment of truncation errors for a given observation requires a theoretical model of the measured field. Instrumental errors can easily be taken into account for a given geometry of the cluster but are usually smaller than the geometrical errors, which diverge quite fast when the tetrahedron flattens, a circumstance occurring twice per orbit of the cluster. Hence reliable gradients can be estimated only on part of the orbit. Reciprocal vectors of the tetrahedron were presented in chapter 4 of [1]; they have the advantage over other methods of treating the four spacecraft symmetrically and of allowing a theoretical analysis of the errors (see chapters 4 of [1] and 4 of [2]). We will present Generalized Reciprocal Vectors for weighted data and an optimization procedure to improve the reliability of the estimated gradients when the tetrahedron is not regular. A brief example using CLUSTER or MMS data will be given. This approach
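
    The sketch below computes the classical (equally weighted) reciprocal vectors of a spacecraft tetrahedron and uses them to estimate the gradient of a scalar field, the estimator that the generalized, weighted reciprocal vectors discussed above reduce to when all weights are equal. Positions and field values are synthetic.

    # Gradient of a scalar field from four-point measurements via the (unweighted)
    # reciprocal vectors of the spacecraft tetrahedron.
    import numpy as np

    def reciprocal_vectors(r):
        """r: (4, 3) spacecraft positions. Returns the four reciprocal vectors k_a."""
        k = np.zeros((4, 3))
        for a in range(4):
            b, c, d = [i for i in range(4) if i != a]
            cross = np.cross(r[c] - r[b], r[d] - r[b])
            k[a] = cross / np.dot(r[a] - r[b], cross)
        return k

    def estimate_gradient(r, f):
        """Linear gradient estimate: grad f ~= sum_a k_a * f_a."""
        return reciprocal_vectors(r).T @ f

    # Synthetic check against a known linear field f(x) = g . x (positions in km).
    g_true = np.array([1.0, -2.0, 0.5])
    positions = np.array([[0, 0, 0], [100, 10, 0], [20, 110, 5], [15, 25, 120]], float)
    f_vals = positions @ g_true
    print(estimate_gradient(positions, f_vals))   # recovers g_true for a linear field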

  9. A scalable multi-resolution spatio-temporal model for brain activation and connectivity in fMRI data

    KAUST Repository

    Castruccio, Stefano

    2018-01-23

    Functional Magnetic Resonance Imaging (fMRI) is a primary modality for studying brain activity. Modeling spatial dependence of imaging data at different spatial scales is one of the main challenges of contemporary neuroimaging, and it could allow for accurate testing for significance in neural activity. The high dimensionality of this type of data (on the order of hundreds of thousands of voxels) poses serious modeling challenges and considerable computational constraints. For the sake of feasibility, standard models typically reduce dimensionality by modeling covariance among regions of interest (ROIs)—coarser or larger spatial units—rather than among voxels. However, ignoring spatial dependence at different scales could drastically reduce our ability to detect activation patterns in the brain and hence produce misleading results. We introduce a multi-resolution spatio-temporal model and a computationally efficient methodology to estimate cognitive control related activation and whole-brain connectivity. The proposed model allows for testing voxel-specific activation while accounting for non-stationary local spatial dependence within anatomically defined ROIs, as well as regional dependence (between-ROIs). The model is used in a motor-task fMRI study to investigate brain activation and connectivity patterns aimed at identifying associations between these patterns and regaining motor functionality following a stroke.

  10. Temporal analysis of vegetation indices related to biophysical parameters using Sentinel 2A images to estimate maize production

    Science.gov (United States)

    Macedo, Lucas Saran; Kawakubo, Fernando Shinji

    2017-10-01

    Agricultural production is one of the most important Brazilian economic activities, accounting for about 21.5% of total Gross Domestic Product. In this scenario, the use of satellite images for estimating biophysical parameters along the phenological development of agricultural crops allows conclusions to be drawn about crop health and helps in projecting production trends. The objective of this study is to analyze the temporal patterns and variation of six vegetation indices obtained from the bands of the Sentinel 2A satellite, associated with greenness (NDVI and ClRE), senescence (mARI and PSRI) and water content (DSWI and NDWI), to estimate maize production. The temporal pattern of the indices was analyzed as a function of productivity data collected in situ. The results evidenced the importance of the SWIR and Red Edge spectral ranges, with Pearson correlation values of the temporal mean of 0.88 for NDWI and 0.76 for ClRE.

  11. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    Science.gov (United States)

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) to reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality-interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation using forward and backward MVs is applied by mixing both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and smoothness constraint optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
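
    For reference, the sketch below implements the exhaustive-search block-matching primitive (sum of absolute differences over a search window) on which such motion estimators are built. The implicit and explicit smoothness constraints that turn it into true-motion estimation are deliberately omitted; block and search sizes are illustrative.

    # Exhaustive-search block matching with a sum-of-absolute-differences criterion.
    import numpy as np

    def block_match(prev, curr, block=16, search=8):
        """Return an (H//block, W//block, 2) array of motion vectors (dy, dx)."""
        h, w = curr.shape
        mvs = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                target = curr[by:by + block, bx:bx + block].astype(np.int32)
                best, best_mv = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue   # candidate block falls outside the reference frame
                        cand = prev[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(target - cand).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
                mvs[by // block, bx // block] = best_mv
        return mvs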

  12. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.

  13. An axiomatic approach to the estimation of interval-valued preferences in multi-criteria decision modeling

    DEFF Research Database (Denmark)

    Franco de los Ríos, Camilo; Hougaard, Jens Leth; Nielsen, Kurt

    In this paper we explore multi-dimensional preference estimation from imprecise (interval) data. Focusing on different multi-criteria decision models, such as PROMETHEE, ELECTRE, TOPSIS or VIKOR, and their extensions dealing with imprecise data, preference modeling is examined with respect...

  14. NLCD 2011 database

    Data.gov (United States)

    U.S. Environmental Protection Agency — National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium....

  15. State estimation of spatio-temporal phenomena

    Science.gov (United States)

    Yu, Dan

    This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using the Kalman filter (KF) is computationally intractable, and hence, a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of the KF could be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem to place multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model, while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is demonstrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model-based unknown input realization technique which allows us to recover the input
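
    The sketch below shows a generic randomized proper orthogonal decomposition of a snapshot matrix, i.e. a randomized SVD used to build a reduced-order basis cheaply. It conveys the idea of combining POD with random projections but is not the RPOD*/BPOD algorithms of the dissertation; the snapshot matrix is synthetic.

    # Generic randomized POD of a snapshot matrix via a random range sketch.
    import numpy as np

    def randomized_pod(snapshots, rank, oversample=10):
        """snapshots: (n_dof, n_snapshots). Returns an orthonormal basis and singular values."""
        n_dof, n_snap = snapshots.shape
        omega = np.random.randn(n_snap, rank + oversample)   # random test matrix
        sketch = snapshots @ omega                           # range sketch
        q, _ = np.linalg.qr(sketch)                          # orthonormal range basis
        b = q.T @ snapshots                                  # small projected matrix
        u_small, s, _ = np.linalg.svd(b, full_matrices=False)
        return (q @ u_small)[:, :rank], s[:rank]

    # Example: 20,000-DOF synthetic snapshots with an effective rank of ~20.
    X = np.random.randn(20_000, 20) @ np.random.randn(20, 300)
    basis, singular_values = randomized_pod(X, rank=20)
    x_reduced = basis.T @ X[:, 0]        # project a full state onto the ROM coordinates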

  16. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer.

    Science.gov (United States)

    Fetterly, Kenneth A; Favazza, Christopher P

    2016-08-07

    Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame(-1) resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum limited system. Over-estimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased d′ is the sum of the detectability indices associated with the test object (d′o) and non-stationary noise (d′ns). Given the nature of the imaging system and the experimental methods, d′o cannot be directly determined independent of d′ns. However, methods to estimate d′ns independent of d′o were developed. In accordance with the theory, d′ns was subtracted from experimental estimates of d′, providing an unbiased estimate of d′o. Estimates of d′o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′o estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the

  17. A Foundation for Vacuuming Temporal Databases

    DEFF Research Database (Denmark)

    Skyt, Janne; Jensen, Christian Søndergaard; Mark, L.

    2003-01-01

    A wide range of real-world database applications, including financial and medical applications, are faced with accountability and traceability requirements. These requirements lead to the replacement of the usual update-in-place policy by an append-only policy that retains all previous states...... only received little attention. Such vacuuming is called for by, e.g., the laws of many countries and the policies of many businesses. Although necessary, with vacuuming, the database’s perfect recollection of the past may be compromised via, e.g., selective removal of records pertaining to past states...

  18. Deriving temporally continuous soil moisture estimations at fine resolution by downscaling remotely sensed product

    Science.gov (United States)

    Jin, Yan; Ge, Yong; Wang, Jianghao; Heuvelink, Gerard B. M.

    2018-06-01

    Land surface soil moisture (SSM) has important roles in the energy balance of the land surface and in the water cycle. Downscaling of coarse-resolution SSM remote sensing products is an efficient way for producing fine-resolution data. However, the downscaling methods used most widely require full-coverage visible/infrared satellite data as ancillary information. These methods are restricted to cloud-free days, making them unsuitable for continuous monitoring. The purpose of this study is to overcome this limitation to obtain temporally continuous fine-resolution SSM estimations. The local spatial heterogeneities of SSM and multiscale ancillary variables were considered in the downscaling process both to solve the problem of the strong variability of SSM and to benefit from the fusion of ancillary information. The generation of continuous downscaled remote sensing data was achieved via two principal steps. For cloud-free days, a stepwise hybrid geostatistical downscaling approach, based on geographically weighted area-to-area regression kriging (GWATARK), was employed by combining multiscale ancillary variables with passive microwave remote sensing data. Then, the GWATARK-estimated SSM and China Soil Moisture Dataset from Microwave Data Assimilation SSM data were combined to estimate fine-resolution data for cloudy days. The developed methodology was validated by application to the 25-km resolution daily AMSR-E SSM product to produce continuous SSM estimations at 1-km resolution over the Tibetan Plateau. In comparison with ground-based observations, the downscaled estimations showed correlation (R ≥ 0.7) for both ascending and descending overpasses. The analysis indicated the high potential of the proposed approach for producing a temporally continuous SSM product at fine spatial resolution.

  19. A study of estimating cutting depth for multi-pass nanoscale cutting by using atomic force microscopy

    International Nuclear Information System (INIS)

    Lin, Zone-Ching; Hsu, Ying-Chih

    2012-01-01

    This paper studies two models for estimating the cutting depth of multi-pass nanoscale cutting performed with an atomic force microscopy (AFM) probe. One estimates the cutting depth using regression equations of the nanoscale contact pressure (NCP) factor, while the other uses the equation of specific down force energy (SDFE). A diamond-coated AFM probe is used as the cutting tool to carry out multi-pass nanoscale cutting experiments on the surface of a sapphire substrate. In the experiments, different down forces are set and the probe shape is known; each down force is then used to perform multi-pass cutting of the sapphire substrate. From the central cutting depth of the machined groove measured by AFM, the specific down force energy of each down force is calculated. The experimental results reveal that the specific down force energy for each case of multi-pass nanoscale cutting under different down forces is close to a constant value. The paper also compares the per-pass cutting depths estimated by the two theoretical models against the experimental results. It is found that the SDFE model can calculate the cutting depth of each nanoscale cutting pass with a single equation, which makes it easier to use than the multi-regression equations of the NCP factor. Moreover, the cutting depths estimated by the SDFE model are closer to the experimental results. This shows that the proposed SDFE model is an acceptable model.

  20. Advances in knowledge discovery in databases

    CERN Document Server

    Adhikari, Animesh

    2015-01-01

    This book presents recent advances in Knowledge discovery in databases (KDD) with a focus on the areas of market basket database, time-stamped databases and multiple related databases. Various interesting and intelligent algorithms are reported on data mining tasks. A large number of association measures are presented, which play significant roles in decision support applications. This book presents, discusses and contrasts new developments in mining time-stamped data, time-based data analyses, the identification of temporal patterns, the mining of multiple related databases, as well as local patterns analysis.  

  1. A Relational Encoding of a Conceptual Model with Multiple Temporal Dimensions

    Science.gov (United States)

    Gubiani, Donatella; Montanari, Angelo

    The theoretical interest and the practical relevance of a systematic treatment of multiple temporal dimensions is widely recognized in the database and information system communities. Nevertheless, most relational databases have no temporal support at all. A few of them provide a limited support, in terms of temporal data types and predicates, constructors, and functions for the management of time values (borrowed from the SQL standard). One (resp., two) temporal dimensions are supported by historical and transaction-time (resp., bitemporal) databases only. In this paper, we provide a relational encoding of a conceptual model featuring four temporal dimensions, namely, the classical valid and transaction times, plus the event and availability times. We focus our attention on the distinctive technical features of the proposed temporal extension of the relation model. In the last part of the paper, we briefly show how to implement it in a standard DBMS.
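
    A minimal relational sketch of such an encoding is given below, carrying all four temporal dimensions (valid time, transaction time, event time and availability time) as explicit columns of one table. Table and column names are illustrative and not taken from the paper; SQLite is used only to keep the example self-contained, and 9999-12-31 stands for "until changed".

    # Relational encoding of a fact table with four temporal dimensions.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE employee_salary (
        emp_id          INTEGER NOT NULL,
        salary          NUMERIC NOT NULL,
        valid_from      DATE    NOT NULL,   -- valid time: when the fact holds in reality
        valid_to        DATE    NOT NULL,
        tx_from         DATE    NOT NULL,   -- transaction time: when the row was current in the DB
        tx_to           DATE    NOT NULL,
        event_time      DATE    NOT NULL,   -- event time: when the fact-generating event occurred
        available_from  DATE    NOT NULL,   -- availability time: when the fact became usable
        PRIMARY KEY (emp_id, valid_from, tx_from)
    );
    """)
    conn.execute("INSERT INTO employee_salary VALUES "
                 "(1, 52000, '2017-01-01', '9999-12-31', '2017-02-15', '9999-12-31', "
                 "'2017-01-10', '2017-02-15')")

    # Snapshot as of a given valid-time point, as seen at a given transaction-time point.
    query = """
    SELECT emp_id, salary FROM employee_salary
    WHERE valid_from <= :asof AND :asof < valid_to
      AND tx_from    <= :seen AND :seen < tx_to
      AND available_from <= :seen;
    """
    rows = conn.execute(query, {"asof": "2018-01-01", "seen": "2018-06-01"}).fetchall()
    print(rows)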

  2. Online state of charge and model parameter co-estimation based on a novel multi-timescale estimator for vanadium redox flow battery

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet

    2016-01-01

    Highlights: • Battery model parameters and SOC co-estimation is investigated. • The model parameters and OCV are decoupled and estimated independently. • Multiple timescales are adopted to improve precision and stability. • SOC is online estimated without using the open-circuit cell. • The method is robust to aging levels, flow rates, and battery chemistries. - Abstract: A key function of battery management system (BMS) is to provide accurate information of the state of charge (SOC) in real time, and this depends directly on the precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC for vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting different timescales for each estimator independently. Experiments are conducted to assess the performance of the proposed method. Results reveal that the model parameters are online adapted accurately thus the periodical calibration on them can be avoided. The online estimated terminal voltage and SOC are both benchmarked with the reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against the initialization uncertainty, aging states, flow rates, and also battery chemistries.

  3. Dynameomics: a multi-dimensional analysis-optimized database for dynamic protein data.

    Science.gov (United States)

    Kehl, Catherine; Simms, Andrew M; Toofanny, Rudesh D; Daggett, Valerie

    2008-06-01

    The Dynameomics project is our effort to characterize the native-state dynamics and folding/unfolding pathways of representatives of all known protein folds by way of molecular dynamics simulations, as described by Beck et al. (in Protein Eng. Des. Select., the first paper in this series). The data produced by these simulations are highly multidimensional in structure and multi-terabytes in size. Both of these features present significant challenges for storage, retrieval and analysis. For optimal data modeling and flexibility, we needed a platform that supported both multidimensional indices and hierarchical relationships between related types of data and that could be integrated within our data warehouse, as described in the accompanying paper directly preceding this one. For these reasons, we have chosen On-line Analytical Processing (OLAP), a multi-dimensional, analysis-optimized database, as an analytical platform for these data. OLAP is a mature technology in the financial sector, but it has not been used extensively for scientific analysis. Our project is furthermore unusual for its focus on the multidimensional and analytical capabilities of OLAP rather than its aggregation capacities. The dimensional data model and hierarchies are very flexible. The query language is concise for complex analysis and rapid data retrieval. OLAP shows great promise for dynamic protein analysis in bioengineering and biomedical applications. In addition, OLAP may have similar potential for other scientific and engineering applications involving large and complex datasets.

  4. Simulation of anthropogenic CO2 uptake in the CCSM3.1 ocean circulation-biogeochemical model: comparison with data-based estimates

    Directory of Open Access Journals (Sweden)

    S. Khatiwala

    2012-04-01

    Full Text Available The global ocean has taken up a large fraction of the CO2 released by human activities since the industrial revolution. Quantifying the oceanic anthropogenic carbon (Cant) inventory and its variability is important for predicting the future global carbon cycle. The detailed comparison of data-based and model-based estimates is essential for the validation and continued improvement of our prediction capabilities. So far, three global estimates of oceanic Cant inventory that are "data-based" and independent of global ocean circulation models have been produced: one based on the ΔC* method, and two that are based on constraining surface-to-interior transport of tracers, the TTD method and a maximum entropy inversion method (GF). The GF method, in particular, is capable of reconstructing the history of Cant inventory through the industrial era. In the present study we use forward model simulations of the Community Climate System Model (CCSM3.1) to estimate the Cant inventory and compare the results with the data-based estimates. We also use the simulations to test several assumptions of the GF method, including the assumption of constant climate and circulation, which is common to all the data-based estimates. Though the integrated estimates of global Cant inventories are consistent with each other, the regional estimates show discrepancies of up to 50%. The CCSM3 model underestimates the total Cant inventory, in part due to weak mixing and ventilation in the North Atlantic and Southern Ocean. Analyses of different simulation results suggest that key assumptions about ocean circulation and air-sea disequilibrium in the GF method are generally valid on the global scale, but may introduce errors in Cant estimates on regional scales. The GF method should also be used with caution when predicting future oceanic anthropogenic carbon uptake.

  5. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    Science.gov (United States)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal features along with spectral and spatial ones. Domains such as speech recognition or neural machine translation work with inherently temporal data and, today, achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud-filtering as a preprocessing step for many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved state-of-the-art classification accuracies in our experiments on a large number of crop classes with minimal preprocessing compared to other classification approaches.
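
    The sketch below shows a minimal per-pixel temporal encoder for this kind of multi-temporal classification, assuming PyTorch is available. A plain GRU over the per-date band vectors stands in for the paper's convolutional recurrent encoder; spatial context and the full encoder-decoder structure are omitted, and all dimensions are illustrative.

    # Per-pixel sequence classifier over a temporal stack of reflectance vectors.
    import torch
    import torch.nn as nn

    class TemporalPixelClassifier(nn.Module):
        def __init__(self, n_bands=13, hidden=128, n_classes=17):
            super().__init__()
            self.encoder = nn.GRU(input_size=n_bands, hidden_size=hidden,
                                  num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):
            # x: (batch, time_steps, n_bands) -- one reflectance vector per acquisition date
            _, h_n = self.encoder(x)          # h_n: (num_layers, batch, hidden)
            return self.head(h_n[-1])         # class logits from the last layer's final state

    model = TemporalPixelClassifier()
    dummy = torch.randn(32, 46, 13)           # e.g., 46 acquisition dates, 13 bands
    logits = model(dummy)                     # (32, 17) class scores
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 17, (32,)))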

  6. Multi-temporal clustering of continental floods and associated atmospheric circulations

    Science.gov (United States)

    Liu, Jianyu; Zhang, Yongqiang

    2017-12-01

    Investigating clustering of floods has important social, economic and ecological implications. This study examines the clustering of Australian floods at different temporal scales and its possible physical mechanisms. Flood series with different severities are obtained by peaks-over-threshold (POT) sampling in four flood thresholds. At intra-annual scale, Cox regression and monthly frequency methods are used to examine whether and when the flood clustering exists, respectively. At inter-annual scale, dispersion indices with four-time variation windows are applied to investigate the inter-annual flood clustering and its variation. Furthermore, the Kernel occurrence rate estimate and bootstrap resampling methods are used to identify flood-rich/flood-poor periods. Finally, seasonal variation of horizontal wind at 850 hPa and vertical wind velocity at 500 hPa are used to investigate the possible mechanisms causing the temporal flood clustering. Our results show that: (1) flood occurrences exhibit clustering at intra-annual scale, which are regulated by climate indices representing the impacts of the Pacific and Indian Oceans; (2) the flood-rich months occur from January to March over northern Australia, and from July to September over southwestern and southeastern Australia; (3) stronger inter-annual clustering takes place across southern Australia than northern Australia; and (4) Australian floods are characterised by regional flood-rich and flood-poor periods, with 1987-1992 identified as the flood-rich period across southern Australia, but the flood-poor period across northern Australia, and 2001-2006 being the flood-poor period across most regions of Australia. The intra-annual and inter-annual clustering and temporal variation of flood occurrences are in accordance with the variation of atmospheric circulation. These results provide relevant information for flood management under the influence of climate variability, and, therefore, are helpful for developing
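
    As an illustration of the inter-annual clustering measure, the sketch below computes a moving-window dispersion index (variance-to-mean ratio) of annual peaks-over-threshold counts; values above 1 indicate over-dispersion, i.e. clustering. The threshold, window length and synthetic flow series are illustrative and not the study's settings.

    # Moving-window dispersion index of annual POT counts.
    import numpy as np

    def annual_pot_counts(years, peaks, threshold):
        """Count exceedances of `threshold` per calendar year."""
        exceed_years = years[peaks > threshold]
        all_years = np.arange(years.min(), years.max() + 1)
        return all_years, np.array([(exceed_years == y).sum() for y in all_years])

    def moving_dispersion_index(counts, window=11):
        """Variance-to-mean ratio of counts in sliding windows (Poisson gives ~1)."""
        di = np.full(len(counts), np.nan)
        half = window // 2
        for i in range(half, len(counts) - half):
            w = counts[i - half:i + half + 1]
            di[i] = w.var(ddof=1) / w.mean() if w.mean() > 0 else np.nan
        return di

    # Synthetic example: 60 years of daily flows with an imposed flood-rich epoch.
    rng = np.random.default_rng(1)
    years = np.repeat(np.arange(1958, 2018), 365)
    flows = rng.gamma(2.0, 50.0, size=years.size)
    flows[(years >= 1987) & (years <= 1992)] *= 1.5       # artificial flood-rich period
    _, counts = annual_pot_counts(years, flows, threshold=np.quantile(flows, 0.995))
    print(moving_dispersion_index(counts))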

  7. Comparative analysis of MR imaging, Ictal SPECT and EEG in temporal lobe epilepsy: a prospective IAEA multi-center study

    Energy Technology Data Exchange (ETDEWEB)

    Zaknun, John J. [University Hospital of Innsbruck, Department of Nuclear Medicine, Innsbruck (Austria); International Atomic Energy Agency (IAEA), Nuclear Medicine Section, Division of Human Health, Vienna (Austria); IAEA, Nuclear Medicine Section, Division of Human Health, Wagramer Strasse 5, P.O. Box 100, Wien (Austria); Bal, Chandrasekhar [All India Institute of Medical Sciences, Department of Nuclear Medicine, New Delhi (India); Maes, Alex [Katholieke Universiteit Leuven, Leuven (Belgium); AZ Groeninge, Department of Nuclear Medicine, Kortrijk (Belgium); Tepmongkol, Supatporn [Chulalongkorn University, Nuclear Medicine Division, Department of Radiology, Bangkok (Thailand); Vazquez, Silvia [Instituto de Investigaciones Neurologicas, FLENI, Department of Radiology, Buenos Aires (Argentina); Dupont, Patrick [Katholieke Universiteit Leuven, Leuven (Belgium); Dondi, Maurizio [Ospedale Maggiore, Department of Nuclear Medicine, Bologna (Italy); International Atomic Energy Agency (IAEA), Nuclear Medicine Section, Division of Human Health, Vienna (Austria)

    2008-01-15

    MR imaging, ictal single-photon emission CT (SPECT) and ictal EEG play important roles in the presurgical localization of epileptic foci. This multi-center study was established to investigate whether the complementary role of perfusion SPECT, MRI and EEG for presurgical localization of temporal lobe epilepsy could be confirmed in a prospective setting involving centers from India, Thailand, Italy and Argentina. We studied 74 patients who underwent interictal and ictal EEG, interictal and ictal SPECT and MRI before surgery of the temporal lobe. In all but three patients, histology was reported. The clinical outcome was assessed using Engel's classification. Sensitivity values of all imaging modalities were calculated, and the add-on value of SPECT was assessed. Outcome (Engel's classification) in 74 patients was class I, 89%; class II, 7%; class III, 3%; and IV, 1%. Regarding the localization of seizure origin, sensitivity was 84% for ictal SPECT, 70% for ictal EEG, 86% for MRI, 55% for interictal SPECT and 40% for interictal EEG. Add-on value of ictal SPECT was shown by its ability to correctly localize 17/22 (77%) of the seizure foci missed by ictal EEG and 8/10 (80%) of the seizure foci not detected by MRI. This prospective multi-center trial, involving centers from different parts of the world, confirms that ictal perfusion SPECT is an effective diagnostic modality for correctly identifying seizure origin in temporal lobe epilepsy, providing complementary information to ictal EEG and MRI. (orig.)

  8. Comparative analysis of MR imaging, Ictal SPECT and EEG in temporal lobe epilepsy: a prospective IAEA multi-center study

    International Nuclear Information System (INIS)

    Zaknun, John J.; Bal, Chandrasekhar; Maes, Alex; Tepmongkol, Supatporn; Vazquez, Silvia; Dupont, Patrick; Dondi, Maurizio

    2008-01-01

    MR imaging, ictal single-photon emission CT (SPECT) and ictal EEG play important roles in the presurgical localization of epileptic foci. This multi-center study was established to investigate whether the complementary role of perfusion SPECT, MRI and EEG for presurgical localization of temporal lobe epilepsy could be confirmed in a prospective setting involving centers from India, Thailand, Italy and Argentina. We studied 74 patients who underwent interictal and ictal EEG, interictal and ictal SPECT and MRI before surgery of the temporal lobe. In all but three patients, histology was reported. The clinical outcome was assessed using Engel's classification. Sensitivity values of all imaging modalities were calculated, and the add-on value of SPECT was assessed. Outcome (Engel's classification) in 74 patients was class I, 89%; class II, 7%; class III, 3%; and IV, 1%. Regarding the localization of seizure origin, sensitivity was 84% for ictal SPECT, 70% for ictal EEG, 86% for MRI, 55% for interictal SPECT and 40% for interictal EEG. Add-on value of ictal SPECT was shown by its ability to correctly localize 17/22 (77%) of the seizure foci missed by ictal EEG and 8/10 (80%) of the seizure foci not detected by MRI. This prospective multi-center trial, involving centers from different parts of the world, confirms that ictal perfusion SPECT is an effective diagnostic modality for correctly identifying seizure origin in temporal lobe epilepsy, providing complementary information to ictal EEG and MRI. (orig.)

  9. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil

    2015-08-19

    Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop travel-time approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insight into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  10. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil; Alkhalifah, Tariq Ali

    2015-01-01

    Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop travel-time approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insight into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  11. Databases spatiotemporal taxonomy with moving objects. Theme review

    Directory of Open Access Journals (Sweden)

    Sergio Alejandro Rojas Barbosa

    2018-01-01

    Full Text Available Context: In the last decade, databases have evolved to the point that we no longer speak only of spatial databases, but of spatio-temporal databases. This means that each event or record carries a spatial (location) variable and a temporal variable, which allows previously stored records to be updated. Method: This paper presents a literature review of concepts and spatio-temporal data models, specifically models for moving objects. Results: A taxonomy of queries over moving-object data models is presented, organized by the persistence of the query (time, location, movement, object and patterns), together with the different proposals for indexes and structures. Conclusions: Implementing such model proposals, including indexes and structures, can lead to standardization problems, which is why they should follow the norms and standards of the OGC (Open Geospatial Consortium).

  12. Database Organisation in a Web-Enabled Free and Open-Source Software (foss) Environment for Spatio-Temporal Landslide Modelling

    Science.gov (United States)

    Das, I.; Oberai, K.; Sarathi Roy, P.

    2012-07-01

    Landslides manifest themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the World Wide Web (WWW) promotes the spread of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios, with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free and Open Source Software (FOSS). FOSS drastically reduces the cost of the project, as proprietary software licenses are very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway corridor in the Indian Himalayas. All the geo-environmental datasets, along with the landslide susceptibility map, were served through a WebGIS client interface. The open-source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front end, and PostgreSQL with the PostGIS extension serves as the back end for the web-enabled landslide spatio-temporal database. This dynamic visualization process through a web platform brings the understanding of landslides and the resulting damage closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open-source or proprietary GIS software.
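
    Because the susceptibility layer is exposed as an OGC WFS, any compliant client can retrieve it with a standard key-value-pair GetFeature request; the sketch below does so with Python's requests library. The endpoint URL, mapfile path and layer name are placeholders, not the actual service.

    # Retrieving features from an OGC WFS endpoint with a plain GetFeature request.
    import requests

    WFS_URL = "https://example.org/cgi-bin/mapserv"      # hypothetical MapServer endpoint
    params = {
        "map": "/maps/landslide.map",                    # hypothetical mapfile path
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typename": "landslide_susceptibility",          # hypothetical layer name
        "outputFormat": "GML2",
        "maxFeatures": 100,
    }
    response = requests.get(WFS_URL, params=params, timeout=30)
    response.raise_for_status()
    gml = response.text                                  # GML features for further processing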

  13. Power source roadmaps using bibliometrics and database tomography

    International Nuclear Information System (INIS)

    Kostoff, R.N.; Tshiteya, R.; Pfeil, K.M.; Humenik, J.A.; Karypis, G.

    2005-01-01

    Database Tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multi-word phrase frequencies and phrase proximities (physical closeness of the multi-word technical phrases) from any type of large textual database, to augment (2) interpretative capabilities of the expert human analyst. DT was used to derive technical intelligence from a Power Sources database derived from the Science Citation Index. Phrase frequency analysis by the technical domain experts provided the pervasive technical themes of the Power Sources database, and the phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the Power Sources literature supplemented the DT results with author/journal/institution/country publication and citation data

  14. Mapping transient hyperventilation induced alterations with estimates of the multi-scale dynamics of BOLD signal.

    Directory of Open Access Journals (Sweden)

    Vesa J Kiviniemi

    2009-07-01

    Full Text Available Temporal blood oxygen level dependent (BOLD) contrast signals in functional MRI during rest may be characterized by power spectral distribution (PSD) trends of the form 1/f^α. Trends with 1/f characteristics comprise fractal properties with repeating oscillation patterns on multiple time scales. Estimates of the fractal properties enable the quantification of phenomena that may otherwise be difficult to measure, such as transient, non-linear changes. In this study it was hypothesized that the fractal metrics of 1/f BOLD signal trends can map changes related to dynamic, multi-scale alterations in cerebral blood flow (CBF) after a transient hyperventilation challenge. Twenty-three normal adults were imaged in a resting state before and after hyperventilation. Different variables characterizing the trends (the 1/f trend constant α, the fractal dimension Df, and the Hurst exponent H) were measured from BOLD signals. The results show that fractal metrics of the BOLD signal follow the fractional Gaussian noise model, even during the dynamic CBF change that follows hyperventilation. The most dominant effect on the fractal metrics was detected in grey matter, in line with previous hyperventilation vaso-reactivity studies. The α was also able to differentiate blood vessels from grey matter changes. Df was most sensitive to grey matter. H correlated with default mode network areas before hyperventilation, but this pattern vanished after hyperventilation due to a global increase in H. In the future, resting-state fMRI combined with fractal metrics of the BOLD signal may be used for analyzing multi-scale alterations of cerebral blood flow.
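
    As a small illustration of the 1/f^α characterization described above, the sketch below fits the spectral exponent of a synthetic time series by a log-log linear regression of its Welch periodogram. This is only one common way of estimating such a trend and is not the authors' pipeline; the signal, sampling rate and fit range are assumptions.

```python
import numpy as np
from scipy.signal import welch

def spectral_exponent(signal, fs):
    """Return alpha from a least-squares fit of log PSD versus log frequency."""
    f, pxx = welch(signal, fs=fs, nperseg=min(256, len(signal)))
    mask = f > 0                                  # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log(f[mask]), np.log(pxx[mask]), 1)
    return -slope                                 # PSD ~ 1/f^alpha  =>  alpha = -slope

rng = np.random.default_rng(0)
# Synthetic "BOLD-like" series: a random walk plus white noise, 600 volumes.
bold = np.cumsum(rng.standard_normal(600)) * 0.1 + rng.standard_normal(600)
print(f"estimated alpha: {spectral_exponent(bold, fs=0.5):.2f}")  # assumes TR = 2 s
```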

  15. Mapping Transient Hyperventilation Induced Alterations with Estimates of the Multi-Scale Dynamics of BOLD Signal.

    Science.gov (United States)

    Kiviniemi, Vesa; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Haapea, Marianne; Silven, Olli; Tervonen, Osmo

    2009-01-01

    Temporal blood oxygen level dependent (BOLD) contrast signals in functional MRI during rest may be characterized by power spectral distribution (PSD) trends of the form 1/f(alpha). Trends with 1/f characteristics comprise fractal properties with repeating oscillation patterns on multiple time scales. Estimates of the fractal properties enable the quantification of phenomena that may otherwise be difficult to measure, such as transient, non-linear changes. In this study it was hypothesized that the fractal metrics of 1/f BOLD signal trends can map changes related to dynamic, multi-scale alterations in cerebral blood flow (CBF) after a transient hyperventilation challenge. Twenty-three normal adults were imaged in a resting state before and after hyperventilation. Different variables characterizing the trends (the 1/f trend constant alpha, the fractal dimension D(f), and the Hurst exponent H) were measured from BOLD signals. The results show that fractal metrics of the BOLD signal follow the fractional Gaussian noise model, even during the dynamic CBF change that follows hyperventilation. The most dominant effect on the fractal metrics was detected in grey matter, in line with previous hyperventilation vaso-reactivity studies. The alpha was also able to differentiate blood vessels from grey matter changes. D(f) was most sensitive to grey matter. H correlated with default mode network areas before hyperventilation, but this pattern vanished after hyperventilation due to a global increase in H. In the future, resting-state fMRI combined with fractal metrics of the BOLD signal may be used for analyzing multi-scale alterations of cerebral blood flow.

  16. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    DEFF Research Database (Denmark)

    Thomsen, Nanna Isbak; Troldborg, Mads; McKnight, Ursula S.

    2012-01-01

    site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models where each model is believed to be realistic representation of the given site, based on the current level...... the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We...... propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same...

  17. AUTOMATIC CLOUD DETECTION FROM MULTI-TEMPORAL SATELLITE IMAGES: TOWARDS THE USE OF PLÉIADES TIME SERIES

    Directory of Open Access Journals (Sweden)

    N. Champion

    2012-08-01

    Full Text Available Unlike aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps to perform when processing satellite images, as they may alter subsequent procedures such as atmospheric corrections, DSM production or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French Mapping Agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and is based on a region-growing procedure. Seeds (corresponding to clouds) are first extracted through a pixel-to-pixel comparison between the images contained in the time series (the presence of a cloud is assumed here to be related to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, a particular goal of this paper is to show to what extent and in which way our method can be adapted to this kind of imagery.
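
    A hedged sketch of the seed-extraction step described above: pixels whose reflectance varies strongly across the co-registered time series are flagged as candidate cloud seeds. The threshold and the synthetic stack are illustrative assumptions, and the subsequent region growing that refines the seeds is omitted.

```python
import numpy as np

def cloud_seeds(stack, threshold=0.25):
    """stack: (n_dates, H, W) co-registered reflectance images."""
    variation = stack.max(axis=0) - stack.min(axis=0)   # per-pixel temporal range
    return variation > threshold                        # True where a cloud likely appeared

rng = np.random.default_rng(4)
stack = rng.random((9, 128, 128)) * 0.1                 # 9-date panchromatic-like series
stack[4, 40:60, 40:60] += 0.6                           # simulate a bright cloud on one date
print(cloud_seeds(stack).sum(), "seed pixels")
```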

  18. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)

    2016-01-15

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking
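
    A minimal sketch of an α–β filter gated by a similarity threshold, in the spirit of the ABST method described above: a template-matching measurement is accepted only when its correlation similarity is high enough, otherwise the filter coasts on the predicted motion. The gains, threshold, frame rate and measurements below are illustrative, not the authors' values.

```python
def abst_track(measurements, similarities, dt=1 / 20, alpha=0.5, beta=0.1, sim_min=0.8):
    """1-D alpha-beta filter over template-matching positions (mm)."""
    x, v = measurements[0], 0.0            # initial position and velocity
    track = [x]
    for z, sim in zip(measurements[1:], similarities[1:]):
        x_pred = x + v * dt                # state prediction
        if sim >= sim_min:                 # accept the match only if similarity is high
            r = z - x_pred                 # innovation
            x = x_pred + alpha * r
            v = v + (beta / dt) * r
        else:
            x = x_pred                     # otherwise rely on the predicted motion
        track.append(x)
    return track

# The fourth measurement is an outlier with low similarity and is rejected.
print(abst_track([0.0, 0.6, 1.1, 9.0, 2.1], [1.0, 0.95, 0.93, 0.2, 0.9]))
```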

  19. SMAP Multi-Temporal Soil Moisture and Vegetation Optical Depth Retrievals in Vegetated Regions Including Higher-Order Soil-Canopy Interactions

    Science.gov (United States)

    Feldman, A.; Akbar, R.; Konings, A. G.; Piles, M.; Entekhabi, D.

    2017-12-01

    The Soil Moisture Active Passive (SMAP) mission utilizes a zeroth order radiative transfer model, known as the tau-omega model, to retrieve soil moisture from microwave brightness temperature observations. This model neglects first order scattering which is significant at L-Band in vegetated regions, or 30% of land cover. Previous higher order algorithms require extensive in-situ measurements and characterization of canopy layer physical properties. We propose a first order retrieval algorithm that approximately characterizes the eight first order emission pathways using rough surface reflectivity, vegetation optical depth (VOD), and scattering albedo terms. The recently developed Multi-Temporal Dual Channel Algorithm (MT-DCA) then retrieves these three parameters in a forward model without ancillary information under the assumption of temporally static albedo and constant vegetation water content between three day SMAP revisits. The approximated scattering terms are determined to be conservative estimates of analytically derived first order scattering terms. In addition, we find the first order algorithm to be more sensitive to surface emission than the tau-omega model. The simultaneously retrieved VOD, previously demonstrated to be proportional to vegetation water content, can provide insight into vegetation dynamics in regions with significant phenology. Specifically, dry tropical forests exhibit an increase in VOD during the dry season in alignment with prior studies that suggest that certain vegetative species green up during the dry season despite limited water availability. VOD retrieved using the first order algorithm and MT-DCA framework can therefore contribute to understanding of tropical forests' role in the carbon, energy, and water cycles, which has yet to be fully explained.

  20. Scalable Earth-observation Analytics for Geoscientists: Spacetime Extensions to the Array Database SciDB

    Science.gov (United States)

    Appel, Marius; Lahn, Florian; Pebesma, Edzer; Buytaert, Wouter; Moulds, Simon

    2016-04-01

    Today's amount of freely available data requires scientists to spend large parts of their work on data management. This is especially true in environmental sciences when working with large remote sensing datasets, such as those obtained from earth-observation satellites like the Sentinel fleet. Many frameworks like SpatialHadoop or Apache Spark address the scalability but target programmers rather than data analysts, and are not dedicated to imagery or array data. In this work, we use the open-source data management and analytics system SciDB to bring large earth-observation datasets closer to analysts. Its underlying data representation as multidimensional arrays fits naturally to earth-observation datasets, distributes storage and computational load over multiple instances by multidimensional chunking, and also enables efficient time-series based analyses, which are usually difficult using file- or tile-based approaches. Existing interfaces to R and Python furthermore allow for scalable analytics with relatively little learning effort. However, interfacing SciDB and file-based earth-observation datasets that come as tiled temporal snapshots requires a lot of manual bookkeeping during ingestion, and SciDB natively only supports loading data from CSV-like and custom binary formatted files, which currently limits its practical use in earth-observation analytics. To make it easier to work with large multi-temporal datasets in SciDB, we developed software tools that enrich SciDB with earth observation metadata and allow working with commonly used file formats: (i) the SciDB extension library scidb4geo simplifies working with spatiotemporal arrays by adding relevant metadata to the database and (ii) the Geospatial Data Abstraction Library (GDAL) driver implementation scidb4gdal allows to ingest and export remote sensing imagery from and to a large number of file formats. Using added metadata on temporal resolution and coverage, the GDAL driver supports time-based ingestion of

  1. Large-Area Landslides Monitoring Using Advanced Multi-Temporal InSAR Technique over the Giant Panda Habitat, Sichuan, China

    Directory of Open Access Journals (Sweden)

    Panpan Tang

    2015-07-01

    Full Text Available The region near Dujiangyan City and Wenchuan County, Sichuan, China, including significant giant panda habitats, was severely impacted by the Wenchuan earthquake. Large-area landslides occurred and seriously threatened the lives of people and giant pandas. In this paper, we report the development of an enhanced multi-temporal interferometric synthetic aperture radar (MTInSAR) methodology to monitor potential post-seismic landslides by analyzing coherent scatterer (CS) and distributed scatterer (DS) points extracted from multi-temporal L-band ALOS/PALSAR data in an integrated manner. Through the integration of phase optimization and the mitigation of orbit and topography-related phase errors, surface deformations in the study area were derived: the rates in the line of sight (LOS) direction ranged from −7 to 1.5 cm/a. Dozens of potential landslides, distributed mainly along the Minjiang River, the Longmenshan Fault, and in other high-altitude areas, were detected. These findings matched the distribution of previous landslides. The InSAR-derived results demonstrated that some previous landslides are still active and that many unstable slopes have developed, with significant probabilities of future massive failures. The impact of landslides on the giant panda habitat, although it ranged from low to moderate, will continue to be a concern for conservationists for some time in the future.

  2. Storing XML Documents in Databases

    NARCIS (Netherlands)

    A.R. Schmidt; S. Manegold (Stefan); M.L. Kersten (Martin); L.C. Rivero; J.H. Doorn; V.E. Ferraggine

    2005-01-01

    textabstractThe authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML

  3. Estimation of winter wheat canopy nitrogen density at different growth stages based on Multi-LUT approach

    Science.gov (United States)

    Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang

    2017-10-01

    Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach, based on N-PROSAIL model parameters set at different growth stages, was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with the measured CND, with a determination coefficient (R2) of 0.80 and a corresponding root mean square error (RMSE) of 1.16 g m-2. The computation time for one sample estimation was only 6 ms on a test machine with a CPU configuration of Intel(R) Core(TM) i5-2430 @2.40GHz quad-core. These results confirm the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
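
    A minimal sketch of a lookup-table inversion in the spirit of the Multi-LUT approach described above: the canopy N density attached to the LUT entry whose simulated reflectance best matches the measurement is returned. The LUT contents, band count and cost function below are synthetic placeholders, not the authors' N-PROSAIL tables.

```python
import numpy as np

def lut_invert(measured_refl, lut_refl, lut_cnd):
    """lut_refl: (n_entries, n_bands) simulated spectra; lut_cnd: (n_entries,) CND values."""
    cost = np.sum((lut_refl - measured_refl) ** 2, axis=1)   # squared-error cost per entry
    return lut_cnd[np.argmin(cost)]                          # best-matching entry's CND

rng = np.random.default_rng(1)
lut_refl = rng.random((5000, 4))               # e.g. 4 spectral bands, 5000 simulations
lut_cnd = rng.uniform(0.5, 8.0, 5000)          # canopy N density, g m^-2
print(lut_invert(np.array([0.3, 0.4, 0.5, 0.2]), lut_refl, lut_cnd))
```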

  4. A database for estimating organ dose for coronary angiography and brain perfusion CT scans for arbitrary spectra and angular tube current modulation

    International Nuclear Information System (INIS)

    Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos; Schmidt, Taly Gilat

    2012-01-01

    Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate total organ doses calculated using our database are
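
    The multiply-and-sum use of the dose tables described above can be illustrated with a short sketch: weight the normalized dose-deposition table by the incident spectrum and the per-projection photon counts, then sum over energies and angles. The array shapes, spectrum and photon counts below are placeholders, not values from the actual database.

```python
import numpy as np

n_energies, n_views = 146, 1000                           # 5-150 keV bins, 1000 projections
dose_table = np.random.rand(n_energies, n_views) * 1e-15  # Gy per emitted photon (placeholder)
spectrum = np.random.rand(n_energies)                     # relative photons per energy bin
spectrum /= spectrum.sum()
photons_per_view = np.full(n_views, 1e9)                  # angular tube current modulation goes here

# Weighted sum over all energies and projection angles gives the total organ dose.
organ_dose = np.einsum("e,v,ev->", spectrum, photons_per_view, dose_table)
print(f"estimated organ dose: {organ_dose:.3e} Gy")
```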

  5. Multi-Dimensional Aggregation for Temporal Data

    DEFF Research Database (Denmark)

    Böhlen, M. H.; Gamper, J.; Jensen, Christian Søndergaard

    2006-01-01

    Business Intelligence solutions, encompassing technologies such as multi-dimensional data modeling and aggregate query processing, are being applied increasingly to non-traditional data. This paper extends multi-dimensional aggregation to apply to data with associated interval values that capture...... that the data holds for each point in the interval, as well as the case where the data holds only for the entire interval, but must be adjusted to apply to sub-intervals. The paper reports on an implementation of the new operator and on an empirical study that indicates that the operator scales to large data...

  6. A METHOD TO ESTIMATE TEMPORAL INTERACTION IN A CONDITIONAL RANDOM FIELD BASED APPROACH FOR CROP RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. M. A. Diaz

    2016-06-01

    Full Text Available This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF)-based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is treated as an optimization problem whose goal is to find the transition matrix that maximizes the CRF performance over a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method was able to substantially outperform estimates based on joint or conditional class transition probabilities, which rely on training samples.

  7. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    Science.gov (United States)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike moments domain is applied. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated in terms of rate-distortion performance for different inter-view and temporal estimation quality conditions.

  8. Applications of TRMM-based Multi-Satellite Precipitation Estimation for Global Runoff Simulation: Prototyping a Global Flood Monitoring System

    Science.gov (United States)

    Hong, Yang; Adler, Robert F.; Huffman, George J.; Pierce, Harold

    2008-01-01

    Advances in flood monitoring/forecasting have been constrained by the difficulty in estimating rainfall continuously over space (catchment-, national-, continental-, or even global-scale areas) and at flood-relevant time scales. With the recent availability of satellite rainfall estimates at fine time and space resolution, this paper describes a prototype research framework for global flood monitoring by combining real-time satellite observations with a database of global terrestrial characteristics through a hydrologically relevant modeling scheme. Four major components included in the framework are (1) real-time precipitation input from the NASA TRMM-based Multi-satellite Precipitation Analysis (TMPA); (2) a central geospatial database to preprocess the land surface characteristics: water divides, slopes, soils, land use, flow directions, flow accumulation, drainage network, etc.; (3) a modified distributed hydrological model to convert rainfall to runoff and route the flow through the stream network in order to predict the timing and severity of the flood wave; and (4) an open-access web interface to quickly disseminate flood alerts for potential decision-making. Retrospective simulations for 1998-2006 demonstrate that the Global Flood Monitor (GFM) system performs consistently at both station and catchment levels. The GFM website (experimental version) has been running in near real-time in an effort to offer a cost-effective solution to the ultimate challenge of building natural disaster early warning systems for the data-sparse regions of the world. The interactive GFM website shows close-up maps of the flood risks overlaid on topography/population or integrated with the Google-Earth visualization tool. One additional capability, which extends forecast lead-time by assimilating QPF into the GFM, will also be implemented in the future.

  9. Estimating Phenomenological Parameters in Multi-Assets Markets

    Science.gov (United States)

    Raffaelli, Giacomo; Marsili, Matteo

    Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.

  10. Incorporating Satellite Precipitation Estimates into a Radar-Gauge Multi-Sensor Precipitation Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Yuxiang He

    2018-01-01

    Full Text Available This paper presents a new and enhanced fusion module for the Multi-Sensor Precipitation Estimator (MPE) that objectively blends real-time satellite quantitative precipitation estimates (SQPE) with radar and gauge estimates. This module consists of a preprocessor that mitigates systematic bias in SQPE, and a two-way blending routine that statistically fuses the adjusted SQPE with radar estimates. The preprocessor not only corrects systematic bias in SQPE, but also improves the spatial distribution of precipitation based on SQPE and makes it closely resemble that of radar-based observations. The module uses a more sophisticated radar-satellite merging technique to blend the preprocessed datasets, and provides a better overall QPE product. The performance of the new satellite-radar-gauge blending module is assessed using independent rain gauge data over a five-year period between 2003 and 2007, and the assessment compares the accuracy of the newly developed satellite-radar-gauge (SRG) blended products with that of radar-gauge products (which represent the MPE algorithm currently used in NWS (National Weather Service) operations) over two regions: (I) inside the effective radar coverage and (II) immediately outside the radar coverage. The outcomes of the evaluation indicate that (a) ingest of SQPE over areas within effective radar coverage improves the quality of QPE by mitigating the errors in radar estimates in region I; and (b) blending of radar, gauge, and satellite estimates over region II leads to a reduction of errors relative to bias-corrected SQPE. In addition, the new module alleviates the discontinuities along the boundaries of the effective radar coverage otherwise seen when SQPE is used directly to fill the areas outside of effective radar coverage.

  11. Temporal Decorrelation Effect in Carbon Stocks Estimation Using Polarimetric Interferometry Synthetic Aperture Radar (PolInSAR (Case Study: Southeast Sulawesi Tropical Forest

    Directory of Open Access Journals (Sweden)

    Laode M Golok Jaya

    2017-07-01

    Full Text Available This paper aims to analyse the effect of temporal decorrelation on carbon stock estimation. Estimation of carbon stocks plays an important role, particularly in understanding the global carbon cycle in the atmosphere in the context of climate change mitigation efforts. The PolInSAR technique combines the advantages of the Polarimetric Synthetic Aperture Radar (PolSAR) and Interferometric Synthetic Aperture Radar (InSAR) techniques, which have made significant contributions to radar mapping technology in the last few years. In carbon stock estimation, PolInSAR provides information about the vertical vegetation structure to estimate carbon stocks in the forest layers. Two coherent full-polarimetric ALOS PALSAR Synthetic Aperture Radar (SAR) images with a 46-day temporal baseline were used in this research. The study was carried out in a Southeast Sulawesi tropical forest. The research method compared three interferometric phase coherence images affected by temporal decorrelation and their impacts on the Random Volume over Ground (RVoG) model. This research showed that the 46-day temporal baseline has a significant impact on estimating tree heights of the forest cover, where the accuracy decreases from R2=0.7525 (standard deviation of tree heights 2.75 meters) to R2=0.4435 (standard deviation 4.68 meters) and R2=0.3772 (standard deviation 3.15 meters), respectively. However, coherence optimisation can provide the best coherence image to produce a good accuracy of carbon stocks.

  12. A multi-resolution envelope-power based model for speech intelligibility

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Ewert, Stephan D.; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope power signal-to-noise ratio (SNRenv) after modulation-frequency selective processing. Changes in this metric were shown to account well...... to conditions with stationary interferers, due to the long-term integration of the envelope power, and cannot account for increased intelligibility typically obtained with fluctuating maskers. Here, a multi-resolution version of the sEPSM is presented where the SNRenv is estimated in temporal segments...... with a modulation-filter dependent duration. The multi-resolution sEPSM is demonstrated to account for intelligibility obtained in conditions with stationary and fluctuating interferers, and noisy speech distorted by reverberation or spectral subtraction. The results support the hypothesis that the SNRenv...

  13. Tibetan Magmatism Database

    Science.gov (United States)

    Chapman, James B.; Kapp, Paul

    2017-11-01

    A database containing previously published geochronologic, geochemical, and isotopic data on Mesozoic to Quaternary igneous rocks in the Himalayan-Tibetan orogenic system is presented. The database is intended to serve as a repository for new and existing igneous rock data and is publicly accessible through a web-based platform that includes an interactive map and data table interface with search, filtering, and download options. To illustrate the utility of the database, the age, location, and εHf(t) composition of magmatism from the central Gangdese batholith in the southern Lhasa terrane are compared. The data identify three high-flux events, which peak at 93, 50, and 15 Ma. They are characterized by inboard arc migration and a temporal and spatial shift to more evolved isotopic compositions.

  14. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh

    2014-06-01

    We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
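
    A rough Python sketch of the "smoothing and thresholding" global update described above. The authors provide sample MATLAB code for the multi-label Chan-Vese case; the snippet below is not that code, and its data-fidelity term is a synthetic placeholder used only to show the structure of the update (smooth each label's indicator, then assign every pixel to the label with the largest score).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def global_update(labels, n_labels, data_cost, sigma=2.0, lam=1.0):
    """labels: (H, W) int array; data_cost: (n_labels, H, W) region-fit costs."""
    scores = []
    for k in range(n_labels):
        indicator = (labels == k).astype(float)
        # Smoothing plays the role of the regularizer, data_cost the fidelity term.
        scores.append(lam * gaussian_filter(indicator, sigma) - data_cost[k])
    return np.argmax(np.stack(scores), axis=0)   # thresholding = pixelwise argmax

labels = np.random.randint(0, 3, (64, 64))       # initial 3-label partition
data_cost = np.random.rand(3, 64, 64)            # placeholder fidelity costs
print(global_update(labels, 3, data_cost).shape)
```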

  15. The evolution of active Lavina di Roncovetro landslides by multi-temporal high-resolution topographic data

    Science.gov (United States)

    Isola, Ilaria; Fornaciai, Alessandro; Favalli, Massimiliano; Gigli, Giovanni; Nannipieri, Luca; Mucchi, Lorenzo; Intrieri, Emanuele; Pizziolo, Marco; Bertolini, Giovanni; Trippi, Federico; Casagli, Nicola; Schina, Rosa; Carnevale, Ennio

    2017-04-01

    High-resolution topographic data have been collected over the Lavina di Roncovetro active landslide (Reggio Emilia, Italy) for about three years using various methods and technologies. The Lavina di Roncovetro landslide can be considered a fluid-viscous mudflow, which can reach a maximum downslope flow rate of 10 m/day. The landslide started between the middle and the end of the 19th century and has since evolved rapidly, mainly through the retrogression of the crown, which now reaches the top of Mount Staffola. Within the framework of the EU Wireless Sensor Network for Ground Instability Monitoring (Wi-GIM) project (LIFE12ENV/IT/001033), the Lavina di Roncovetro landslide has been periodically surveyed using technologies that span from terrestrial and aerial LiDAR to the Structure from Motion (SfM) photogrammetry method based on Unmanned Aerial Vehicle (UAV) and aerial surveys. These data were used to create six high-resolution Digital Terrain Models (DTMs), which image the landslide surface in March 2014, October 2014, June 2015, July 2015, January 2016 and December 2016. The multi-temporal high-resolution topographic data have been used for qualitative and quantitative morphometric analysis and topographic change detection of the landslide, with the aim of estimating and mapping the volume of removed and/or accumulated material, the average rates of vertical and horizontal displacement, and the deformation structures affecting the landslide over the investigated period.
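
    A simple sketch of the topographic change detection workflow described above: difference two co-registered DTMs and convert the elevation change into accumulated and removed volumes. The grids, cell size and noise floor below are illustrative assumptions, not the project's actual data.

```python
import numpy as np

def volume_change(dtm_t0, dtm_t1, cell_area=1.0, noise_floor=0.1):
    """Return (accumulation, ablation) volumes in m^3 from two co-registered DTMs."""
    dz = dtm_t1 - dtm_t0
    dz[np.abs(dz) < noise_floor] = 0.0            # ignore changes below the DTM accuracy
    accumulation = dz[dz > 0].sum() * cell_area   # deposited material
    ablation = -dz[dz < 0].sum() * cell_area      # removed material
    return accumulation, ablation

rng = np.random.default_rng(2)
dtm_2014 = rng.random((500, 500)) * 5 + 600       # synthetic 1 m grid, elevations in m
dtm_2016 = dtm_2014 + rng.normal(0, 0.3, (500, 500))
print(volume_change(dtm_2014, dtm_2016))
```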

  16. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function

    OpenAIRE

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-01

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF-theory to dual and multi-wavelength PSA-synthesis when several simultaneous laser-colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and minimum number of temporal phase-shifted interferograms. The DW-PSA synthesi...

  17. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    Science.gov (United States)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models, where each model is believed to be a realistic representation of the given site, based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from
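
    A conceptual sketch of the multi-model Monte Carlo idea just described: each conceptual model carries its own parameter distributions, and pooling the resulting samples yields uncertainty bounds on the mass discharge estimate. The models, distributions, discharge formula and equal model weighting below are invented for illustration and are not the authors' site data.

```python
import numpy as np

rng = np.random.default_rng(5)

def mass_discharge(conc, darcy_flux, area):
    # kg/yr, assuming conc in kg/m^3, flux in m/yr and cross-sectional area in m^2
    return conc * darcy_flux * area

# Two hypothetical conceptual models with (mean, std) for each parameter.
conceptual_models = {
    "sandy_aquifer":  dict(conc=(2.0, 0.5), flux=(30.0, 5.0), area=(40.0, 8.0)),
    "clay_till_lens": dict(conc=(1.2, 0.4), flux=(10.0, 3.0), area=(60.0, 12.0)),
}

samples = []
for params in conceptual_models.values():          # equal model weights assumed here
    conc = rng.normal(*params["conc"], 2000)
    flux = rng.normal(*params["flux"], 2000)
    area = rng.normal(*params["area"], 2000)
    samples.append(mass_discharge(conc, flux, area))

pooled = np.concatenate(samples)
print(np.percentile(pooled, [5, 50, 95]))           # uncertainty bounds on the estimate
```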

  18. Rainfall Erosivity Database on the European Scale (REDES): A product of a high temporal resolution rainfall data collection in Europe

    Science.gov (United States)

    Panagos, Panos; Ballabio, Cristiano; Borrelli, Pasquale; Meusburger, Katrin; Alewell, Christine

    2016-04-01

    The erosive force of rainfall is expressed as rainfall erosivity. Rainfall erosivity considers the rainfall amount and intensity, and is most commonly expressed as the R-factor in the (R)USLE model. The R-factor is calculated from a series of single storm events by multiplying the total storm kinetic energy with the measured maximum 30-minute rainfall intensity. This estimation requires high temporal resolution (e.g. 30 minutes) rainfall data for sufficiently long time periods (i.e. 20 years), which are not readily available at the European scale. The European Commission's Joint Research Centre (JRC), in collaboration with national/regional meteorological services and environmental institutions, carried out an extensive collection of high-resolution rainfall data in the 28 Member States of the European Union plus Switzerland in order to estimate rainfall erosivity in Europe. This resulted in the Rainfall Erosivity Database on the European Scale (REDES), which included 1,541 rainfall stations in 2014 and has been updated with 134 additional stations in 2015. The interpolation of those point R-factor values with a Gaussian Process Regression (GPR) model has resulted in the first Rainfall Erosivity map of Europe (Science of the Total Environment, 511, 801-815). The intra-annual variability of rainfall erosivity is crucial for modelling soil erosion on a monthly and seasonal basis. The monthly feature of rainfall erosivity was added in 2015 as an advancement of REDES and the respective mean annual R-factor map. Almost 19,000 monthly R-factor values of REDES contributed to the seasonal and monthly assessments of rainfall erosivity in Europe. According to the first results, more than 50% of the total rainfall erosivity in Europe takes place in the period from June to September. The spatial patterns of rainfall erosivity have significant differences between Northern and Southern Europe as summer is the most erosive period in Central and Northern Europe and autumn in the
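
    The R-factor computation described above (total storm kinetic energy times the maximum 30-minute intensity, summed over events and averaged per year) can be written compactly as below; the storm values are fictitious and serve only to show the arithmetic.

```python
def r_factor(events, n_years):
    """events: list of (kinetic_energy_MJ_per_ha, max_30min_intensity_mm_per_h) per storm."""
    erosivity = sum(e * i30 for e, i30 in events)   # MJ mm ha^-1 h^-1 over the record
    return erosivity / n_years                      # mean annual R-factor

# Four fictitious erosive storms recorded within one year.
storms = [(12.4, 35.0), (7.1, 22.5), (20.3, 48.0), (5.0, 15.0)]
print(f"R-factor: {r_factor(storms, n_years=1):.1f} MJ mm ha^-1 h^-1 yr^-1")
```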

  19. Vegetation chlorophyll estimates in the Amazon from multi-angle MODIS observations and canopy reflectance model

    Science.gov (United States)

    Hilker, Thomas; Galvão, Lênio Soares; Aragão, Luiz E. O. C.; de Moura, Yhasmin M.; do Amaral, Cibele H.; Lyapustin, Alexei I.; Wu, Jin; Albert, Loren P.; Ferreira, Marciel José; Anderson, Liana O.; dos Santos, Victor A. H. F.; Prohaska, Neill; Tribuzy, Edgard; Barbosa Ceron, João Vitor; Saleska, Scott R.; Wang, Yujie; de Carvalho Gonçalves, José Francisco; de Oliveira Junior, Raimundo Cosme; Cardoso Rodrigues, João Victor Figueiredo; Garcia, Maquelle Neves

    2017-06-01

    As a preparatory study for future hyperspectral missions that can measure canopy chemistry, we introduce a novel approach to investigate whether multi-angle Moderate Resolution Imaging Spectroradiometer (MODIS) data can be used to generate a preliminary database with long-term estimates of chlorophyll. MODIS monthly chlorophyll estimates between 2000 and 2015, derived from a fully coupled canopy reflectance model (ProSAIL), were inspected for consistency with eddy covariance fluxes, tower-based hyperspectral images and chlorophyll measurements. MODIS chlorophyll estimates from the inverse model showed strong seasonal variations across two flux-tower sites in central and eastern Amazon. Marked increases in chlorophyll concentrations were observed during the early dry season. Remotely sensed chlorophyll concentrations were correlated to field measurements (r2 = 0.73 and r2 = 0.98) but the data deviated from the 1:1 line with root mean square errors (RMSE) ranging from 0.355 μg cm-2 (Tapajós tower) to 0.470 μg cm-2 (Manaus tower). The chlorophyll estimates were consistent with flux tower measurements of photosynthetically active radiation (PAR) and net ecosystem productivity (NEP). We also applied ProSAIL to mono-angle hyperspectral observations from a camera installed on a tower to scale modeled chlorophyll pigments to MODIS observations (r2 = 0.73). Chlorophyll pigment concentrations (ChlA+B) were correlated to changes in the amount of young and mature leaf area per month (0.59 ≤ r2 ≤ 0.64). Increases in MODIS observed ChlA+B were preceded by increased PAR during the dry season (0.61 ≤ r2 ≤ 0.62) and followed by changes in net carbon uptake. We conclude that, at these two sites, changes in LAI, coupled with changes in leaf chlorophyll, are comparable with seasonality of plant productivity. Our results allowed the preliminary development of a 15-year time series of chlorophyll estimates over the Amazon to support canopy chemistry studies using future

  20. Optimizing Temporal Queries

    DEFF Research Database (Denmark)

    Toman, David; Bowman, Ivan Thomas

    2003-01-01

    Recent research in the area of temporal databases has proposed a number of query languages that vary in their expressive power and the semantics they provide to users. These query languages represent a spectrum of solutions to the tension between clean semantics and efficient evaluation. Often, t...

  1. Multi-temporal image co-registration improvement for a better representation and quantification of risky situations: the Belvedere Glacier case study

    Directory of Open Access Journals (Sweden)

    Enrico Borgogno Mondino

    2015-07-01

    Full Text Available Scientific applications dealing with natural hazards make wide use of digital geographical data and change detection techniques. When attention is focused on changes affecting surface geometry, multi-temporal aerial photogrammetry can be an effective tool. In this case, the degree of spatial coherence between measurements at different times is an important issue to deal with. The reliability and accuracy of the measured differences strictly depend on the strategy used during image processing. In this paper, a simultaneous multi-temporal aerial image bundle adjustment approach (MTBA) is compared against two more traditional strategies for aerial stereo-pair adjustment to map surface changes of the Belvedere Glacier (Italian north-western Alps) in the period 2001–2003. Two aerial stereo pairs (of 2001 and 2003) were used to generate the corresponding digital surface models. These were then compared to map glacier shape differences and calculate ablation and accumulation volumes. The results demonstrate that the proposed MTBA approach improves and maximizes the accuracy and reliability of the measured differences, even when the available reference data are of low quality. The final uncertainty for both direct (surface height differences) and derived (volume changes) measurements was quantified and mapped.

  2. Storing XML Documents in Databases

    OpenAIRE

    Schmidt, A.R.; Manegold, Stefan; Kersten, Martin; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    textabstractThe authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems such as declarative query languages, transactions, indexes ...

  3. Nonnegative definite EAP and ODF estimation via a unified multi-shell HARDI reconstruction.

    Science.gov (United States)

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid

    2012-01-01

    In High Angular Resolution Diffusion Imaging (HARDI), the Orientation Distribution Function (ODF) and the Ensemble Average Propagator (EAP) are two important Probability Density Functions (PDFs) which reflect the water diffusion and fiber orientations. Spherical Polar Fourier Imaging (SPFI) is a recent model-free multi-shell HARDI method which estimates both EAP and ODF from diffusion signals with multiple b values. As physical PDFs, ODFs and EAPs are nonnegative definite in their respective domains S2 and R3. However, existing ODF/EAP estimation methods like SPFI seldom consider this natural constraint. Although some works considered the nonnegative constraint on given discrete samples of the ODF/EAP, the estimated ODF/EAP is not guaranteed to be nonnegative definite in the whole continuous domain. The Riemannian framework for ODFs and EAPs has been proposed via the square root parameterization based on ODFs and EAPs pre-estimated by other methods like SPFI. However, there has been no work on how to estimate the square root of the ODF/EAP, called the wavefunction, directly from diffusion signals. In this paper, based on the Riemannian framework for ODFs/EAPs and the Spherical Polar Fourier (SPF) basis representation, we propose a unified model-free multi-shell HARDI method, named Square Root Parameterized Estimation (SRPE), to simultaneously estimate both the wavefunction of the EAPs and the nonnegative definite ODFs and EAPs from diffusion signals. The experiments on synthetic and real data showed that SRPE is more robust to noise and has better EAP reconstruction than SPFI, especially for EAP profiles at large radius.

  4. Data mining in time series databases

    CERN Document Server

    Kandel, Abraham; Bunke, Horst

    2004-01-01

    Adding the time dimension to real-world databases produces Time Series Databases (TSDB) and introduces new aspects and difficulties to data mining and knowledge discovery. This book covers the state-of-the-art methodology for mining time series databases. The novel data mining methods presented in the book include techniques for efficient segmentation, indexing, and classification of noisy and dynamic time series. A graph-based method for anomaly detection in time series is described and the book also studies the implications of a novel and potentially useful representation of time series as strings. The problem of detecting changes in data mining models that are induced from temporal databases is additionally discussed.

  5. Advances in temporal logic

    CERN Document Server

    Fisher, Michael; Gabbay, Dov; Gough, Graham

    2000-01-01

    Time is a fascinating subject that has captured mankind's imagination from ancient times to the present. It has been, and continues to be, studied across a wide range of disciplines, from the natural sciences to philosophy and logic. More than two decades ago, Pnueli in a seminal work showed the value of temporal logic in the specification and verification of computer programs. Today, a strong, vibrant international research community exists within the broader computer science and AI communities. This volume presents a number of articles from leading researchers containing state-of-the-art results in such areas as pure temporal/modal logic, specification and verification, temporal databases, temporal aspects in AI, tense and aspect in natural language, and temporal theorem proving. Earlier versions of some of the articles were given at the most recent International Conference on Temporal Logic, University of Manchester, UK. Readership: Any student of the area - postgraduate, postdoctoral or even research professor ...

  6. Inferring pregnancy episodes and outcomes within a network of observational databases.

    Directory of Open Access Journals (Sweden)

    Amy Matcho

    Full Text Available Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases: Truven MarketScan® Commercial Claims and Encounters (CCAE, Truven MarketScan® Multi-state Medicaid (MDCD, and the Optum ClinFormatics® (Optum database and one non-US database, the United Kingdom (UK based Clinical Practice Research Datalink (CPRD. Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm's Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources and pregnancy episode lengths found were as expected for all outcomes, excepting term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results found highest agreement between reviewer chosen and algorithm operating characteristics for questions assessing pregnancy status and accuracy of outcome category with 99-100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95-100%, while start date agreement within seven days in either direction ranged from 90-97%. In Optum validation sensitivity analysis, a total of 73% of

  7. Inferring pregnancy episodes and outcomes within a network of observational databases

    Science.gov (United States)

    Ryan, Patrick; Fife, Daniel; Gifkins, Dina; Knoll, Chris; Friedman, Andrew

    2018-01-01

    Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases: Truven MarketScan® Commercial Claims and Encounters (CCAE), Truven MarketScan® Multi-state Medicaid (MDCD), and the Optum ClinFormatics® (Optum) database and one non-US database, the United Kingdom (UK) based Clinical Practice Research Datalink (CPRD). Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm’s Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources and pregnancy episode lengths found were as expected for all outcomes, excepting term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results found highest agreement between reviewer chosen and algorithm operating characteristics for questions assessing pregnancy status and accuracy of outcome category with 99–100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95–100%, while start date agreement within seven days in either direction ranged from 90–97%. In Optum validation sensitivity analysis, a total of 73% of

  8. THE INTERNET PRESENTATION OF DATABASES OF GLACIERS OF THE SOUTH OF EASTERN SIBERIA

    Directory of Open Access Journals (Sweden)

    A. D. Kitov

    2017-01-01

    Full Text Available The authors consider the technology for creating databases of glaciers in Southern Siberia and the presentation of these databases on the Internet. The technology consists of the recognition and vectorization of spatial, multi-temporal data using GIS techniques, followed by the formation of databases that reflect the spatial and temporal variation of nival-glacial formations. The results of the GIS design are presented on the website of IG SB RAS and, with the help of the ArcGIS Online Internet service, on a public map. The mapping of the databases shows the dynamics of nival-glacial formations for three time phases: the beginning of the 20th century (where data are available), its middle (based on glacier catalogs and topographic maps), and the beginning of the 21st century (according to satellite images and field research). Graphic objects are represented as point, line, and polygon GIS themes. Point themes indicate parameters such as the center and the lower and upper boundaries of the glacier. Line themes determine the length and perimeter of the glacier. Polygon themes define the contour of the glacier and its area. The attribute table corresponds to the international World Glacier Inventory (WGI) standard. At international portals, the contours of the glaciers of northern Asia are represented only schematically (as ellipses), and the attribute characteristics correspond to the state recorded in the glacier catalogs of the USSR, which makes them inaccurate. The databases considered here are free of these shortcomings. The coordinates of the glacier centers have been refined. Glacier contours have boundaries consistent with satellite images or topographic maps, in shp-file format. New glaciers of the Baikalskiy and Barguzinskiy ridges are also presented; existing catalogs and databases still do not include these glaciers. Features of the glaciers are examined in the context of the latitudinal transect of southern Siberia, from the Kodar ridge to the Eastern Sayan. GIS-analysis of the Databases

  9. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG

    Directory of Open Access Journals (Sweden)

    Isabella Palamara

    2012-07-01

    Full Text Available An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a single block within a multi-scale framework. The basic complexity measure is computed using Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structures on multiple spatial-temporal scales, the proposed technique can also be useful for other types of biomedical signal analysis. In this work, the possibility of distinguishing the brain states of Alzheimer's disease patients and Mild Cognitive Impairment subjects from those of normal healthy elderly subjects is tested on a real, although quite limited, experimental database.
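
    A minimal single-channel permutation entropy, the ordinal building block of the multivariate multi-scale scheme described above. In the full method each channel would additionally be coarse-grained at several scales and the channels combined; the embedding order, delay and toy signal below are illustrative assumptions.

```python
import math
from itertools import permutations

import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window).tolist())] += 1   # ordinal pattern of the window
    probs = [c / n for c in counts.values() if c > 0]
    return -sum(p * math.log(p) for p in probs) / math.log(math.factorial(order))

# Toy "EEG" channel: a sine wave with additive noise.
eeg = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.5 * np.random.randn(1000)
print(f"normalized permutation entropy: {permutation_entropy(eeg):.3f}")
```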

  10. Kalman-Filter-Based State Estimation for System Information Exchange in a Multi-bus Islanded Microgrid

    DEFF Research Database (Denmark)

    Wang, Yanbo; Tian, Yanjun; Wang, Xiongfei

    2014-01-01

    State monitoring and analysis of distribution systems has become an urgent issue, and state estimation serves as an important tool to deal with it. In this paper, a Kalman-Filter-based state estimation method for a multi-bus islanded microgrid is presented. First, an overall small signal model wi...

  11. Automatic structural parcellation of mouse brain MRI using multi-atlas label fusion.

    Directory of Open Access Journals (Sweden)

    Da Ma

    Full Text Available Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework.
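
    For orientation, the sketch below shows the simplest form of multi-atlas label fusion, a per-voxel majority vote in Python. The STEPS/STAPLE framework used in the record above additionally weights atlases by local image similarity and estimated rater performance, which this toy example does not attempt.

      # Simplified stand-in for multi-label fusion: per-voxel majority vote over
      # propagated atlas label volumes of identical shape.
      import numpy as np

      def majority_vote_fusion(label_maps):
          """Fuse a list of integer label volumes (same shape) by per-voxel majority vote."""
          stack = np.stack(label_maps, axis=0)              # (n_atlases, *volume_shape)
          n_labels = int(stack.max()) + 1
          # count votes per label, then take the winning label at every voxel
          votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)], axis=0)
          return votes.argmax(axis=0)

      atlases = [np.random.randint(0, 4, size=(8, 8, 8)) for _ in range(5)]
      fused = majority_vote_fusion(atlases)
      print(fused.shape)  # (8, 8, 8)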

  12. An Algebraic Framework for Temporal Attribute Characteristics

    DEFF Research Database (Denmark)

    Böhlen, M. H.; Gamper, J.; Jensen, Christian Søndergaard

    2006-01-01

    Most real-world database applications manage temporal data, i.e., data with associated time references that capture a temporal aspect of the data, typically either when the data is valid or when the data is known. Such applications abound in, e.g., the financial, medical, and scientific domains...

  13. Estimating the temporal distribution of exposure-related cancers

    International Nuclear Information System (INIS)

    Carter, R.L.; Sposto, R.; Preston, D.L.

    1993-09-01

    The temporal distribution of exposure-related cancers is relevant to the study of carcinogenic mechanisms. Statistical methods for extracting pertinent information from time-to-tumor data, however, are not well developed. Separation of incidence from 'latency' and the contamination of background cases are two problems. In this paper, we present methods for estimating both the conditional distribution given exposure-related cancers observed during the study period and the unconditional distribution. The methods adjust for confounding influences of background cases and the relationship between time to tumor and incidence. Two alternative methods are proposed. The first is based on a structured, theoretically derived model and produces direct inferences concerning the distribution of interest but often requires more-specialized software. The second relies on conventional modeling of incidence and is implemented through readily available, easily used computer software. Inferences concerning the effects of radiation dose and other covariates, however, are not always obtainable directly. We present three examples to illustrate the use of these two methods and suggest criteria for choosing between them. The first approach was used, with a log-logistic specification of the distribution of interest, to analyze times to bone sarcoma among a group of German patients injected with 224 Ra. Similarly, a log-logistic specification was used in the analysis of time to chronic myelogenous leukemias among male atomic-bomb survivors. We used the alternative approach, involving conventional modeling, to estimate the conditional distribution of exposure-related acute myelogenous leukemias among male atomic-bomb survivors, given occurrence between 1 October 1950 and 31 December 1985. All analyses were performed using Poisson regression methods for analyzing grouped survival data. (J.P.N.)
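
    For readers unfamiliar with the log-logistic specification mentioned above, the LaTeX fragment below writes out its density and distribution function and the conditional density given occurrence within a study period (0, T]; the notation is illustrative and not taken from the paper.

      % Log-logistic time-to-tumor density (shape \beta, scale \alpha), its CDF, and the
      % conditional density given occurrence within the study period (0, T].
      \[
        f(t) = \frac{(\beta/\alpha)\,(t/\alpha)^{\beta-1}}{\bigl(1+(t/\alpha)^{\beta}\bigr)^{2}},
        \qquad
        F(t) = \frac{(t/\alpha)^{\beta}}{1+(t/\alpha)^{\beta}},
        \qquad
        f\bigl(t \mid 0 < T_{\mathrm{tumor}} \le T\bigr) = \frac{f(t)}{F(T)}, \quad 0 < t \le T.
      \]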

  14. Landslide Change Detection Based on Multi-Temporal Airborne LiDAR-Derived DEMs

    Directory of Open Access Journals (Sweden)

    Omar E. Mora

    2018-01-01

    Full Text Available Remote sensing technologies have seen extraordinary improvements in both spatial resolution and accuracy recently. In particular, airborne laser scanning systems can now provide data for surface modeling with unprecedented resolution and accuracy, which can effectively support the detection of sub-meter surface features, vital for landslide mapping. Also, the easy repeatability of data acquisition offers the opportunity to monitor temporal surface changes, which are essential to identifying developing or active slides. Specific methods are needed to detect and map surface changes due to landslide activities. In this paper, we present a methodology that is based on fusing probabilistic change detection and landslide surface feature extraction utilizing multi-temporal Light Detection and Ranging (LiDAR derived Digital Elevation Models (DEMs to map surface changes demonstrating landslide activity. The proposed method was tested in an area with numerous slides ranging from 200 m2 to 27,000 m2 in area under low vegetation and tree cover, Zanesville, Ohio, USA. The surface changes observed are probabilistically evaluated to determine the likelihood of the changes being landslide activity related. Next, based on surface features, a Support Vector Machine (SVM quantifies and maps the topographic signatures of landslides in the entire area. Finally, these two processes are fused to detect landslide prone changes. The results demonstrate that 53 out of 80 inventory mapped landslides were identified using this method. Additionally, some areas that were not mapped in the inventory map displayed changes that are likely to be developing landslides.
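
    The probabilistic evaluation of DEM differences described above can be illustrated with a simple thresholding sketch in Python: cells are flagged when the elevation change exceeds what the propagated vertical uncertainty of the two DEMs can explain. The error values, confidence level and synthetic DEMs are assumptions for illustration, not the study's settings.

      import numpy as np
      from scipy.stats import norm

      def significant_change(dem_t1, dem_t2, sigma1=0.15, sigma2=0.15, confidence=0.95):
          """Return (dod, mask) where mask marks cells with significant elevation change."""
          dod = dem_t2 - dem_t1                          # DEM of difference (m)
          sigma_dod = np.hypot(sigma1, sigma2)           # propagated vertical error (m)
          z = norm.ppf(0.5 + confidence / 2)             # two-sided critical value
          return dod, np.abs(dod) > z * sigma_dod

      rng = np.random.default_rng(0)
      dem1 = rng.normal(300.0, 5.0, size=(100, 100))
      dem2 = dem1 + rng.normal(0.0, 0.1, size=(100, 100))
      dem2[40:60, 40:60] -= 1.5                          # simulated slide scarp
      dod, mask = significant_change(dem1, dem2)
      print(mask.sum())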

  15. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    Science.gov (United States)

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1–20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  16. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying

    2015-09-01

    Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.
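
    As an illustration of the kind of objective described above, the LaTeX fragment below combines the quantile check loss with an adaptive fused penalty on neighbouring quantiles; the notation is mine and the exact form used by the authors may differ.

      % Check (pinball) loss at level \tau with an adaptive fused penalty on
      % neighbouring quantiles q_1, ..., q_n ordered in time or space.
      \[
        \min_{q_1,\dots,q_n}\;
        \sum_{i=1}^{n} \rho_\tau\bigl(y_i - q_i\bigr)
        + \lambda \sum_{i=2}^{n} w_i \,\lvert q_i - q_{i-1}\rvert,
        \qquad
        \rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr).
      \]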

  17. Methodology for heritage conservation in Belgium based on multi-temporal interferometry

    Science.gov (United States)

    Bejarano-Urrego, L.; Verstrynge, E.; Shimoni, M.; Lopez, J.; Walstra, J.; Declercq, P.-Y.; Derauw, D.; Hayen, R.; Van Balen, K.

    2017-09-01

    Soil differential settlements that cause structural damage to heritage buildings are precipitating cultural and economic value losses. Adequate damage assessment as well as protection and preservation of the built patrimony are priorities at national and local levels, so they require advanced integration and analysis of environmental, architectural and historical parameters. The GEPATAR project (GEotechnical and Patrimonial Archives Toolbox for ARchitectural conservation in Belgium) aims to create an online interactive geo-information tool that allows the user to view and to be informed about the Belgian heritage buildings at risk due to differential soil settlements. Multi-temporal interferometry techniques (MTI) have been proven to be a powerful technique for analyzing earth surface deformation patterns through time series of Synthetic Aperture Radar (SAR) images. These techniques allow to measure ground movements over wide areas at high precision and relatively low cost. In this project, Persistent Scatterer Synthetic Aperture Radar Interferometry (PS-InSAR) and Multidimensional Small Baseline Subsets (MSBAS) are used to measure and monitor the temporal evolution of surface deformations across Belgium. This information is integrated with the Belgian heritage data by means of an interactive toolbox in a GIS environment in order to identify the level of risk. At country scale, the toolbox includes ground deformation hazard maps, geological information, location of patrimony buildings and land use; while at local scale, it includes settlement rates, photographic and historical surveys as well as architectural and geotechnical information. Some case studies are investigated by means of on-site monitoring techniques and stability analysis to evaluate the applied approaches. This paper presents a description of the methodology being implemented in the project together with the case study of the Saint Vincent's church which is located on a former colliery zone. For

  18. Validity of 20-metre multi stage shuttle run test for estimation of ...

    African Journals Online (AJOL)

    Validity of the 20-metre multi-stage shuttle run test for estimation of maximum oxygen uptake in Indian male university students. P Chatterjee, AK Banerjee, P Debnath, P Bas, B Chatterjee. Abstract: No abstract. South African Journal for Physical, Health Education, Recreation and Dance, Vol. 12(4) 2006: pp. 461-467.

  19. Distributed capillary adiabatic tissue homogeneity model in parametric multi-channel blind AIF estimation using DCE-MRI.

    Science.gov (United States)

    Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn

    2016-03-01

    One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.

  20. Upper estimates of complexity of algorithms for multi-peg Tower of Hanoi problem

    Directory of Open Access Journals (Sweden)

    Sergey Novikov

    2007-06-01

    Full Text Available Explicit upper estimates are proved for the complexity of algorithms for the multi-peg Tower of Hanoi problem with a limited number of disks, for Reve's puzzle, and for the 5-peg Tower of Hanoi problem with an arbitrary number of disks.
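
    The standard upper bound for the multi-peg Tower of Hanoi comes from the Frame-Stewart recursion; the Python sketch below computes that bound and is offered only as context, not as the explicit estimates derived in the paper.

      # Frame-Stewart recursion: the classical upper bound on the number of moves
      # for the Tower of Hanoi with n disks and p >= 3 pegs.
      from functools import lru_cache

      @lru_cache(maxsize=None)
      def frame_stewart(n: int, p: int) -> int:
          if n == 0:
              return 0
          if n == 1:
              return 1
          if p == 3:
              return 2 ** n - 1
          # move k disks aside using all p pegs, solve the rest with p-1 pegs, move k back
          return min(2 * frame_stewart(k, p) + frame_stewart(n - k, p - 1)
                     for k in range(1, n))

      print(frame_stewart(10, 4))  # 49 moves for the 4-peg (Reve's puzzle) case with 10 disks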

  1. HYBRID APPROACHES TO THE FORMALISATION OF EXPERT KNOWLEDGE CONCERNING TEMPORAL REGULARITIES IN THE TIME SERIES GROUP OF A SYSTEM MONITORING DATABASE

    Directory of Open Access Journals (Sweden)

    E. S. Staricov

    2016-01-01

    Full Text Available Objectives. The research problem concerns the description of regularities in an unspecified time series, based on an approach to the expert formalisation of knowledge that is integrated into a decision-making mechanism. Method. A context-free grammar, a modification of a universal temporal grammar, is used to describe the regularities. Using the rules of the developed grammar, an expert can describe patterns in a group of time series. A multi-dimensional matrix pattern of the behaviour of a group of time series is used in the real-time decision-making regime of the expert system, implementing a universal approach to describing the dynamics of these changes. The multi-dimensional matrix pattern is specifically intended for decision-making in an expert system; the modified temporal grammar is used to identify patterns in the data. Results. It is proposed to use the temporal relations of the series and to fix observation values in the time interval as "From-To", "Before", "After", "Simultaneously" and "Duration". A syntactically oriented converter of descriptions is developed, and a schema for the creation and application of matrix patterns in expert systems is drawn up. Conclusion. The advantage of the proposed hybrid approaches consists in a reduction of the time taken to identify temporal patterns and in the automation of the matrix pattern of the decision-making system, based on expert descriptions verified against live data derived from relationships in the monitoring data.

  2. Automatic sleep spindle detection: Benchmarking with fine temporal resolution using open science tools

    Directory of Open Access Journals (Sweden)

    Christian O'Reilly

    2015-06-01

    Full Text Available Sleep spindle properties index cognitive faculties such as memory consolidation and diseases such as major depression. For this reason, scoring sleep spindle properties in polysomnographic recordings has become an important activity in both research and clinical settings. The tediousness of this manual task has motivated efforts for its automation. Although some progress has been made, increasing the temporal accuracy of spindle scoring and improving the performance assessment methodology are two aspects needing more attention. In this paper, four open-access automated spindle detectors with fine temporal resolution are proposed and tested against expert scoring of two proprietary and two open-access databases. Results highlight several findings: (1) expert scoring and polysomnographic databases are important confounders when comparing the performance of spindle detectors tested using different databases or scorings; (2) because spindles are sparse events, specificity estimates are potentially misleading for assessing automated detector performance; (3) reporting the performance of spindle detectors exclusively with sensitivity and specificity estimates, as is often seen in the literature, is insufficient; including sensitivity, precision and a more comprehensive statistic such as the Matthews correlation coefficient, F1-score, or Cohen’s κ is necessary for adequate evaluation; (4) reporting statistics for some reasonable range of decision thresholds provides a much more complete and useful benchmarking; (5) performance differences between tested automated detectors were found to be similar to those between available expert scorings; (6) much more development is needed to effectively compare the performance of spindle detectors developed by different research teams. Finally, this work clarifies a long-standing but only seldom posed question regarding whether expert scoring truly is a reliable gold standard for sleep spindle assessment.
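
    The event-level statistics recommended above are straightforward to compute from true/false positive and negative counts; a small Python sketch with made-up counts follows.

      import math

      def detector_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
          """Sensitivity, precision, specificity, F1 and MCC from confusion counts."""
          sensitivity = tp / (tp + fn)
          precision = tp / (tp + fp)
          specificity = tn / (tn + fp)
          f1 = 2 * precision * sensitivity / (precision + sensitivity)
          mcc_denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
          mcc = (tp * tn - fp * fn) / mcc_denom if mcc_denom else 0.0
          return {"sensitivity": sensitivity, "precision": precision,
                  "specificity": specificity, "F1": f1, "MCC": mcc}

      print(detector_metrics(tp=80, fp=20, tn=900, fn=30))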

  3. Estimation of the Alpha Factor Parameters Using the ICDE Database

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Hwang, M. J.; Han, S. H

    2007-04-15

    Detailed common cause failure (CCF) analysis generally needs data on CCF events from other nuclear power plants because CCF events rarely occur. KAERI has participated in the international common cause failure data exchange (ICDE) project to obtain data on CCF events. The operating office of the ICDE project sent the CCF event data for EDGs to KAERI in December 2006. As a pilot study, we performed a detailed CCF analysis of EDGs for Yonggwang Units 3 and 4 and Ulchin Units 3 and 4 using the ICDE database. There are two onsite EDGs for each NPP. When offsite power and the two onsite EDGs are not available, one alternate AC (AAC) diesel generator (hereafter AAC) is provided. The two onsite EDGs and the AAC are manufactured by the same company, but they are designed differently. We estimated the alpha factors and the CCF probability for the cases where the three EDGs were assumed to be identically designed and for those where they were assumed not to be identically designed. For the cases where the three EDGs were assumed to be identically designed, the double CCF probabilities of Yonggwang Units 3/4 and Ulchin Units 3/4 for 'fails to start' were estimated as 2.20E-4 and 2.10E-4, respectively. The triple CCF probabilities were estimated as 2.39E-4 and 2.42E-4, respectively. As neither NPP has experience of 'fails to run' events, Yonggwang Units 3/4 and Ulchin Units 3/4 have the same CCF probability. The estimated double and triple CCF probabilities for 'fails to run' are 4.21E-4 and 4.61E-4, respectively. Quantification results show that the system unavailability for the cases where the three EDGs are identical is higher than that where the three EDGs are different; the estimated system unavailability of the former case increased by 3.4% compared with that of the latter. As a future study, a computerization of the estimation of the CCF parameters will be performed.
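
    As background, basic alpha-factor point estimates are simply the fractions of failure events involving exactly k components; the Python sketch below uses made-up counts and omits the impact-vector weighting and uncertainty treatment used in detailed CCF analyses.

      # Sketch of basic alpha-factor point estimation from event counts:
      # alpha_k is the fraction of failure events that involve exactly k components.
      # The counts below are illustrative, not the EDG data analysed in the report.
      def alpha_factors(event_counts):
          """event_counts[k] = number of events involving exactly k+1 components."""
          total = sum(event_counts)
          return [n_k / total for n_k in event_counts]

      counts = [36, 3, 1]              # events involving 1, 2 and 3 components
      for k, a in enumerate(alpha_factors(counts), start=1):
          print(f"alpha_{k} = {a:.3f}")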

  4. Dry season biomass estimation as an indicator of rangeland quantity using multi-scale remote sensing data

    CSIR Research Space (South Africa)

    Ramoelo, Abel

    2014-10-01

    Full Text Available vegetation is green and photosynthetically active. During the dry season, biomass estimation using vegetation indices is generally not feasible. The aim of this study is to estimate dry-season biomass using multi-scale remote sensing data in the savanna ecosystem. Field...

  5. A Hybrid Approach Combining the Multi-Temporal Scale Spatio-Temporal Network with the Continuous Triangular Model for Exploring Dynamic Interactions in Movement Data: A Case Study of Football

    Directory of Open Access Journals (Sweden)

    Pengdong Zhang

    2018-01-01

    Full Text Available Benefiting from recent advances in location-aware technologies, movement data are becoming ubiquitous. Hence, numerous research topics with respect to movement data have been undertaken. Yet, research on dynamic interactions in movement data is still in its infancy. In this paper, we propose a hybrid approach combining the multi-temporal scale spatio-temporal network (MTSSTN) and the continuous triangular model (CTM) for exploring dynamic interactions in movement data. The approach mainly includes four steps: first, the relative trajectory calculus (RTC) is used to derive three types of interaction patterns; second, for each interaction pattern, a corresponding MTSSTN is generated; third, for each MTSSTN, the interaction intensity measures and three centrality measures (i.e., degree, betweenness and closeness) are calculated; finally, the results are visualized at multiple temporal scales using the CTM and analyzed based on the generated CTM diagrams. Based on the proposed approach, three distinctive aims can be achieved for each interaction pattern at multiple temporal scales: (1) exploring the interaction intensities between any two individuals; (2) exploring the interaction intensities among multiple individuals; and (3) exploring the importance of each individual and identifying the most important individuals. The movement data obtained from a real football match are used as a case study to validate the effectiveness of the proposed approach. The results demonstrate that the proposed approach is useful in exploring dynamic interactions in football movement data and discovering insightful information.
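
    The three centrality measures named above are readily computed once the interaction intensities are arranged as a weighted graph; the Python sketch below uses networkx with made-up intensities, mapping strong interaction to short distance for the path-based measures.

      import networkx as nx

      # Hypothetical pairwise interaction intensities between four players.
      intensities = {("p1", "p2"): 0.8, ("p1", "p3"): 0.3, ("p2", "p3"): 0.5, ("p3", "p4"): 0.9}

      G = nx.Graph()
      for (a, b), w in intensities.items():
          G.add_edge(a, b, weight=w, distance=1.0 / w)   # strong interaction = short distance

      degree = nx.degree_centrality(G)
      betweenness = nx.betweenness_centrality(G, weight="distance")
      closeness = nx.closeness_centrality(G, distance="distance")
      print(degree, betweenness, closeness, sep="\n")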

  6. Multi-temporal AirSWOT elevations on the Willamette river: error characterization and algorithm testing

    Science.gov (United States)

    Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.

    2017-12-01

    We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in Western Oregon, which included six days of AirSWOT flights over a 75km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacies of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially-averaged time-series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested in a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90m).

  7. Proper orthogonal decomposition-based estimations of the flow field from particle image velocimetry wall-gradient measurements in the backward-facing step flow

    International Nuclear Information System (INIS)

    Nguyen, Thien Duy; Wells, John Craig; Mokhasi, Paritosh; Rempfer, Dietmar

    2010-01-01

    In this paper, particle image velocimetry (PIV) results from the recirculation zone of a backward-facing step flow, of which the Reynolds number is 2800 based on bulk velocity upstream of the step and step height (h = 16.5 mm), are used to demonstrate the capability of proper orthogonal decomposition (POD)-based measurement models. Three-component PIV velocity fields are decomposed by POD into a set of spatial basis functions and a set of temporal coefficients. The measurement models are built to relate the low-order POD coefficients, determined from an ensemble of 1050 PIV fields by the 'snapshot' method, to the time-resolved wall gradients, measured by a near-wall measurement technique called stereo interfacial PIV. These models are evaluated in terms of reconstruction and prediction of the low-order temporal POD coefficients of the velocity fields. In order to determine the estimation coefficients of the measurement models, linear stochastic estimation (LSE), quadratic stochastic estimation (QSE), principal component regression (PCR) and kernel ridge regression (KRR) are applied. We denote such approaches as LSE-POD, QSE-POD, PCR-POD and KRR-POD. In addition to comparing the accuracy of measurement models, we introduce multi-time POD-based estimations in which past and future information of the wall-gradient events is used separately or combined. The results show that the multi-time estimation approaches can improve the prediction process. Among these approaches, the proposed multi-time KRR-POD estimation with an optimized window of past wall-gradient information yields the best prediction. Such a multi-time KRR-POD approach offers a useful tool for real-time flow estimation of the velocity field based on wall-gradient data
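
    The snapshot-POD and linear-estimation steps described above can be sketched compactly in Python with synthetic data standing in for the PIV snapshots and wall-gradient signals; the quadratic and kernel variants (QSE, PCR, KRR) and the multi-time windows are not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)
      n_snapshots, n_points, n_wall = 1050, 500, 12
      U_fields = rng.normal(size=(n_snapshots, n_points))    # velocity snapshots (rows)
      wall_grad = rng.normal(size=(n_snapshots, n_wall))     # wall-gradient measurements

      # POD by SVD of the mean-subtracted snapshot matrix
      fluct = U_fields - U_fields.mean(axis=0)
      _, _, Vt = np.linalg.svd(fluct, full_matrices=False)
      n_modes = 5
      modes = Vt[:n_modes]                                   # spatial modes
      coeffs = fluct @ modes.T                               # temporal POD coefficients

      # Linear (LSE-like) estimation: least-squares map from wall gradients to coefficients
      A, *_ = np.linalg.lstsq(wall_grad, coeffs, rcond=None)
      coeffs_est = wall_grad @ A
      print(coeffs_est.shape)                                # (1050, 5)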

  8. Advances in Understanding Air Pollution and Cardiovascular Diseases: The Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air)

    Science.gov (United States)

    Kaufman, Joel D.; Spalt, Elizabeth W.; Curl, Cynthia L.; Hajat, Anjum; Jones, Miranda R.; Kim, Sun-Young; Vedal, Sverre; Szpiro, Adam A.; Gassett, Amanda; Sheppard, Lianne; Daviglus, Martha L.; Adar, Sara D.

    2016-01-01

    The Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) leveraged the platform of the MESA cohort into a prospective longitudinal study of relationships between air pollution and cardiovascular health. MESA Air researchers developed fine-scale, state-of-the-art air pollution exposure models for the MESA Air communities, creating individual exposure estimates for each participant. These models combine cohort-specific exposure monitoring, existing monitoring systems, and an extensive database of geographic and meteorological information. Together with extensive phenotyping in MESA—and adding participants and health measurements to the cohort—MESA Air investigated environmental exposures on a wide range of outcomes. Advances by the MESA Air team included not only a new approach to exposure modeling but also biostatistical advances in addressing exposure measurement error and temporal confounding. The MESA Air study advanced our understanding of the impact of air pollutants on cardiovascular disease and provided a research platform for advances in environmental epidemiology. PMID:27741981

  9. Gridding artifacts on ENVISAT/MERIS temporal series

    NARCIS (Netherlands)

    Gómez Chova, L.; Zurita-Milla, R.; Alonso, L.; Guanter, L.; Amoros-Lopez, J.; Camps-Valls, G.; Moreno, J.; Lacoste-Francis, H.

    2010-01-01

    Earth observation satellites are a valuable source of data that can be used to better understand the Earth system dynamics. However, analysis of satellite image time series requires an accurate spatial co-registration so that the multi-temporal pixel entities offer a true temporal view of the study

  10. Constructing an XML database of linguistics data

    Directory of Open Access Journals (Sweden)

    J H Kroeze

    2010-04-01

    Full Text Available A language-oriented, multi-dimensional database of the linguistic characteristics of the Hebrew text of the Old Testament can enable researchers to do ad hoc queries. XML is a suitable technology to transform free text into a database. A clause’s word order can be kept intact while other features such as syntactic and semantic functions can be marked as elements or attributes. The elements or attributes from the XML “database” can be accessed and processed by a 4th generation programming language, such as Visual Basic. XML is explored as an option to build an exploitable database of linguistic data by representing inherently multi-dimensional data, including syntactic and semantic analyses of free text.
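
    A minimal sketch of the idea follows, using Python's ElementTree instead of the Visual Basic mentioned above: clause word order is kept as element order while syntactic and semantic functions are stored as attributes. The tags, attribute names and the sample analysis are illustrative assumptions.

      import xml.etree.ElementTree as ET

      clause_xml = """
      <clause ref="Gen 1:1a">
        <word order="1" syntax="adjunct" semantics="time">bereshit</word>
        <word order="2" syntax="predicate" semantics="action">bara</word>
        <word order="3" syntax="subject" semantics="agent">elohim</word>
      </clause>
      """

      clause = ET.fromstring(clause_xml)
      # query the marked-up clause, e.g. find the subject and its semantic function
      for word in clause.findall("word[@syntax='subject']"):
          print(word.text, word.get("semantics"))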

  11. Satellite rainfall monitoring over Africa for food security, using multi-channel MSG data

    Science.gov (United States)

    Chadwick, R.; Grimes, D.; Saunders, R.; Blackmore, T.; Francis, P.

    2009-04-01

    Near real-time rainfall estimates are crucial in sub-Saharan Africa for a variety of humanitarian and agricultural purposes. However, for economic and infrastructural reasons, regularly reporting rain-gauges are sparse and precipitation radar networks extremely rare. Satellite rainfall estimates, particularly from geostationary satellites such as Meteosat Second Generation (MSG), present one method of filling this information gap, as they produce data at high temporal and spatial resolution. An algorithm has been developed to produce rainfall estimates for Africa from multi-channel MSG data. The algorithm is calibrated using precipitation radar data collected in Niamey, Niger as part of the African Monsoon Multidisciplinary Analyses (AMMA) project in 2006, and is based on an algorithm used operationally over Europe by the UK Met Office. Contingency tables are used to establish a statistical relationship between multi-channel MSG data and probability of rainfall at several different rain-rate magnitudes as sensed by the radar. Rain-rate estimates can then be produced at a variety of spatial and temporal scales, with MSG scan length (15 minutes) and pixel size (3-4km) as the lower limit. Results will be presented of a validation of this algorithm over the Sahel region of Africa. Rainfall estimates from this algorithm, processed for 2004, will be validated against gridded rain-gauge data at a 0.5 degree and 10 day timescale suitable for drought monitoring purposes. A comparison will also be made against rainfall estimates from the TAMSAT algorithm, which uses single channel IR data from MSG, and has been shown to perform well in the Sahel region.

  12. GEMMER: GEnome-wide tool for Multi-scale Modeling data Extraction and Representation for Saccharomyces cerevisiae.

    Science.gov (United States)

    Mondeel, Thierry D G A; Crémazy, Frédéric; Barberis, Matteo

    2018-02-01

    Multi-scale modeling of biological systems requires integration of various information about genes and proteins that are connected together in networks. Spatial, temporal and functional information is available; however, it is still a challenge to retrieve and explore this knowledge in an integrated, quick and user-friendly manner. We present GEMMER (GEnome-wide tool for Multi-scale Modelling data Extraction and Representation), a web-based data-integration tool that facilitates high quality visualization of physical, regulatory and genetic interactions between proteins/genes in Saccharomyces cerevisiae. GEMMER creates network visualizations that integrate information on function, temporal expression, localization and abundance from various existing databases. GEMMER supports modeling efforts by effortlessly gathering this information and providing convenient export options for images and their underlying data. GEMMER is freely available at http://gemmer.barberislab.com. Source code, written in Python, JavaScript library D3js, PHP and JSON, is freely available at https://github.com/barberislab/GEMMER. M.Barberis@uva.nl. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.

  13. SUPPORT VECTOR MACHINE CLASSIFICATION OF OBJECT-BASED DATA FOR CROP MAPPING, USING MULTI-TEMPORAL LANDSAT IMAGERY

    Directory of Open Access Journals (Sweden)

    R. Devadas

    2012-07-01

    Full Text Available Crop mapping and time series analysis of agronomic cycles are critical for monitoring land use and land management practices, and analysing the issues of agro-environmental impacts and climate change. Multi-temporal Landsat data can be used to analyse decadal changes in cropping patterns at field level, owing to its medium spatial resolution and historical availability. This study attempts to develop robust remote sensing techniques, applicable across a large geographic extent, for state-wide mapping of cropping history in Queensland, Australia. In this context, traditional pixel-based classification was analysed in comparison with image object-based classification using advanced supervised machine-learning algorithms such as the Support Vector Machine (SVM). For the Darling Downs region of southern Queensland we gathered a set of Landsat TM images from the 2010–2011 cropping season. Landsat data, along with the vegetation index images, were subjected to multiresolution segmentation to obtain polygon objects. Object-based methods enabled the analysis of aggregated sets of pixels, and exploited shape-related and textural variation, as well as spectral characteristics. SVM models were chosen after examining three shape-based parameters, twenty-three textural parameters and ten spectral parameters of the objects. We found that the object-based methods were superior to the pixel-based methods for classifying 4 major land use/land cover classes, considering the complexities of within-field spectral heterogeneity and spectral mixing. Comparative analysis clearly revealed that higher overall classification accuracy (95%) was observed in the object-based SVM compared with that of traditional pixel-based classification (89%) using the maximum likelihood classifier (MLC). Object-based classification also resulted in speckle-free images. Further, object-based SVM models were used to classify different broadacre crop types for summer and winter seasons. The influence of

  14. Surface deformation monitoring of Sinabung volcano using multi temporal InSAR method and GIS analysis for affected area assessment

    Science.gov (United States)

    Aditiya, A.; Aoki, Y.; Anugrah, R. D.

    2018-04-01

    Sinabung Volcano, located in the northern part of Sumatera island, is one of about a hundred active volcanoes in Indonesia. Surface deformation has been detected over Sinabung Volcano and the surrounding area since the first eruption in 2010, after a 400-year-long rest. We apply a multi-temporal Interferometric Synthetic Aperture Radar (InSAR) time-series method to ALOS-2 L-band SAR data acquired from December 2014 to July 2017 to reveal surface deformation with high spatial resolution. The method includes focusing the SAR data, generating interferograms and phase unwrapping using the SNAPHU tools. The results reveal significant deformation over the Sinabung Volcano area, at rates of up to 10 cm over the observation period, with the highest deformation occurring in the western part, which lies along the lava trajectory. We conclude that the observed deformation is primarily caused by volcanic activity following the long period of rest. In addition, Geographic Information System (GIS) analysis produces maps of the areas affected by the Sinabung eruption. GIS is a reliable technique for estimating the impact of a hazard scenario on the exposure data and for developing scenarios of disaster impacts to inform contingency and emergency plans. The GIS results include the estimated affected area divided into three zones based on pyroclastic lava flow and pyroclastic fall (incandescent rock and ash). The highest impact occurred in zone II because many settlements are scattered in this zone. This information will support stakeholders in emergency preparation for disaster reduction. The continuation of this high rate of decline tends to endanger the population in coming periods.

  15. Advancements in web-database applications for rabies surveillance

    Directory of Open Access Journals (Sweden)

    Bélanger Denise

    2011-08-01

    Full Text Available Abstract Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB over earlier rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from

  16. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    Purpose: A previously proposed method to reduce radiation dose to patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. The kinetic analyses using the modified Tofts’ model and singular value decomposition method, then, were carried out for each of the down-sampling schemes between the intervals from 2 to 15 s. The results were compared with analyses done with the measured data in high temporal resolution (i.e., original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the
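
    The PCA filtering step described above amounts to projecting the noisy tissue curves onto their leading principal components and reconstructing; the Python sketch below keeps five components, as in the study, but uses synthetic curves and omits the fraction-of-residual-information criterion.

      import numpy as np

      def pca_filter(curves, n_components=5):
          """curves: (n_curves, n_timepoints) array; returns the denoised curves."""
          mean = curves.mean(axis=0)
          centered = curves - mean
          _, _, Vt = np.linalg.svd(centered, full_matrices=False)
          basis = Vt[:n_components]                  # principal temporal components
          return centered @ basis.T @ basis + mean   # project and reconstruct

      rng = np.random.default_rng(2)
      t = np.linspace(0, 120, 60)
      clean = np.exp(-t / 40.0) * (1 - np.exp(-t / 8.0))     # toy enhancement curve
      curves = clean + rng.normal(0.0, 0.05, size=(9, t.size))
      print(pca_filter(curves).shape)                         # (9, 60)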

  17. Rapid Damage Assessment by Means of Multi-Temporal SAR — A Comprehensive Review and Outlook to Sentinel-1

    Directory of Open Access Journals (Sweden)

    Simon Plank

    2014-05-01

    Full Text Available Fast crisis response after natural disasters, such as earthquakes and tropical storms, is necessary to support, for instance, rescue, humanitarian, and reconstruction operations in the crisis area. Therefore, rapid damage mapping after a disaster is crucial, i.e., to detect the affected area, including grade and type of damage. Thereby, satellite remote sensing plays a key role due to its fast response, wide field of view, and low cost. With the increasing availability of remote sensing data, numerous methods have been developed for damage assessment. This article gives a comprehensive review of these techniques focusing on multi-temporal SAR procedures for rapid damage assessment: interferometric coherence and intensity correlation. The review is divided into six parts: First, methods based on coherence; second, the ones using intensity correlation; and third, techniques using both methodologies combined to increase the accuracy of the damage assessment are reviewed. Next, studies using additional data (e.g., GIS and optical imagery to support the damage assessment and increase its accuracy are reported. Moreover, selected studies on post-event SAR damage assessment techniques and examples of other applications of the interferometric coherence are presented. Then, the preconditions for a successful worldwide application of multi-temporal SAR methods for damage assessment and the limitations of current SAR satellite missions are reported. Finally, an outlook to the Sentinel-1 SAR mission shows possible solutions of these limitations, enabling a worldwide applicability of the presented damage assessment methods.

  18. A 3D convolutional neural network approach to land cover classification using LiDAR and multi-temporal Landsat imagery

    Science.gov (United States)

    Xu, Z.; Guan, K.; Peng, B.; Casler, N. P.; Wang, S. W.

    2017-12-01

    Landscape has complex three-dimensional features. These 3D features are difficult to extract using conventional methods. Small-footprint LiDAR provides an ideal way for capturing these features. Existing approaches, however, have been relegated to raster or metric-based (two-dimensional) feature extraction from the upper or bottom layer, and thus are not suitable for resolving morphological and intensity features that could be important to fine-scale land cover mapping. Therefore, this research combines airborne LiDAR and multi-temporal Landsat imagery to classify land cover types of Williamson County, Illinois that has diverse and mixed landscape features. Specifically, we applied a 3D convolutional neural network (CNN) method to extract features from LiDAR point clouds by (1) creating occupancy grid, intensity grid at 1-meter resolution, and then (2) normalizing and incorporating data into a 3D CNN feature extractor for many epochs of learning. The learned features (e.g., morphological features, intensity features, etc) were combined with multi-temporal spectral data to enhance the performance of land cover classification based on a Support Vector Machine classifier. We used photo interpretation for training and testing data generation. The classification results show that our approach outperforms traditional methods using LiDAR derived feature maps, and promises to serve as an effective methodology for creating high-quality land cover maps through fusion of complementary types of remote sensing data.

  19. Multi-dimension feature fusion for action recognition

    Science.gov (United States)

    Dong, Pei; Li, Jie; Dong, Junyu; Qi, Lin

    2018-04-01

    Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. The challenge for action recognition is to capture and fuse the multi-dimensional information in video data. In order to take these characteristics into account simultaneously, we present a novel method that fuses features from multiple dimensions, such as chromatic images, depth and optical flow fields. We built our model on multi-stream deep convolutional networks with the help of temporal segment networks, and we extract discriminative spatial and temporal features by fusing the ConvNet towers across dimensions, with different feature weights assigned in order to take full advantage of this multi-dimensional information. Our architecture is trained and evaluated on the currently largest and most challenging benchmark, the NTU RGB-D dataset. The experiments demonstrate that our method outperforms the state-of-the-art methods.

  20. kCCA Transformation-Based Radiometric Normalization of Multi-Temporal Satellite Images

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2018-03-01

    Full Text Available Radiometric normalization is an essential pre-processing step for generating high-quality satellite sequence images. However, most radiometric normalization methods are linear, and they cannot eliminate regular nonlinear spectral differences. Here we introduce the well-established kernel canonical correlation analysis (kCCA) into radiometric normalization for the first time to overcome this problem, which leads to a new kernel method. It can maximally reduce the image differences among multi-temporal images regardless of the imaging conditions and the reflectivity difference. It also perfectly eliminates the impact of nonlinear changes caused by seasonal variation of natural objects. Comparisons with the multivariate alteration detection (CCA-based) normalization and histogram matching, on Gaofen-1 (GF-1) data, indicate that the kCCA-based normalization can preserve more similarity and better correlation between an image pair and effectively avoid color error propagation. The proposed method not only builds the common scale or reference needed for radiometric consistency among GF-1 image sequences, but also highlights the interesting spectral changes while eliminating less interesting spectral changes. Our method enables the application of GF-1 data for change detection, land-use/land-cover change detection, etc.

  1. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Science.gov (United States)

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  2. Estimation of daily reference evapotranspiration (ETo) using artificial intelligence methods: Offering a new approach for lagged ETo data-based modeling

    Science.gov (United States)

    Mehdizadeh, Saeid

    2018-04-01

    Evapotranspiration (ET) is considered as a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters. Moreover, other methods such as empirical equations and artificial intelligence methods can be used to model ET. In the recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, local and external performances of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) were assessed for estimating daily ETo. For this aim, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), Yazd and Zahedan (hyper-arid) were employed during 2000-2014. Two types of input patterns consisting of weather data-based and lagged ETo data-based scenarios were considered to develop the models. Four statistical indicators including root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE) were used to check the accuracy of models. The local performance of models revealed that the MARS and GEP approaches have the capability to estimate daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, the MARS had the best performance in the weather data-based scenarios. On the other hand, considerable differences were not observed in the models' accuracy for the lagged ETo data-based scenarios. In the innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios through combination of MARS and GEP models with autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed novel models named MARS-ARCH and GEP-ARCH improved the performance of ETo modeling compared to the single MARS and GEP. In addition, the external

  3. Artificial force fields for multi-agent simulations of maritime traffic and risk estimation

    NARCIS (Netherlands)

    Xiao, F.; Ligteringen, H.; Van Gulijk, C.; Ale, B.J.M.

    2012-01-01

    A probabilistic risk model is designed to estimate probabilities of collisions for shipping accidents in busy waterways. We propose a method based on multi-agent simulation that uses an artificial force field to model ship maneuvers. The artificial force field is calibrated by AIS data (Automatic

  4. Bi-temporal 3D active appearance models with applications to unsupervised ejection fraction estimation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Pedersen, Dorthe

    2005-01-01

    in four-dimensional MRI. The theoretical foundation of our work is the generative two-dimensional Active Appearance Models by Cootes et al., here extended to bi-temporal, three-dimensional models. Further issues treated include correction of respiratory induced slice displacements, systole detection......, and a texture model pruning strategy. Cross-validation carried out on clinical-quality scans of twelve volunteers indicates that ejection fraction and cardiac blood pool volumes can be estimated automatically and rapidly with accuracy on par with typical inter-observer variability....

  5. Evaluating methods to detect bark beetle-caused tree mortality using single-date and multi-date Landsat imagery

    Science.gov (United States)

    Arjan J. H. Meddens; Jeffrey A. Hicke; Lee A. Vierling; Andrew T. Hudak

    2013-01-01

    Bark beetles cause significant tree mortality in coniferous forests across North America. Mapping beetle-caused tree mortality is therefore important for gauging impacts to forest ecosystems and assessing trends. Remote sensing offers the potential for accurate, repeatable estimates of tree mortality in outbreak areas. With the advancement of multi-temporal disturbance...

  6. Assessment of temporal resolution of multi-detector row computed tomography in helical acquisition mode using the impulse method.

    Science.gov (United States)

    Ichikawa, Katsuhiro; Hara, Takanori; Urikura, Atsushi; Takata, Tadanori; Ohashi, Kazuya

    2015-06-01

    The purpose of this study was to propose a method for assessing the temporal resolution (TR) of multi-detector row computed tomography (CT) (MDCT) in the helical acquisition mode using temporal impulse signals generated by a metal ball passing through the acquisition plane. An 11-mm diameter metal ball was shot along the central axis at approximately 5 m/s during a helical acquisition, and the temporal sensitivity profile (TSP) was measured from the streak image intensities in the reconstructed helical CT images. To assess the validity, we compared the measured and theoretical TSPs for the 4-channel modes of two MDCT systems. A 64-channel MDCT system was used to compare TSPs and image quality of a motion phantom for the pitch factors P of 0.6, 0.8, 1.0 and 1.2 with a rotation time R of 0.5 s, and for two R/P combinations of 0.5/1.2 and 0.33/0.8. Moreover, the temporal transfer functions (TFs) were calculated from the obtained TSPs. The measured and theoretical TSPs showed perfect agreement. The TSP narrowed with an increase in the pitch factor. The image sharpness of the 0.33/0.8 combination was inferior to that of the 0.5/1.2 combination, despite their almost identical full width at tenth maximum values. The temporal TFs quantitatively confirmed these differences. The TSP results demonstrated that the TR in the helical acquisition mode significantly depended on the pitch factor as well as the rotation time, and the pitch factor and reconstruction algorithm affected the TSP shape. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Forecasting energy consumption of multi-family residential buildings using support vector regression: Investigating the impact of temporal and spatial monitoring granularity on performance accuracy

    International Nuclear Information System (INIS)

    Jain, Rishee K.; Smith, Kevin M.; Culligan, Patricia J.; Taylor, John E.

    2014-01-01

    Highlights: • We develop a building energy forecasting model using support vector regression. • Model is applied to data from a multi-family residential building in New York City. • We extend sensor based energy forecasting to multi-family residential buildings. • We examine the impact temporal and spatial granularity has on model accuracy. • Optimal granularity occurs at the by floor in hourly temporal intervals. - Abstract: Buildings are the dominant source of energy consumption and environmental emissions in urban areas. Therefore, the ability to forecast and characterize building energy consumption is vital to implementing urban energy management and efficiency initiatives required to curb emissions. Advances in smart metering technology have enabled researchers to develop “sensor based” approaches to forecast building energy consumption that necessitate less input data than traditional methods. Sensor-based forecasting utilizes machine learning techniques to infer the complex relationships between consumption and influencing variables (e.g., weather, time of day, previous consumption). While sensor-based forecasting has been studied extensively for commercial buildings, there is a paucity of research applying this data-driven approach to the multi-family residential sector. In this paper, we build a sensor-based forecasting model using Support Vector Regression (SVR), a commonly used machine learning technique, and apply it to an empirical data-set from a multi-family residential building in New York City. We expand our study to examine the impact of temporal (i.e., daily, hourly, 10 min intervals) and spatial (i.e., whole building, by floor, by unit) granularity have on the predictive power of our single-step model. Results indicate that sensor based forecasting models can be extended to multi-family residential buildings and that the optimal monitoring granularity occurs at the by floor level in hourly intervals. In addition to implications for
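
    A sensor-based SVR forecaster of the kind described above can be assembled in a few lines with scikit-learn; in the sketch below the synthetic data, feature set, lag choices and hyperparameters are assumptions for illustration, not the study's configuration.

      import numpy as np
      import pandas as pd
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      hours = pd.date_range("2013-01-01", periods=24 * 120, freq="h")
      temp = 10 + 8 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 1, len(hours))
      kwh = 50 + 0.8 * temp + 5 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 2, len(hours))
      df = pd.DataFrame({"hour": hours.hour, "temperature": temp, "kwh": kwh})
      for lag in (1, 2, 24):                      # previous hours and same hour yesterday
          df[f"kwh_lag{lag}"] = df["kwh"].shift(lag)
      df = df.dropna()

      X = df[["hour", "temperature", "kwh_lag1", "kwh_lag2", "kwh_lag24"]]
      y = df["kwh"]
      split = int(len(df) * 0.8)                  # chronological train/test split
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      model.fit(X.iloc[:split], y.iloc[:split])
      print("R^2 on held-out period:", round(model.score(X.iloc[split:], y.iloc[split:]), 3))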

  8. Calibrated Multi-Temporal Edge Images for City Infrastructure Growth Assessment and Prediction

    Science.gov (United States)

    Al-Ruzouq, R.; Shanableh, A.; Boharoon, Z.; Khalil, M.

    2018-03-01

    Urban growth, or urbanization, can be defined as the gradual process of a city's population growth and infrastructure development. It is typically demonstrated by the expansion of a city's infrastructure, mainly the development of its roads and buildings. Uncontrolled urban growth in cities has been responsible for several problems related to the living environment, drinking water, noise and air pollution, waste management, traffic congestion and hydraulic processes. Accurate identification of urban growth is of great importance for urban planning and water/land management. Recent advances in satellite imagery, in terms of improved spatial and temporal resolutions, allow for efficient identification of change patterns and the prediction of built-up areas. In this study, two approaches were adopted to quantify and assess the pattern of urbanization in Ajman City, UAE, during the last three decades. The first approach relies on image processing techniques and multi-temporal Landsat satellite images with ground resolution varying between 15 and 60 meters; the derived edge images (roads and buildings) were used as the basis of change detection. The second approach relies on digitizing features from high-resolution images captured in different years. The latter approach was adopted, as a reference and ground truth, to calibrate the edges extracted from the Landsat images. It was found that the urbanized area increased almost 12-fold during the period 1975-2015, with the growth of buildings and roads being almost parallel until 2005, when the spatial expansion of roads witnessed a steep increase due to the vertical expansion of the city. The extracted edge features were successfully used for change detection and quantification in terms of buildings and roads.

  9. A Mixed Land Cover Spatio-temporal Data Model Based on Object-oriented and Snapshot

    Directory of Open Access Journals (Sweden)

    LI Yinchao

    2016-07-01

    Full Text Available Spatio-temporal data model (STDM) is one of the hot topics in the domains of spatio-temporal databases and data analysis. There is a common view that a universal STDM is always of high complexity due to the various situations of spatio-temporal data. In this article, a mixed STDM is proposed based on object-oriented and snapshot models for modelling and analyzing land cover change (LCC). This model uses the object-oriented STDM to describe the spatio-temporal processes of land cover patches and organize their spatial and attributive properties. In the meantime, it uses the snapshot STDM to present the overall spatio-temporal distribution of LCC via snapshot images. The two types of models are spatially and temporally combined into a mixed version. In addition to presenting the spatio-temporal events themselves, this model can express the transformation events between different classes of spatio-temporal objects. It can be used to create a database of historical LCC data and to perform spatio-temporal statistics, simulation and data mining with the data. In this article, the LCC data of Heilongjiang province are used as a case study to validate the spatio-temporal data management and analysis abilities of the mixed STDM, including database creation, spatio-temporal queries, global evolution analysis and the expression of patch spatio-temporal processes.
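
    The sketch below illustrates, in very reduced form, how an object-oriented patch history and whole-area snapshots might be combined in such a mixed model. All class and field names are hypothetical stand-ins; the article's actual schema may differ.

      # Illustrative sketch of a mixed spatio-temporal data model: object-oriented
      # patch histories combined with per-year snapshots (hypothetical names).
      from dataclasses import dataclass, field
      from typing import Dict, List, Tuple

      @dataclass
      class PatchState:
          year: int
          land_cover: str            # e.g. "forest", "cropland"
          geometry_wkt: str          # patch polygon stored as WKT text here

      @dataclass
      class PatchObject:             # object-oriented part: one object per patch
          patch_id: int
          history: List[PatchState] = field(default_factory=list)

      # snapshot part: per-year summaries of the whole study area,
      # reduced here to a class -> area mapping for brevity
      Snapshot = Dict[str, float]
      snapshots: Dict[int, Snapshot] = {
          2000: {"forest": 120.5, "cropland": 80.2},
          2010: {"forest": 101.3, "cropland": 99.4},
      }

      def transformation_events(p: PatchObject) -> List[Tuple[int, str, str]]:
          """Transformation events between land-cover classes along one patch history."""
          events = []
          for prev, nxt in zip(p.history, p.history[1:]):
              if prev.land_cover != nxt.land_cover:
                  events.append((nxt.year, prev.land_cover, nxt.land_cover))
          return events

      patch = PatchObject(1, [PatchState(2000, "forest", "POLYGON((...))"),
                              PatchState(2010, "cropland", "POLYGON((...))")])
      print(transformation_events(patch))   # [(2010, 'forest', 'cropland')]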

  10. Multi-population Genomic Relationships for Estimating Current Genetic Variances Within and Genetic Correlations Between Populations.

    Science.gov (United States)

    Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L

    2017-10-01

    Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known, based on literature, that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
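
    A small numerical sketch of the block structure and the scaling-factor condition mentioned above is given below. The specific scaling (sum of 2pq per population, VanRaden-style) and the direction of the correction factor are assumptions for illustration, not the article's exact derivation.

      # Sketch of a two-population genomic relationship matrix from centred genotypes.
      # The chosen scaling (sum of 2*p*q per population) is an illustrative assumption.
      import numpy as np

      rng = np.random.default_rng(1)
      n1, n2, m = 20, 25, 500                     # animals per population, markers
      geno1 = rng.integers(0, 3, size=(n1, m)).astype(float)
      geno2 = rng.integers(0, 3, size=(n2, m)).astype(float)

      def centre(geno):
          p = geno.mean(axis=0) / 2.0             # current allele frequencies
          return geno - 2.0 * p, np.sum(2.0 * p * (1.0 - p))

      Z1, s1 = centre(geno1)                      # s1, s2: within-population scaling factors
      Z2, s2 = centre(geno2)
      s12 = np.sqrt(s1 * s2)                      # across-population scaling factor

      G11 = Z1 @ Z1.T / s1                        # block for population 1
      G22 = Z2 @ Z2.T / s2                        # block for population 2
      G12 = Z1 @ Z2.T / s12                       # between-population block
      G = np.block([[G11, G12], [G12.T, G22]])

      # When s12 == sqrt(s1*s2), as above, the estimated genetic correlation needs no
      # rescaling; otherwise a correction factor built from s1, s2 and s12 (e.g.
      # s12/sqrt(s1*s2) or its inverse, depending on parameterisation) would apply.
      print(G.shape, np.allclose(s12, np.sqrt(s1 * s2)))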

  11. A Temporal Extension for PostgreSQL (Temporální rozšíření pro PostgreSQL)

    OpenAIRE

    Jelínek, Radek

    2015-01-01

    This thesis deals with a temporal extension of the PostgreSQL database system. It gives the reader a brief introduction to temporal databases and to the PostgreSQL database system, presents a design of the temporal extension for PostgreSQL, and describes a concrete implementation accompanied by examples. Existing temporal database systems and practical uses of temporal databases are also covered.

  12. Evaluation of spatial and temporal characteristics of GNSS-derived ZTD estimates in Nigeria

    Science.gov (United States)

    Isioye, Olalekan Adekunle; Combrinck, Ludwig; Botai, Joel

    2018-05-01

    This study presents an in-depth analysis to comprehend the spatial and temporal variability of the zenith tropospheric delay (ZTD) over Nigeria during the period 2010-2014, using estimates from Global Navigation Satellite Systems (GNSS) data. GNSS data address the drawbacks of traditional techniques (e.g. radiosondes) for observing periodicities in ZTD. The ZTD estimates show weak spatial dependence among the stations, though this can be attributed to the density of stations in the network. Tidal oscillations are observed at the GNSS stations. These oscillations have diurnal and semi-diurnal components, with the diurnal components, as seen from the ZTD, being the principal source of the oscillations. This may be ascribed to temporal variations in atmospheric water vapour on a diurnal scale. In addition, the diurnal ZTD cycles exhibited noteworthy seasonal dependence, with larger amplitudes in the rainy (wet) season and smaller ones in the harmattan (dry) season. Notably, the stations in the northern part of the country reach very high amplitudes in the months of June, July and August, at the peak of the wet season, which is characterized by very high rainfall. This pinpoints the fact that, given the small amount of water vapour in the atmosphere (usually around 10%), its variations greatly influence the corresponding diurnal and seasonal variations of ZTD. This study further affirms the potential relevance of ground-based GNSS data to atmospheric studies. GNSS data analysis is therefore recommended as a tool for future exploration of Nigerian weather and climate.

  13. Is correction necessary when clinically determining quantitative cerebral perfusion parameters from multi-slice dynamic susceptibility contrast MR studies?

    International Nuclear Information System (INIS)

    Salluzzi, M; Frayne, R; Smith, M R

    2006-01-01

    Several groups have modified the standard singular value decomposition (SVD) algorithm to produce delay-insensitive cerebral blood flow (CBF) estimates from dynamic susceptibility contrast (DSC) perfusion studies. However, new dependencies of CBF estimates on bolus arrival times and slice position in multi-slice studies have recently been recognized. These conflicting findings can be reconciled by accounting for several experimental and algorithmic factors. Using simulation and clinical studies, the non-simultaneous measurement of arterial and tissue concentration curves (relative slice position) in a multi-slice study is shown to affect time-related perfusion parameters, e.g. arterial-tissue-delay measurements. However, the current clinical impact of relative slice position on amplitude-related perfusion parameters, e.g. CBF, can be expected to be small unless any of the following conditions are present, individually or in combination: (a) high concentration curve signal-to-noise ratios, (b) small tissue mean transit times, (c) narrow arterial input functions or (d) low temporal resolution of the DSC image sequence. Recent improvements in magnetic resonance (MR) technology can easily be expected to lead to scenarios where these effects become increasingly important sources of inaccuracy for all perfusion parameter estimates. We show that using Fourier-interpolated (high temporal resolution) residue functions reduces the systematic error of the perfusion parameters obtained from multi-slice studies.

  14. Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex

    Science.gov (United States)

    Lafer-Sousa, Rosa; Conway, Bevil R.

    2014-01-01

    Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkey to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations; color-biased regions spanned mid-peripheral representations; and place-biased regions overlapped peripheral representations. These results suggest that IT comprises parallel, multi-stage processing networks subject to one organizing principle. PMID:24141314

  15. A Trace Data-Based Approach for an Accurate Estimation of Precise Utilization Maps in LTE

    Directory of Open Access Journals (Sweden)

    Almudena Sánchez

    2017-01-01

    Full Text Available For network planning and optimization purposes, mobile operators make use of Key Performance Indicators (KPIs), computed from Performance Measurements (PMs), to determine whether network performance needs to be improved. In current networks, PMs, and therefore KPIs, suffer from a lack of precision due to insufficient temporal and/or spatial granularity. In this work, an automatic method, based on data traces, is proposed to improve the accuracy of radio network utilization measurements collected in a Long-Term Evolution (LTE) network. The method's output is an accurate estimate of the spatial and temporal distribution of the cell utilization ratio, and the approach can be extended to other indicators. The method can be used to improve automatic network planning and optimization algorithms in a centralized Self-Organizing Network (SON) entity, since potential issues can be detected and located more precisely inside a cell thanks to the improved temporal and spatial precision. The proposed method is tested with real connection traces gathered over a large geographical area of a live LTE network and considers overload problems due to trace file size limitations, which is a key consideration when analysing a large network. Results show how these distributions provide very detailed information on network utilization compared to cell-based statistics.
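
    In the spirit of the trace-based aggregation described above, the toy sketch below bins per-connection trace records into a per-cell, per-hour, per-distance-ring utilization grid. The record fields (timestamp, cell id, distance bin, used and available resource blocks) are hypothetical stand-ins for the trace attributes used in the paper.

      # Toy sketch of building a fine-grained utilization map from connection traces.
      from collections import defaultdict

      traces = [
          {"t": "2017-01-01 10:03", "cell": "A1", "dist_bin": 0, "used_prbs": 40, "avail_prbs": 100},
          {"t": "2017-01-01 10:07", "cell": "A1", "dist_bin": 2, "used_prbs": 80, "avail_prbs": 100},
          {"t": "2017-01-01 10:48", "cell": "A1", "dist_bin": 2, "used_prbs": 95, "avail_prbs": 100},
      ]

      def hour_key(ts):                      # temporal granularity: one hour
          return ts[:13]

      acc = defaultdict(lambda: [0, 0])      # (cell, hour, distance ring) -> [used, available]
      for r in traces:
          key = (r["cell"], hour_key(r["t"]), r["dist_bin"])
          acc[key][0] += r["used_prbs"]
          acc[key][1] += r["avail_prbs"]

      utilization = {k: used / avail for k, (used, avail) in acc.items()}
      for k, u in sorted(utilization.items()):
          print(k, f"{u:.2f}")               # per-cell, per-hour, per-ring utilization ratio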

  16. Best-estimated multi-dimensional calculation during LB LOCA for APR1400

    International Nuclear Information System (INIS)

    Oh, D. Y.; Bang, Y. S.; Cheong, A. J.; Woong, S.; Korea, W.

    2010-01-01

    Best-estimated (BE) calculation with uncertainty quantification for emergency core cooling system (ECCS) performance analysis during a Loss of Coolant Accident (LOCA) is increasingly used in the nuclear industry and in regulation. In Korea, the demand for regulatory audit calculations is continuously increasing to support safety reviews for life extension, power up-rating and advanced nuclear reactor designs. The thermal-hydraulic system code MARS (Multi-dimensional Analysis of Reactor Safety), with multi-dimensional capability, is used for the audit calculations. It describes the complicated phenomena in the reactor coolant system by effectively consolidating the one-dimensional RELAP5/MOD3 code with the multi-dimensional COBRA-TF code. The advanced power reactor (APR1400) to be evaluated has four separate hydraulic trains of the high pressure injection system (HPSI) with direct vessel injection (DVI), which differs from existing commercial PWRs. The thermal-hydraulic behavior of a DVI plant would also be considerably different from that of cold-leg safety injection, since the low pressure safety injection system is eliminated and the high pressure safety flow is injected at a specific elevation of the reactor vessel downcomer. The ECCS bypass induced by downcomer boiling, due to hot-wall heating of the reactor vessel during the reflooding phase, is one of the important phenomena that should be considered in DVI plants. Therefore, in this study, BE calculations with one-dimensional (1-D) and multi-dimensional (multi-D) MARS models during a LBLOCA are performed for the APR1400 plant. In the multi-D evaluation, the reactor vessel is modeled by multi-D components, and specific treatment of the flow paths inside the reactor vessel, e.g., the upper guide structure, is essential. The concept of a hot zone is adopted to simulate the limiting thermal-hydraulic conditions surrounding the hot rod, similar to the hot channel in 1-D. Also, alternative treatment of the hot rods in multi-D is

  17. Large-Scale, Multi-Temporal Remote Sensing of Palaeo-River Networks: A Case Study from Northwest India and its Implications for the Indus Civilisation

    Directory of Open Access Journals (Sweden)

    Hector A. Orengo

    2017-07-01

    Full Text Available Remote sensing has considerable potential to contribute to the identification and reconstruction of lost hydrological systems and networks. Remote sensing-based reconstructions of palaeo-river networks have commonly employed single or limited time-span imagery, which limits their capacity to identify features in complex and varied landscape contexts. This paper presents a seasonal multi-temporal approach to the detection of palaeo-rivers over large areas based on long-term vegetation dynamics and spectral decomposition techniques. Twenty-eight years of Landsat 5 data, a total of 1711 multi-spectral images, have been bulk-processed using the Google Earth Engine© Code Editor and its cloud computing infrastructure. The use of multi-temporal data has allowed us to overcome seasonal cultivation patterns and long-term visibility issues related to recent crop selection, extensive irrigation and land-use patterns. The application of this approach to the Sutlej-Yamuna interfluve (northwest India), a core area of the Bronze Age Indus Civilisation, has enabled the reconstruction of an unexpectedly complex palaeo-river network comprising more than 8000 km of palaeo-channels. It has also enabled the definition of the morphology of these relict courses, which provides insights into the environmental conditions under which they operated. These new data will contribute to a better understanding of the settlement distribution and environmental settings in which this civilisation, often considered riverine, operated.
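
    The sketch below gives a hedged impression of what a seasonal multi-temporal Landsat composite looks like in the Earth Engine Python API. The collection ID, region box and band maths are simplified placeholders and not the authors' actual script, which ran in the Earth Engine Code Editor.

      # Hedged sketch: long-term, per-month NDVI composites from Landsat 5 in
      # Google Earth Engine (Python API). Region and collection are assumptions.
      import ee

      ee.Initialize()

      region = ee.Geometry.Rectangle([74.5, 29.0, 77.0, 31.0])   # rough Sutlej-Yamuna box (assumed)

      landsat5 = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
                  .filterBounds(region)
                  .filterDate("1988-01-01", "2012-12-31"))

      def add_ndvi(img):
          # NDVI from Landsat 5 surface-reflectance bands (SR_B4 = NIR, SR_B3 = red)
          return img.addBands(img.normalizedDifference(["SR_B4", "SR_B3"]).rename("NDVI"))

      with_ndvi = landsat5.map(add_ndvi)

      # One composite per calendar month captures seasonal vegetation dynamics;
      # long-term per-month medians smooth out clouds and single-year crop choices.
      monthly = ee.ImageCollection([
          with_ndvi.filter(ee.Filter.calendarRange(m, m, "month")).select("NDVI").median()
          for m in range(1, 13)
      ])

      print(monthly.size().getInfo())   # 12 seasonal NDVI composites over ~25 years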

  18. Soil moisture modelling of a SMOS pixel: interest of using the PERSIANN database over the Valencia Anchor Station

    Directory of Open Access Journals (Sweden)

    S. Juglea

    2010-08-01

    Full Text Available In the framework of Soil Moisture and Ocean Salinity (SMOS) Calibration/Validation (Cal/Val) activities, this study addresses the use of the PERSIANN-CCS database in hydrological applications to accurately simulate a whole SMOS pixel by representing the spatial and temporal heterogeneity of the soil moisture fields over a wide area (50×50 km2). The study focuses on the Valencia Anchor Station (VAS) experimental site, in Spain, which is one of the main SMOS Cal/Val sites in Europe.

    A faithful representation of the soil moisture distribution at SMOS pixel scale (50×50 km2) requires an accurate estimation of the amount and temporal/spatial distribution of precipitation. To quantify the gain of using the comprehensive PERSIANN database instead of sparsely distributed rain gauge measurements, comparisons between in situ observations and satellite rainfall data are done both at point and areal scale. An overestimation of the satellite rainfall amounts is observed in most of the cases (about 66%), but the precipitation occurrences are in general retrieved (about 67%).

    To simulate the high variability in space and time of surface soil moisture, a Soil Vegetation Atmosphere Transfer (SVAT) model – ISBA (Interactions between Soil, Biosphere and Atmosphere) – is used. The interest of using satellite rainfall estimates, as well as the influence that precipitation events can induce on the modelling of the water content in the soil, is depicted by a comparison between different soil moisture data. Point-like and spatialized simulated data using rain gauge observations or the PERSIANN-CCS database, as well as ground measurements, are used. It is shown that good agreement is reached over most of the year, the precipitation differences having less impact upon the simulated soil moisture. The behaviour of the simulated surface soil moisture at SMOS scale is verified by the use of remote sensing data from the Advanced

  19. Measuring the volume of temporal lobe in healthy Chinese adults of the Han nationality on the high-resolution MRI

    International Nuclear Information System (INIS)

    Jia Kefeng; Wu Li; Duan Hui; Han Dan; Chen Nan; Li Kuncheng

    2010-01-01

    Objective: To explore the morphological features of the temporal lobe in healthy Chinese Han adults on high-resolution MRI and provide morphological data of the temporal lobe for the construction of a database for the Chinese Standard Brain. Methods: This is a clinical multi-center study. Three hundred healthy Chinese volunteers (150 male, 150 female) recruited from 15 hospitals were divided equally into five groups according to their age, i.e., 18-30 (Group A), 31-40 (Group B), 41-50 (Group C), 51-60 (Group D), 61-70 (Group E). All subjects were scanned using a T1WI 3D MPRAGE sequence and volumes of the standardized temporal lobe were collected. The bilateral volumes of the standardized temporal lobe were compared by variance analysis between male and female subjects and among the five age groups. Results: The mean volumes of the left and right temporal lobe were (97 126±15 703) mm3 and (97 015±15 545) mm3 respectively for men, and (95 123±14 564) mm3 and (96 423±13 407) mm3 for women. The difference in temporal lobe volume between males and females was not significant on the same side (F=1.336, 0.127; P=0.249, 0.722). The left temporal lobe volumes of Groups A-E were (93 873±13 351), (95 566±11 964), (101 890±14 511), (93 972±14 050) and (95 636±19 864) mm3 respectively, and those on the right side were (93 409±10 984), (98 158±16 392), (102 079±15 112), (95 448±11 123) and (94 658±16 928) mm3. There were significant differences among the five groups for both left and right temporal lobe volume (F=2.940, 3.514; P=0.021, 0.008). Further pairwise comparison revealed that the left and right temporal lobe volumes in Group C were higher than those of Groups A and D (P<0.05). Conclusion: High-resolution MRI can offer detailed images and precise morphological data of the temporal lobe, which provides morphological data for the construction of a database for the Chinese Standard Brain. (authors)

  20. Updated folate data in the Dutch Food Composition Database and implications for intake estimates

    Directory of Open Access Journals (Sweden)

    Susanne Westenbrink

    2012-04-01

    Full Text Available Background and objective: Nutrient values are influenced by the analytical method used. Food folate measured by high performance liquid chromatography (HPLC) or by microbiological assay (MA) yields different results, with in general higher results from MA than from HPLC. This raises the question of how to deal with different analytical methods in compiling standardised and internationally comparable food composition databases. A recent inventory on folate in European food composition databases indicated that currently MA is more widely used than HPLC. Since older Dutch values were produced by HPLC and newer values by MA, analytical methods and procedures for compiling folate data in the Dutch Food Composition Database (NEVO) were reconsidered and folate values were updated. This article describes the impact of this revision of folate values in the NEVO database as well as the expected impact on the folate intake assessment in the Dutch National Food Consumption Survey (DNFCS). Design: The folate values were revised by replacing HPLC with MA values from recent Dutch analyses. Previously, MA folate values taken from foreign food composition tables had been recalculated to the HPLC level, assuming a 27% lower value from HPLC analyses. These recalculated values were replaced by the original MA values. Dutch HPLC and MA values were compared to each other. Folate intake was assessed for a subgroup within the DNFCS to estimate the impact of the update. Results: In the updated NEVO database nearly all folate values were produced by MA or derived from MA values, which resulted in an average increase of 24%. The median habitual folate intake in young children was increased by 11–15% using the updated folate values. Conclusion: The current approach for folate in NEVO resulted in more transparency in data production and documentation and higher comparability among European databases. Results of food consumption surveys are expected to show higher folate intakes

  1. Reconstructing the temporal ordering of biological samples using microarray data.

    Science.gov (United States)

    Magwene, Paul M; Lizardi, Paul; Kim, Junhyong

    2003-05-01

    Accurate time series for biological processes are difficult to estimate due to problems of synchronization, temporal sampling and rate heterogeneity. Methods are needed that can utilize multi-dimensional data, such as those resulting from DNA microarray experiments, in order to reconstruct time series from unordered or poorly ordered sets of observations. We present a set of algorithms for estimating temporal orderings from unordered sets of sample elements. The techniques we describe are based on modifications of a minimum-spanning tree calculated from a weighted, undirected graph. We demonstrate the efficacy of our approach by applying these techniques to an artificial data set as well as several gene expression data sets derived from DNA microarray experiments. In addition to estimating orderings, the techniques we describe also provide useful heuristics for assessing relevant properties of sample datasets such as noise and sampling intensity, and we show how a data structure called a PQ-tree can be used to represent uncertainty in a reconstructed ordering. Academic implementations of the ordering algorithms are available as source code (in the programming language Python) on our web site, along with documentation on their use. The artificial 'jelly roll' data set upon which the algorithm was tested is also available from this web site. The publicly available gene expression data may be found at http://genome-www.stanford.edu/cellcycle/ and http://caulobacter.stanford.edu/CellCycle/.
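
    The sketch below illustrates the core idea of ordering unordered samples via a minimum-spanning tree, using scipy on a synthetic one-dimensional trajectory. The PQ-tree handling and other heuristics described in the record are not reproduced here.

      # Minimal sketch: recover a sample ordering from a minimum-spanning tree.
      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(42)
      t_true = np.sort(rng.uniform(0, 1, 30))                 # hidden temporal order
      X = np.c_[np.cos(np.pi * t_true), np.sin(np.pi * t_true)]  # noisy 1-D trajectory (an arc)
      X += rng.normal(0, 0.02, X.shape)

      D = squareform(pdist(X))                                # pairwise distances
      mst = minimum_spanning_tree(D).toarray()
      mst = mst + mst.T                                       # symmetrise for traversal

      # walk the tree from one endpoint (a node of degree 1) by depth-first search
      degrees = (mst > 0).sum(axis=1)
      start = int(np.argmax(degrees == 1))
      order, stack, seen = [], [start], set()
      while stack:
          node = stack.pop()
          if node in seen:
              continue
          seen.add(node)
          order.append(node)
          stack.extend(int(j) for j in np.nonzero(mst[node])[0] if j not in seen)

      print(order[:10])   # recovered ordering roughly follows (or reverses) the true order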

  2. INIST: databases reorientation

    International Nuclear Information System (INIS)

    Bidet, J.C.

    1995-01-01

    INIST is a CNRS (Centre National de la Recherche Scientifique) laboratory devoted to the processing of scientific and technical information and to the management of this information compiled in a database. A reorientation of the database content was proposed in 1994 to increase the transfer of research towards enterprises and services, to develop more automated access to the information, and to create a quality assurance plan. The catalog of publications comprises 5800 periodical titles (1300 for fundamental research and 4500 for applied research). A science and technology multi-thematic database will be created in 1995 for the retrieval of applied and technical information. ''Grey literature'' (reports, theses, proceedings...) and human and social sciences data will be added to the base by using information selected from the existing GRISELI and Francis databases. Strong modifications are also planned in the thematic coverage of Earth sciences and will considerably reduce the geological information content. (J.S.). 1 tab

  3. Estimating Activity Patterns Using Spatio-temporal Data of Cellphone Networks

    Directory of Open Access Journals (Sweden)

    Zahedi Seyedmostafa

    2016-01-01

    Full Text Available The tendency towards using activity-based models to predict trip demand has increased dramatically over recent years, but these models have suffered from insufficient data for calibration. This paper discusses ways to process cellphone spatio-temporal data in a manner that makes it comprehensible for traffic interpretation and proposes methods to infer urban mobility and activity patterns from these data. The movements of each subscriber are described by a sequence of stays and trips, and each stay is labeled with an activity. The types of activities are estimated using features such as land use, duration of stay, frequency of visits, arrival time at the activity and its distance from home. Finally, the chains of trips are identified and the different patterns that citizens follow to participate in activities are determined. The data comprise 144 million records of the locations of 300,000 citizens of Shiraz at five-minute intervals.
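
    A toy sketch of the stay-labeling step is given below. The thresholds, rule order and field names are invented for illustration and are not the calibrated rules used in the study.

      # Toy sketch of labelling a stay extracted from cellphone location data.
      from datetime import datetime

      def label_stay(stay, home_cell):
          """stay: dict with cell id, arrival/departure times, visit frequency, land use."""
          arrive = datetime.fromisoformat(stay["arrival"])
          hours = (datetime.fromisoformat(stay["departure"]) - arrive).total_seconds() / 3600

          if stay["cell"] == home_cell:
              return "home"
          if stay["land_use"] == "industrial" and 7 <= arrive.hour <= 10 and hours >= 6:
              return "work"
          if stay["visits_per_month"] >= 12 and hours >= 4:
              return "work"                      # frequent long stays also treated as work
          if hours < 1:
              return "short errand"
          return "other activity"                # shopping, leisure, etc.

      stay = {"cell": "C17", "arrival": "2015-03-02T08:15", "departure": "2015-03-02T17:40",
              "visits_per_month": 18, "land_use": "industrial"}
      print(label_stay(stay, home_cell="C03"))   # -> "work"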

  4. A Spatio-Temporal Enhanced Metadata Model for Interdisciplinary Instant Point Observations in Smart Cities

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-02-01

    Full Text Available Due to the incomplete and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, it is a great challenge to share the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing was proposed. The proposed Data Meta-Model (DMM) focuses on the spatio-temporal characteristics and formulates a ten-tuple information description structure to provide a unified and spatio-temporal enhanced description of the point observation data. To verify the feasibility of point observation data sharing based on DMM, a prototype system was established, and the performance improvement of the Sensor Observation Service (SOS) for the instant access and insertion of point observation data was realized through the proposed MongoSOS, which is a Not Only SQL (NoSQL) SOS based on the MongoDB database and has the capability of distributed storage. For example, the response time of access and insertion for navigation and positioning data can be realized at the millisecond level. Case studies were conducted, including gas concentration monitoring for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS), used for recording dynamic observation information. The results demonstrated the versatility and extensibility of the DMM, and the spatio-temporal enhanced sharing of interdisciplinary instant point observations in smart cities.
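
    The sketch below shows, in hedged form, how a spatio-temporally indexed point-observation record could be stored and queried in MongoDB with pymongo. The ten-tuple field names are hypothetical placeholders for the elements of the proposed DMM, and a local MongoDB instance is assumed.

      # Hedged sketch: storing a point observation with spatial and temporal indexes.
      from datetime import datetime, timezone
      from pymongo import MongoClient, GEOSPHERE

      client = MongoClient("mongodb://localhost:27017")      # assumes a local MongoDB
      obs = client.smart_city.point_observations
      obs.create_index([("location", GEOSPHERE)])            # spatial index
      obs.create_index("phenomenon_time")                    # temporal index

      record = {                                             # illustrative ten-tuple-style document
          "identifier": "obs-000123",
          "procedure": "gas-sensor-17",
          "observed_property": "CH4_concentration",
          "feature_of_interest": "pipeline-segment-42",
          "phenomenon_time": datetime(2016, 5, 1, 8, 30, tzinfo=timezone.utc),
          "result_time": datetime(2016, 5, 1, 8, 30, 2, tzinfo=timezone.utc),
          "location": {"type": "Point", "coordinates": [114.36, 30.54]},
          "result": 4.2,
          "unit": "ppm",
          "quality": "validated",
      }
      obs.insert_one(record)

      # instant access: observations within ~500 m of a point of interest
      query = {"location": {"$geoWithin": {"$centerSphere": [[114.36, 30.54], 0.5 / 6378.1]}}}
      print(obs.count_documents(query))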

  5. Adaptive data migration scheme with facilitator database and multi-tier distributed storage in LHD

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Masaki, Ohsuna; Mamoru, Kojima; Setsuo, Imazu; Miki, Nonomura; Kenji, Watanabe; Masayoshi, Moriya; Yoshio, Nagayama; Kazuo, Kawahata

    2008-01-01

    The recent 'data explosion' induces demand for highly flexible storage extension and data migration. The data amount of LHD plasma diagnostics has grown to 4.6 times that of three years before. Frequent migration or replication among many distributed storage systems becomes mandatory, and thus increases the human operational costs. To reduce these costs computationally, a new adaptive migration scheme has been developed on LHD's multi-tier distributed storage. So-called HSM (Hierarchical Storage Management) software usually adopts a low-level cache mechanism or simple watermarks for triggering data stage-in and stage-out between two storage devices. However, the new scheme can deal with a number of distributed storage systems through the facilitator database, which manages all data locations together with their access histories and retrieval priorities. Not only inter-tier migration but also intra-tier replication and moving are manageable, so that it can be a big help in extending or replacing storage equipment. The access history of each data object is also utilized to optimize the volume size of the fast and costly RAID, in addition to a normal cache effect for frequently retrieved data. The effectiveness of the new scheme has been verified, so that LHD's multi-tier distributed storage and other next-generation experiments can obtain such flexible expandability.
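
    A conceptual sketch of a facilitator-database-driven migration decision is shown below. The tier names, thresholds and scoring rule are invented for illustration; the actual LHD implementation is far more elaborate.

      # Conceptual sketch: decide a target storage tier from access history entries
      # kept in a facilitator database (all names and thresholds are hypothetical).
      import time

      TIERS = ["fast_raid", "capacity_disk", "archive"]   # tier 0 = fastest/most costly

      def target_tier(entry, now=None):
          """entry: facilitator-DB record with access history and retrieval priority."""
          now = now or time.time()
          days_idle = (now - entry["last_access"]) / 86400
          score = entry["access_count_30d"] + 5 * entry["retrieval_priority"] - 0.5 * days_idle
          if score > 20:
              return TIERS[0]
          if score > 0:
              return TIERS[1]
          return TIERS[2]

      entry = {"object": "shot_123456/diag_A", "last_access": time.time() - 40 * 86400,
               "access_count_30d": 1, "retrieval_priority": 1}
      current, target = "fast_raid", target_tier(entry)
      if current != target:
          print(f"migrate {entry['object']}: {current} -> {target}")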

  6. Simple analytical expression for crosstalk estimation in homogeneous trench-assisted multi-core fibers

    DEFF Research Database (Denmark)

    Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa

    2014-01-01

    An analytical expression for the mode coupling coefficient in homogeneous trench-assisted multi-core fibers is derived, which has a simple relationship with the one in normal step-index structures. The amount of inter-core crosstalk reduction (in dB) with trench-assisted structures compared to the one with normal step-index structures can then be written as a simple expression. Comparison with numerical simulations confirms that the obtained analytical expression has very good accuracy for crosstalk estimation. The crosstalk properties of trench-assisted multi-core fibers, such as the crosstalk dependence on core pitch and the wavelength-dependent crosstalk, can be obtained from this simple analytical expression.

  7. The Probability of Default Under IFRS 9: Multi-period Estimation and Macroeconomic Forecast

    Directory of Open Access Journals (Sweden)

    Tomáš Vaněk

    2017-01-01

    Full Text Available In this paper we propose a straightforward, flexible and intuitive computational framework for the multi-period probability of default estimation incorporating macroeconomic forecasts. The concept is based on Markov models, the estimated economic adjustment coefficient and the official economic forecasts of the Czech National Bank. The economic forecasts are taken into account in a separate step to better distinguish between idiosyncratic and systemic risk. This approach is also attractive from the interpretational point of view. The proposed framework can be used especially when calculating lifetime expected credit losses under IFRS 9.
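
    The sketch below illustrates the general idea of a Markov-chain multi-period PD with a macroeconomic adjustment applied in a separate step. The transition matrix, the adjustment coefficient and the way it is applied are illustrative assumptions, not the paper's calibrated framework.

      # Sketch: cumulative multi-period PD from a Markov migration matrix,
      # with a separate macroeconomic scaling of the default intensities.
      import numpy as np

      # one-period migration matrix over (performing, underperforming, default);
      # default is absorbing
      P = np.array([[0.92, 0.06, 0.02],
                    [0.30, 0.55, 0.15],
                    [0.00, 0.00, 1.00]])

      def lifetime_pd(P, periods, macro_adjustment=1.0):
          """Cumulative PD per horizon for an exposure starting in state 0.

          macro_adjustment > 1 scales up default intensities (downturn forecast),
          < 1 scales them down; rows are renormalised to remain stochastic.
          """
          Q = P.copy()
          Q[:-1, -1] *= macro_adjustment
          Q[:-1, :-1] *= ((1 - Q[:-1, -1]) / Q[:-1, :-1].sum(axis=1))[:, None]
          state = np.array([1.0, 0.0, 0.0])
          pds = []
          for _ in range(periods):
              state = state @ Q
              pds.append(state[-1])          # cumulative probability of being in default
          return pds

      print([round(p, 4) for p in lifetime_pd(P, periods=5, macro_adjustment=1.2)])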

  8. Evaluation of stature estimation from the database for forensic anthropology.

    Science.gov (United States)

    Wilson, Rebecca J; Herrmann, Nicholas P; Jantz, Lee Meadows

    2010-05-01

    Trotter and Gleser's (1-3) stature equations, conventionally used to estimate stature, are not appropriate to use in the modern forensic context. In this study, stature is assessed with a modern (birth years after 1944) American sample (N = 242) derived from the National Institute of Justice Database for Forensic Anthropology in the United States and the Forensic Anthropology Databank. New stature formulae have been calculated using forensic stature (FSTAT) and a combined dataset of forensic, cadaver, and measured statures referred to as Any Stature (ASTAT). The new FSTAT-based equations had an improved accuracy in Blacks with little improvement over Ousley's (4) equations for Whites. ASTAT-based equations performed equal to those of FSTAT equations and may be more appropriate, because they reflect both the variation in reported statures and in cadaver statures. It is essential to use not only equations based on forensic statures, but also equations based on modern samples.
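
    For illustration only, the sketch below derives a stature-estimation equation by ordinary least squares on long-bone length and reports a standard error of estimate. The numbers are fabricated toy values, not measurements from the Forensic Anthropology Data Bank.

      # Toy sketch: fit a stature = a + b * femur-length regression and its SEE.
      import numpy as np

      femur_cm = np.array([41.2, 43.5, 44.1, 45.8, 47.0, 48.3, 50.1])     # toy femur lengths
      stature_cm = np.array([158.0, 163.5, 165.0, 169.2, 172.0, 175.5, 180.1])

      slope, intercept = np.polyfit(femur_cm, stature_cm, deg=1)
      pred = intercept + slope * femur_cm
      see = np.sqrt(np.sum((stature_cm - pred) ** 2) / (len(stature_cm) - 2))  # standard error of estimate

      print(f"stature = {intercept:.1f} + {slope:.2f} * femur length (SEE = {see:.1f} cm)")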

  9. Multi-temporal analysis of slope movements in the Southern Apennines of Italy

    Science.gov (United States)

    Parise, M.

    2012-04-01

    Many types of thematic maps dealing with slope movements have been proposed in the scientific literature to describe the features and activity of landslides. One of the most common is the classical landslide inventory map: this can be defined as a photograph of the landscape at a given time, that is, the moment of the field surveys or the date of the air photographs and/or satellite images used for mapping. Unless further data (such as dates of occurrence of the landslides, frequency of movement, etc.) are added, it does nothing more than depict the instability situation at that given time. In order to gain more insight into the history and evolution of unstable slopes, a multi-temporal approach must be adopted. This can be carried out through a multi-temporal analysis, based upon aerial photo interpretation of different years, possibly integrated by field surveys. Production of a landslide inventory map for each available set of air photos results in the final output of landslide activity maps (LAMs), derived from comparison of the individual inventory maps. LAMs provide insights into the evolution of the landslide process, allowing a relative history of the mass movement to be reconstructed and highlighting the most active sectors in time. All this information may prove extremely useful for correlating likely movements with anthropogenic activity or specific triggering factors, such as a seismic event or a rainstorm. In addition, LAMs can also be used effectively to evaluate the efficiency of remediation works. The Southern Apennines of Italy are intensely affected by a variety of slope movements, which occur in very different settings and cause severe damage to the built environment, claiming a high number of casualties every year. Notwithstanding the availability of landslide maps for the whole Italian territory, with very good detail at local sites of interest, what is often lacking over the country is a thorough knowledge of the overall

  10. Practical querying of temporal data via OWL 2 QL and SQL:2011

    CSIR Research Space (South Africa)

    Klarman, S

    2013-12-01

    Full Text Available We develop a practical approach to querying temporal data stored in temporal SQL:2011 databases through the semantic layer of OWL 2 QL ontologies. An interval-based temporal query language (TQL), which we propose for this task, is defined via...

  11. Rice monitoring with multi-temporal and dual-polarimetric TerraSAR-X data

    Science.gov (United States)

    Koppe, Wolfgang; Gnyp, Martin L.; Hütt, Christoph; Yao, Yinkun; Miao, Yuxin; Chen, Xinping; Bareth, Georg

    2013-04-01

    This study assesses the use of TerraSAR-X data for monitoring rice cultivation in the Sanjiang Plain in Heilongjiang Province, Northeast China. The main objective is the understanding of the coherent co-polarized X-band backscattering signature of rice at different phenological stages in order to retrieve growth status. For this, multi-temporal dual polarimetric TerraSAR-X High Resolution SpotLight data (HH/VV) as well as single polarized StripMap (VV) data were acquired over the test site. In conjunction with the satellite data acquisition, a ground truth field campaign was carried out. The backscattering coefficients at HH and VV of the observed fields were extracted on the different dates and analysed as a function of rice phenology to provide a physical interpretation for the co-polarized backscatter response in a temporal and spatial manner. Then, a correlation analysis was carried out between the TerraSAR-X backscattering signal and the rice biomass of stem, leaf and head to evaluate the relationship with different vertical layers within the rice vegetation. HH and VV signatures show two phases of backscatter increase, one at the beginning up to 46 days after transplanting and a second one from 80 days after transplanting onwards. The first increase is related to increasing double bounce reflection from the surface-stem interaction. Then, a decreasing trend of both polarizations can be observed due to signal attenuation by increasing leaf density. A second slight increase is observed during senescence. The correlation analysis showed a significant relationship with different vertical layers at different phenological stages, which supports the physical interpretation of the X-band backscatter of rice. The seasonal backscatter coefficients showed that X-band is highly sensitive to changes in size, orientation and density of the dominant elements in the upper canopy.

  12. How and to what extent does precipitation on multi-temporal scales and soil moisture at different depths determine carbon flux responses in a water-limited grassland ecosystem?

    Science.gov (United States)

    Fang, Qingqing; Wang, Guoqiang; Xue, Baolin; Liu, Tingxi; Kiem, Anthony

    2018-04-23

    In water-limited ecosystems, hydrological processes significantly affect the carbon flux. The semi-arid grassland ecosystem is particularly sensitive to variations in precipitation (PRE) and soil moisture content (SMC), but to what extent is not fully understood. In this study, we estimated and analyzed how hydrological variables, especially PRE at multi-temporal scales (diurnal, monthly, phenological-related, and seasonal) and SMC at different soil depths (0-20 cm, 20-40 cm, 40-60 cm, 60-80 cm), affect the carbon flux. For these aims, eddy covariance data were combined with a Vegetation Photosynthesis and Respiration Model (VPRM) to simulate the regional gross primary productivity (GPP), ecosystem respiration (Reco), and net ecosystem exchange of CO2 (NEE). Interestingly, the carbon flux showed no relationship with diurnal PRE or phenological-related PRE (precipitation in the growing season and non-growing season). However, the carbon flux was significantly related to monthly PRE and to seasonal PRE (spring + summer, autumn). The GPP, Reco, and NEE increased in spring and summer but decreased in autumn with increasing precipitation due to the combined effect of salinization in autumn. The GPP, Reco, and NEE were more responsive to SMC at 0-20 cm depth than at deeper depths due to the shorter roots of herbaceous vegetation. The NEE increased with increasing monthly PRE because soil microbes responded more quickly than plants. The NEE significantly decreased with increasing SMC in the shallow surface due to a hysteresis effect on water transport. The results of our study highlight the complex processes that determine how and to what extent PRE at multi-temporal scales and SMC at different depths affect the carbon flux response in a water-limited grassland. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    Science.gov (United States)

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually deal with each RNA-seq sample individually and ignore the fact that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.

  14. MAP-MRF-Based Super-Resolution Reconstruction Approach for Coded Aperture Compressive Temporal Imaging

    Directory of Open Access Journals (Sweden)

    Tinghua Zhang

    2018-02-01

    Full Text Available Coded Aperture Compressive Temporal Imaging (CACTI) can afford low-cost temporal super-resolution (SR), but limits are imposed by noise and compression ratio on reconstruction quality. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach based on a maximum a posteriori probability and Markov random field (MAP-MRF) model for CACTI is proposed. The proposed approach adopts a weighted 3D neighbor system (WNS) and the coordinate descent method to perform joint estimation of model parameters, to achieve the robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV) and the ℓ2,1 norm in the wavelet domain to address the minimization problem for compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficient for different regularizations and frames is resolved by the motion characteristics of pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results have verified the efficacy of our new optimization framework and the proposed reconstruction approach.

  15. Forest Structure Characterization Using Jpl's UAVSAR Multi-Baseline Polarimetric SAR Interferometry and Tomography

    Science.gov (United States)

    Neumann, Maxim; Hensley, Scott; Lavalle, Marco; Ahmed, Razi

    2013-01-01

    This paper concerns forest remote sensing using JPL's multi-baseline polarimetric interferometric UAVSAR data. It presents exemplary results and analyzes the possibilities and limitations of using SAR Tomography and Polarimetric SAR Interferometry (PolInSAR) techniques for the estimation of forest structure. Performance and error indicators for the applicability and reliability of the used multi-baseline (MB) multi-temporal (MT) PolInSAR random volume over ground (RVoG) model are discussed. Experimental results are presented based on JPL's L-band repeat-pass polarimetric interferometric UAVSAR data over temperate and tropical forest biomes in the Harvard Forest, Massachusetts, and in the La Amistad Park, Panama and Costa Rica. The results are partially compared with ground field measurements and with air-borne LVIS lidar data.

  16. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    Science.gov (United States)

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it can deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well both for low- and high-altitude flight. In this paper, we present a state estimation technique by fusing long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts, a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and the relative state measurements (IMU, visual odometry), and is derived and discussed in detail. A long-range stereo visual odometry is proposed for high-altitude MAV odometry calculation by using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurements for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for the aggressive, intermittent GPS and high-altitude MAV flight.

  17. Multi-level analyses of spatial and temporal determinants for dengue infection.

    Science.gov (United States)

    Vanwambeke, Sophie O; van Benthem, Birgit H B; Khantikul, Nardlada; Burghoorn-Maas, Chantal; Panart, Kamolwan; Oskam, Linda; Lambin, Eric F; Somboon, Pradya

    2006-01-18

    Dengue is a mosquito-borne viral infection that is now endemic in most tropical countries. In Thailand, dengue fever/dengue hemorrhagic fever is a leading cause of hospitalization and death among children. A longitudinal study among 1750 people in two rural and one urban sites in northern Thailand from 2001 to 2003 studied spatial and temporal determinants for recent dengue infection at three levels (time, individual and household). Determinants for dengue infection were measured by questionnaire, land-cover maps and GIS. IgM antibodies against dengue were detected by ELISA. Three-level multi-level analysis was used to study the risk determinants of recent dengue infection. Rates of recent dengue infection varied substantially in time from 4 to 30%, peaking in 2002. Determinants for recent dengue infection differed per site. Spatial clustering was observed, demonstrating variation in local infection patterns. Most of the variation in recent dengue infection was explained at the time-period level. Location of a person and the environment around the house (including irrigated fields and orchards) were important determinants for recent dengue infection. We showed the focal nature of asymptomatic dengue infections. The great variation of determinants for recent dengue infection in space and time should be taken into account when designing local dengue control programs.

  18. Multi-temporal SAR interferometry reveals acceleration of bridge sinking before collapse

    Science.gov (United States)

    Sousa, J. J.; Bastos, L.

    2013-03-01

    On the night of 4 March 2001, at Entre-os-Rios (Northern Portugal), the Hintze Ribeiro centennial bridge collapsed, killing 59 people traveling in a bus and three cars that were crossing the Douro River. According to the national authorities, the collapse was due to two decades of uncontrolled sand extraction, which compromised the stability of the bridge's pillars, together with underestimating the warnings from divers and technicians. In this work we do not intend to corroborate or contradict the official version of the accident's causes, but only to demonstrate the potential of Multi-Temporal Interferometric techniques for the detection and monitoring of deformations in structures such as bridges, and consequently the usefulness of the derived information in some type of early warning system to help prevent new catastrophic events. Based on the analysis of 52 ERS-1/2 scenes covering the period from May 1995 to the fatal occurrence, we were able to detect significant movements, reaching rates of 20 mm yr-1, in the section of the bridge that fell into the Douro River, which are obvious signs of the bridge's instability. These promising results demonstrate that, with the new high-resolution synthetic aperture radar satellite scenes, it is possible to develop interferometry-based methodologies for structural health monitoring.

  19. Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.

    Science.gov (United States)

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2015-05-15

    The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitate this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves. Copyright © 2015 Elsevier B.V. All rights reserved.
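
    The sketch below conveys the core idea of stochastic approximation for receptive-field estimation: each gradient step uses only a random mini-batch rather than the full data set. It uses a plain linear-Gaussian model for simplicity, not the GLM or classification-based estimators evaluated in the paper.

      # Minimal sketch: linear STRF estimation with mini-batch stochastic gradient descent.
      import numpy as np

      rng = np.random.default_rng(0)
      n_samples, n_freq, n_lag = 20000, 16, 10
      strf_true = rng.normal(0, 1, (n_freq, n_lag))
      X = rng.normal(0, 1, (n_samples, n_freq * n_lag))          # spectro-temporal stimulus patches
      y = X @ strf_true.ravel() + rng.normal(0, 0.5, n_samples)  # noisy linear response

      w = np.zeros(n_freq * n_lag)
      lr, batch, lam = 5e-3, 256, 1e-3
      for step in range(2000):
          idx = rng.integers(0, n_samples, batch)                # random subset, not the full data
          grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch + lam * w
          w -= lr * grad

      corr = np.corrcoef(w, strf_true.ravel())[0, 1]
      print(f"correlation with true STRF: {corr:.3f}")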

  20. Long-term ground deformation patterns of Bucharest using multi-temporal InSAR and multivariate dynamic analyses: a possible transpressional system?

    OpenAIRE

    Armaş, Iuliana; Mendes, Diana A.; Popa, Răzvan-Gabriel; Gheorghe, Mihaela; Popovici, Diana

    2017-01-01

    WOS:000395576200001 (Web of Science accession number) The aim of this exploratory research is to capture spatial evolution patterns in the Bucharest metropolitan area using sets of single-polarised synthetic aperture radar (SAR) satellite data and multi-temporal radar interferometry. Three sets of SAR data, acquired during the years 1992–2010 from ERS-1/-2 and ENVISAT and during 2011–2014 from TerraSAR-X satellites, were used in conjunction with the Small Baseline Subset (SBAS) and persistent scattere...

  1. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures), import and export of SD-files, and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and the import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework

  2. AUTOMATED WETLAND DELINEATION FROM MULTI-FREQUENCY AND MULTI-POLARIZED SAR IMAGES IN HIGH TEMPORAL AND SPATIAL RESOLUTION

    Directory of Open Access Journals (Sweden)

    L. Moser

    2016-06-01

    Full Text Available Water scarcity is one of the main challenges posed by the changing climate. Especially in semi-arid regions, where water reservoirs are filled during the very short rainy season but have to store enough water for the extremely long dry season, the intelligent handling of water resources is vital. This study focuses on Lac Bam in Burkina Faso, which is the largest natural lake of the country and of high importance for the local inhabitants for irrigated farming, animal watering, and the extraction of water for drinking and sanitation. With respect to the competition for water resources, an independent area-wide monitoring system is essential for acceptance by all decision makers. The following contribution introduces a weather- and illumination-independent monitoring system for automated wetland delineation with a high temporal (about two weeks) and a high spatial sampling (about five meters). The similarities and differences of the multi-frequency and multi-polarized SAR acquisitions by RADARSAT-2 and TerraSAR-X are studied. The results indicate that even basic approaches without pre-classification time series analysis or post-classification filtering are already enough to establish a monitoring system of prime importance for a whole region.

  3. Evaluating spatial- and temporal-oriented multi-dimensional visualization techniques

    Directory of Open Access Journals (Sweden)

    Chong Ho Yu

    2003-07-01

    Full Text Available Visualization tools are said to be helpful for researchers to unveil hidden patterns and relationships among variables, and also for teachers to present abstract statistical concepts and complicated data structures in a concrete manner. However, higher-dimension visualization techniques can be confusing and even misleading, especially when human-instrument interface and cognitive issues are under-applied. In this article, the efficacy of function-based, data-driven, spatial-oriented, and temporal-oriented visualization techniques is discussed based upon an extensive review. Readers can find practical implications for both research and instructional practices. For research purposes, spatial-based graphs, such as Trellis displays in S-Plus, are preferable over temporal-based displays, such as the 3D animated plot in SAS/Insight. For teaching purposes, temporal-based displays, such as the 3D animation plot in Maple, seem to have advantages over spatial-based graphs, such as the 3D triangular coordinate plot in SyStat.

  4. Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries

    NARCIS (Netherlands)

    Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.

    2012-01-01

    A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order to not damage the battery by over charging or over discharging or

  5. A real time multi-server multi-client coherent database for a new high voltage system

    International Nuclear Information System (INIS)

    Gorbics, M.; Green, M.

    1995-01-01

    A high voltage system has been designed to allow multiple users (clients) access to the database of measured values and settings. This database is actively maintained in real time for a given mainframe containing multiple modules, each having its own database. With limited CPU and memory resources, the mainframe system provides a data coherency scheme for multiple clients which (1) allows a client to determine when and what values need to be updated, (2) allows changes from one client to be detected by another client, and (3) does not depend on the mainframe system tracking client accesses.

  6. An adaptive spatio-temporal smoothing model for estimating trends and step changes in disease risk

    OpenAIRE

    Rushworth, Alastair; Lee, Duncan; Sarran, Christophe

    2014-01-01

    Statistical models used to estimate the spatio-temporal pattern in disease risk from areal unit data represent the risk surface for each time period with known covariates and a set of spatially smooth random effects. The latter act as a proxy for unmeasured spatial confounding, whose spatial structure is often characterised by a spatially smooth evolution between some pairs of adjacent areal units while other pairs exhibit large step changes. This spatial heterogeneity is not c...

  7. A Multi-Temporal Analyses of Land Surface Temperature Using Landsat-8 Data and Open Source Software: The Case Study of Modena, Italy

    Directory of Open Access Journals (Sweden)

    Tommaso Barbieri

    2018-05-01

    Full Text Available The Urban Heat Island (UHI) phenomenon, namely urban areas where the atmospheric temperature is significantly higher than in the surrounding rural areas, is currently a very well-known topic both in the scientific community and in public debates. Growing urbanization is one of the anthropic causes of UHI. The UHI phenomenon has a negative impact on the life quality of the local population (thermal discomfort, summer thermal shock, etc.), thus investigations and analyses on this topic are really useful and important for correct and sustainable urban planning; this study is included in this context. A multi-temporal analysis was performed in the municipality of Modena (Italy) to identify and estimate the Surface Urban Heat Island (SUHI), strictly correlated to the UHI phenomenon, from 2014 to 2017. For this purpose, Landsat-8 satellite images were processed with Quantum Geographic Information System (QGIS) to obtain the Land Surface Temperature (LST) and the Normalized Difference Vegetation Index (NDVI). For every pixel, LST and NDVI values of three regions of interest (ROI), i.e., Countryside, Suburbs, and City Center, were extracted and their correlations were investigated. A maximum variation of 6.4 °C in the LST values between City Center and Countryside was highlighted, confirming the presence of the SUHI phenomenon even in a medium-sized municipality like Modena. The implemented procedure demonstrates that satellite data are suitable for SUHI identification and estimation, therefore it could be a useful tool for public administration for urban planning policies.
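
    A minimal sketch of the ROI-comparison step described above, assuming the LST raster has already been derived from the Landsat-8 thermal band (e.g. in QGIS) and exported as a NumPy array together with boolean masks for the regions of interest; the array shapes, values and mask names are illustrative only.

```python
# Per-ROI statistics and SUHI intensity (City Center minus Countryside) from an LST array.
import numpy as np

def roi_stats(raster, mask):
    """Mean and standard deviation of a raster inside a boolean ROI mask."""
    vals = raster[mask & np.isfinite(raster)]
    return vals.mean(), vals.std()

def suhi_intensity(lst, masks):
    """SUHI intensity as the difference of mean LST between two ROIs."""
    city_mean, _ = roi_stats(lst, masks["city_center"])
    rural_mean, _ = roi_stats(lst, masks["countryside"])
    return city_mean - rural_mean

# Example with synthetic data standing in for the real LST raster and ROI masks:
shape = (100, 100)
lst = np.random.normal(30.0, 2.0, shape)
masks = {"city_center": np.zeros(shape, bool), "countryside": np.zeros(shape, bool)}
masks["city_center"][40:60, 40:60] = True
masks["countryside"][0:20, 0:20] = True
lst[masks["city_center"]] += 5.0                 # mimic a warmer urban core
print(f"SUHI intensity: {suhi_intensity(lst, masks):.1f} °C")
# roi_stats() could be reused in the same way on an NDVI array for the correlation step.
```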

  8. Detection of geothermal anomalies in Tengchong, Yunnan Province, China from MODIS multi-temporal night LST imagery

    Science.gov (United States)

    Li, H.; Kusky, T. M.; Peng, S.; Zhu, M.

    2012-12-01

    Thermal infrared (TIR) remote sensing is an important technique in the exploration of geothermal resources. In this study, a geothermal survey is conducted in the Tengchong area of Yunnan Province, China, using multi-temporal MODIS LST (Land Surface Temperature) data. The monthly night-time MODIS LST data of the study area from March 2000 to March 2011 were collected and analyzed. The 132-month average LST map was derived and three geothermal anomalies were identified. The findings of this study agree well with the results from relative geothermal gradient measurements. Finally, we conclude that TIR remote sensing is a cost-effective technique to detect geothermal anomalies. Combining TIR remote sensing with geological analysis and an understanding of the geothermal mechanism is an accurate and efficient approach to geothermal area detection.

  9. A method for state-of-charge estimation of Li-ion batteries based on multi-model switching strategy

    International Nuclear Information System (INIS)

    Wang, Yujie; Zhang, Chenbin; Chen, Zonghai

    2015-01-01

    Highlights: • Build a multi-model switching SOC estimation method for Li-ion batteries. • Build an improved interpretative structural modeling method for model switching. • The feedback strategy of bus delay is applied to improve the real-time performance. • The EKF method is used for SOC estimation to improve the estimation accuracy. - Abstract: Accurate state-of-charge (SOC) estimation and real-time performance are critical evaluation indexes for Li-ion battery management systems (BMS). High accuracy algorithms often take long program execution time (PET) in resource-constrained embedded application systems, which will undoubtedly decrease the time slots of other processes and thereby reduce the overall performance of the BMS. Considering resource optimization and computational load balance, this paper proposes a multi-model switching SOC estimation method for Li-ion batteries. Four typical battery models are employed to build a closed-loop SOC estimation system. The extended Kalman filter (EKF) method is employed to eliminate the effect of current noise and improve the accuracy of the SOC. Experiments under dynamic current conditions are conducted to verify the accuracy and real-time performance of the proposed method. The experimental results indicate that accurate estimation results and reasonable PET can be obtained by the proposed method.
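
    As a rough illustration of the EKF building block mentioned above (not the paper's multi-model switching system), the sketch below runs an extended Kalman filter on a deliberately simplified battery model: coulomb counting as the state equation and an assumed linear OCV-SOC curve with an ohmic resistance as the measurement equation.

```python
# Hedged EKF-SOC sketch on a toy battery model; capacity, resistance and the OCV curve
# are assumed values for illustration only.
import numpy as np

CAP_AH, R0, DT = 2.0, 0.05, 1.0             # capacity [Ah], ohmic resistance [Ohm], step [s]
ocv  = lambda soc: 3.2 + 1.0 * soc           # assumed linear OCV-SOC curve
docv = lambda soc: 1.0                       # its derivative dOCV/dSOC

def ekf_soc(current, voltage, soc0=0.9, P=1e-3, Q=1e-7, R=1e-3):
    """Estimate SOC from synchronized current (A, discharge positive) and voltage (V)."""
    soc, out = soc0, []
    for i, v in zip(current, voltage):
        soc -= i * DT / (3600.0 * CAP_AH)    # predict: coulomb counting (Jacobian = 1)
        P += Q
        H = docv(soc)                        # update: measurement v = OCV(soc) - i*R0
        K = P * H / (H * P * H + R)
        soc += K * (v - (ocv(soc) - i * R0))
        P *= (1.0 - K * H)
        out.append(soc)
    return np.array(out)

# synthetic 1 A constant-current discharge for demonstration
t = np.arange(0, 600, DT)
true_soc = 0.9 - 1.0 * t / (3600 * CAP_AH)
v_meas = ocv(true_soc) - 1.0 * R0 + np.random.normal(0, 0.005, t.size)
print(ekf_soc(np.ones_like(t), v_meas)[-1], true_soc[-1])
```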

  10. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: •SOC and capacity are dually estimated with an online adapted battery model. •Model identification and state dual estimation are fully decoupled. •Multiple timescales are used to improve estimation accuracy and stability. •The proposed method is verified with lab-scale experiments. •The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed on different timescales to improve the model accuracy and stability. Specifically, the model parameters are adapted online with vector-type recursive least squares (VRLS) to address their different variation rates. Based on the online adapted battery model, the Kalman filter (KF)-based SOC estimator and the RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both a lithium-ion battery and a vanadium redox flow battery (VRB) verify the generality of the proposed method across multiple battery chemistries. The proposed method is also compared with other existing methods in terms of computational cost to reveal its superiority for practical application.
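
    The online model identification can be illustrated with a plain recursive-least-squares (RLS) update with a forgetting factor; the regressor below is a simplified V = OCV - R0·I model with synthetic data, not the paper's VRLS formulation or battery model.

```python
# Minimal RLS sketch with forgetting factor for online parameter identification.
import numpy as np

def rls(phi_seq, y_seq, n_params, lam=0.99, delta=1e3):
    """Recursive least squares: returns the parameter trajectory theta_k."""
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)
    history = []
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, float)
        K = P @ phi / (lam + phi @ P @ phi)        # gain
        theta = theta + K * (y - phi @ theta)      # parameter update
        P = (P - np.outer(K, phi @ P)) / lam       # covariance update with forgetting
        history.append(theta.copy())
    return np.array(history)

# synthetic data: OCV = 3.7 V, R0 = 50 mOhm, current alternating between 0 and 2 A
I = np.tile([0.0, 2.0], 200)
V = 3.7 - 0.05 * I + np.random.normal(0, 0.002, I.size)
phi = np.column_stack([np.ones_like(I), -I])       # model: V = theta0 + theta1 * (-I)
est = rls(phi, V, n_params=2)
print("OCV ~ %.3f V, R0 ~ %.3f Ohm" % (est[-1, 0], est[-1, 1]))
```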

  11. Spatio-temporal Characteristics of Land Use Land Cover Change Driven by Large Scale Land Transactions in Cambodia

    Science.gov (United States)

    Ghosh, A.; Smith, J. C.; Hijmans, R. J.

    2017-12-01

    Since the mid-1990s, the Cambodian government has granted nearly 300 'Economic Land Concessions' (ELCs), occupying approximately 2.3 million ha, to foreign and domestic organizations (primarily agribusinesses). The majority of Cambodian ELC deals have been issued in areas of both relatively low population density and low agricultural productivity, dominated by smallholder production. These regions often contain highly biodiverse areas, thereby increasing the ecological cost associated with land clearing for extractive purposes. These large-scale land transactions have also resulted in substantial and rapid changes in land-use patterns and agricultural practices by smallholder farmers. In this study, we investigated the spatio-temporal characteristics of land use change associated with large-scale land transactions across Cambodia using multi-temporal, multi-resolution remote sensing data. We identified major regions of deforestation during the last two decades using the Landsat archive, global forest change data (2000-2014) and a georeferenced database of ELC deals. We then mapped the deforestation and land clearing within ELC boundaries as well as areas bordering or near ELCs to quantify the impact of ELCs on local communities. Using time series from MODIS Vegetation Indices products for the study period, we also estimated the time period over which any particular ELC deal initiated its proposed activity. We found evidence of similar patterns of land use change outside the boundaries of ELC deals, which may be associated with (i) illegal land encroachments by ELCs and/or (ii) new agricultural practices adopted by local farmers near ELC boundaries. We also detected significant time gaps between ELC deal granting dates and the initiation of land clearing for ELC purposes. Interestingly, we also found that not all designated areas for ELCs were put into effect, indicating the possible proliferation of speculative land deals. This study demonstrates the potential of remote sensing techniques

  12. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries

    International Nuclear Information System (INIS)

    Mahfouz, Z.; Verloock, L.; Joseph, W.; Tanghe, E.; Gati, A.; Wiart, J.; Lautru, D.; Hanna, V. F.; Martens, L.

    2013-01-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed down-link packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor at the weekdays are similar to those from weekend days. (authors)

  13. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    Science.gov (United States)

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor at the weekdays are similar to those from weekend days.

  14. Mapping Plastic-Mulched Farmland with Multi-Temporal Landsat-8 Data

    Directory of Open Access Journals (Sweden)

    Hasituya

    2017-06-01

    Full Text Available Using plastic mulching for farmland is booming around the world. Despite its benefit of protecting crops from unfavorable conditions and increasing crop yield, the massive use of the plastic-mulching technique causes many environmental problems. Therefore, timely and effective mapping of plastic-mulched farmland (PMF) is of great interest to policy-makers to leverage the trade-off between economic profit and adverse environmental impacts. However, it is still challenging to implement remote-sensing-based PMF mapping due to its changing spectral characteristics with the growing seasons of crops and geographic regions. In this study, we examined the potential of multi-temporal Landsat-8 imagery for mapping PMF. To this end, we gathered the information of spectra, textures, indices, and thermal features into random forest (RF) and support vector machine (SVM) algorithms in order to select the common characteristics for distinguishing PMF from other land cover types. The experiment was conducted in Jizhou, Hebei Province. The results demonstrated that the spectral features and the indices features of NDVI (normalized difference vegetation index) and GI (greenness index), and the textural feature of mean, are more important than the other features for mapping PMF in Jizhou. With that, the optimal period for mapping PMF is in April, followed by May. A combination of these two times (April and May) is better than later in the season. The highest overall, producer’s, and user’s accuracies achieved were 97.01%, 92.48%, and 96.40% in Jizhou, respectively.

  15. Estimation and analysis of the short-term variations of multi-GNSS receiver differential code biases using global ionosphere maps

    Science.gov (United States)

    Li, Min; Yuan, Yunbin; Wang, Ningbo; Liu, Teng; Chen, Yongchang

    2017-12-01

    Care should be taken to minimize the adverse impact of differential code biases (DCBs) on global navigation satellite systems (GNSS)-derived ionospheric information determinations. For the sake of convenience, satellite and receiver DCB products provided by the International GNSS Service (IGS) are treated as constants over a period of 24 h (Li et al. 2014). However, if DCB estimates show remarkable intra-day variability, the DCBs estimated as constants over a 1-day period will partially account for the ionospheric modeling error; in this case, DCBs need to be estimated over shorter time periods. Therefore, it is important to gain further insight into the short-term variation characteristics of receiver DCBs. In this contribution, the IGS combined global ionospheric maps and the German Aerospace Center (DLR)-provided satellite DCBs are used in the improved method to determine the multi-GNSS receiver DCBs with an hourly time resolution. The intra-day stability of the receiver DCBs is thereby analyzed in detail. Based on 1 month of data collected within the multi-GNSS experiment of the IGS, a good agreement is found between the resulting receiver DCB estimates and the multi-GNSS DCB products from the DLR, at a level of 0.24 ns for GPS, 0.28 ns for GLONASS, 0.28 ns for BDS, and 0.30 ns for Galileo. Although most of the receiver DCBs are relatively stable over a 1-day period, large fluctuations (more than 9 ns between two consecutive hours) within the receiver DCBs can be found. We also demonstrate the impact of the significant short-term variations in receiver DCBs on the extraction of ionospheric total electron content (TEC), at a level of 12.96 TECu (TEC unit). Compared to daily receiver DCB estimates, the hourly DCB estimates obtained in this study can reflect the short-term variations of the DCBs in more detail. The main conclusion is that preliminary analysis of characteristics of receiver DCB variations over short
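
    For reference, the geometry-free code combination that underlies this kind of receiver-DCB and TEC estimation can be written as follows (a textbook form with one common sign convention, not an equation quoted from the paper):

```latex
% Geometry-free (P1 - P2) combination relating code observations, slant TEC and the DCBs;
% sign conventions for the satellite and receiver DCBs vary between analysis centres.
\begin{align}
P_{4} = P_{1} - P_{2}
      &= 40.3\,\mathrm{STEC}\left(\frac{1}{f_{1}^{2}} - \frac{1}{f_{2}^{2}}\right)
         + c\left(\mathrm{DCB}_{r} + \mathrm{DCB}^{s}\right), \\
\mathrm{STEC} &= M(z)\,\mathrm{VTEC},
\end{align}
% where VTEC is interpolated from the global ionosphere maps, M(z) is a mapping function
% at zenith angle z, and the receiver DCB is solved epoch-wise (here hourly) once the
% satellite DCBs are fixed to the external (e.g. DLR) products.
```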

  16. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Valencia, Frank Dan

    Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies… temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronous timed reactive systems. The calculus is built upon a few basic ideas but it captures several aspects of timed systems. As tcc, ntcc… structures, robotic devices, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully abstract for a substantial fragment

  17. Measurement of traffic parameters in image sequence using spatio-temporal information

    International Nuclear Information System (INIS)

    Lee, Daeho; Park, Youngtae

    2008-01-01

    This paper proposes a novel method for measurement of traffic parameters, such as the number of passed vehicles, velocity and occupancy rate, by video image analysis. The method is based on a region classification followed by spatio-temporal image analysis. Local detection region images in traffic lanes are classified into one of four categories: the road, the vehicle, the reflection and the shadow, by using statistical and structural features. Misclassification at a frame is corrected by using temporally correlated features of vehicles in the spatio-temporal image. This capability of error correction results in the accurate estimation of traffic parameters even in high traffic congestion. Also headlight detection is employed for nighttime operation. Experimental results show that the accuracy is more than 94% in our test database of diverse operating conditions such as daytime, shadowy daytime, highway, urban way, rural way, rainy day, snowy day, dusk and nighttime. The average processing time is 30 ms per frame when four traffic lanes are processed, and real-time operation could be realized while ensuring robust detection performance even for high-speed vehicles up to 150 km h⁻¹.

  18. Evaluation of outbreak detection performance using multi-stream syndromic surveillance for influenza-like illness in rural Hubei Province, China: a temporal simulation model based on healthcare-seeking behaviors.

    Directory of Open Access Journals (Sweden)

    Yunzhou Fan

    Full Text Available BACKGROUND: Syndromic surveillance promotes the early detection of disease outbreaks. Although syndromic surveillance has increased in developing countries, performance on outbreak detection, particularly in cases of multi-stream surveillance, has scarcely been evaluated in rural areas. OBJECTIVE: This study introduces a temporal simulation model based on healthcare-seeking behaviors to evaluate the performance of multi-stream syndromic surveillance for influenza-like illness. METHODS: Data were obtained in six towns of rural Hubei Province, China, from April 2012 to June 2013. A Susceptible-Exposed-Infectious-Recovered model generated 27 scenarios of simulated influenza A (H1N1) outbreaks, which were converted into corresponding simulated syndromic datasets through the healthcare-behaviors model. We then superimposed the converted syndromic datasets onto the baselines obtained to create the testing datasets. Outbreak detection performance of single-stream surveillance of clinic visits, over-the-counter drug purchase frequency and school absenteeism, and of multi-stream surveillance of their combinations, was evaluated using receiver operating characteristic curves and activity monitoring operation curves. RESULTS: In the six towns examined, clinic visit surveillance and school absenteeism surveillance exhibited superior outbreak detection performance compared with over-the-counter drug purchase frequency surveillance; the performance of multi-stream surveillance was preferable to single-stream surveillance, particularly at low specificity (Sp < 90%). CONCLUSIONS: The temporal simulation model based on healthcare-seeking behaviors offers an accessible method for evaluating the performance of multi-stream surveillance.
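
    A minimal sketch of the kind of SEIR generator described in the Methods, with invented parameter values; a healthcare-seeking filter (e.g. a binomial thinning into clinic visits, drug purchases and absenteeism) would then convert the simulated cases into syndromic counts before they are superimposed on the observed baselines.

```python
# Deterministic discrete-time SEIR simulator; all parameters are placeholders.
import numpy as np

def seir(n_days, N, beta, sigma, gamma, e0=5):
    """Return daily new symptomatic cases from a simple SEIR model."""
    S, E, I, R = N - e0, float(e0), 0.0, 0.0
    new_cases = []
    for _ in range(n_days):
        infections = beta * S * I / N      # S -> E
        onsets     = sigma * E             # E -> I
        recoveries = gamma * I             # I -> R
        S -= infections
        E += infections - onsets
        I += onsets - recoveries
        R += recoveries
        new_cases.append(onsets)
    return np.array(new_cases)

# e.g. R0 ~ 1.5 with a 2-day latent and 3-day infectious period (assumed values)
cases = seir(n_days=120, N=50_000, beta=0.5, sigma=1 / 2, gamma=1 / 3)
# A healthcare-seeking filter could thin these cases into syndromic streams, e.g.
# clinic_visits ~ Binomial(round(cases), p_visit), before adding them to baselines.
print(int(cases.max()), int(cases.argmax()))
```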

  19. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Directory of Open Access Journals (Sweden)

    Qian Li

    Full Text Available BACKGROUND: Traditional virtual screening method pays more attention on predicted binding affinity between drug molecule and target related to a certain disease instead of phenotypic data of drug molecule against disease system, as is often less effective on discovery of the drug which is used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and system biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weightiness of the different target in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. METHODOLOGY: We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From results of network efficiency calculation for human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological matter in the human clotting cascade system. Furthermore, the method which combined network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a serial of argatroban intermediates and eight natural products respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network deficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. CONCLUSIONS: This article proposes a network-based multi-target computational estimation

  20. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Science.gov (United States)

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening method pays more attention on predicted binding affinity between drug molecule and target related to a certain disease instead of phenotypic data of drug molecule against disease system, as is often less effective on discovery of the drug which is used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and system biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weightiness of the different target in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From results of network efficiency calculation for human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological matter in the human clotting cascade system. Furthermore, the method which combined network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a serial of argatroban intermediates and eight natural products respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network deficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by
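
    The network-efficiency part of the approach can be sketched as follows: rank the nodes of a pathway graph by how much the global efficiency drops when each node is removed. The graph below is a toy placeholder, not the actual human clotting cascade used in the study.

```python
# Rank node "fragility" by the decrease in global efficiency after removal.
import networkx as nx

def fragility_ranking(G):
    """Return nodes sorted by the drop in global efficiency caused by their removal."""
    base = nx.global_efficiency(G)
    scores = {}
    for node in G.nodes():
        H = G.copy()
        H.remove_node(node)
        scores[node] = base - nx.global_efficiency(H)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# toy reaction network (placeholder topology, not the real cascade)
G = nx.Graph([("XII", "XI"), ("XI", "IX"), ("IX", "X"), ("VIII", "X"),
              ("X", "II"), ("II", "I"), ("VII", "X"), ("II", "XIII")])
for node, drop in fragility_ranking(G)[:3]:
    print(f"{node}: efficiency drop {drop:.3f}")
# Docking scores against the most fragile targets could then be weighted by these drops.
```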

  1. Data pre-processing for database marketing

    OpenAIRE

    Pinto, Filipe; Santos, Manuel Filipe; Cortez, Paulo; Quintela, Hélder

    2004-01-01

    To increase effectiveness in their marketing and Customer Relationship Management activities, many organizations are adopting strategies of Database Marketing (DBM). Nowadays, DBM faces new challenges in business knowledge since current strategies are mainly approached by classical statistical inference, which may fail when complex, multi-dimensional and incomplete data is available. An alternative is to use Knowledge Discovery from Databases (KDD), which aims at automatic extraction of useful p...

  2. Analysis of Landslide Kinematics using Multi-temporal UAV Imagery, La Honda, California

    Science.gov (United States)

    Carey, J.; Pickering, A.; Prentice, C. S.; Pinter, N.; DeLong, S.

    2017-12-01

    High-resolution topographic data are vital to studies of earth-surface processes. The combination of unmanned aerial vehicle (UAV) photography and structure-from-motion (SfM) digital photogrammetry provide a quickly deployable and cost-effective method for monitoring geomorphic change and landscape evolution. We acquired imagery of an active landslide in La Honda, California using a GPS-enabled quadcopter UAV with a 12.4 megapixel camera. Deep-seated landslides were previously documented in this region during the winter of 1997-98, with movement recurring and the landslide expanding during the winters of 2004-05 and 2005-06. This study documents the kinematics of a new and separate landslide immediately adjacent to the previous ones, throughout the winter of 2016-17. The roughly triangular-shaped, deep-seated landslide covers an area of approximately 10,000 m2. The area is underlain by SW dipping late Miocene to Pliocene sandstones and mudstones. A 3 m high head scarp stretches along the northeast portion of the slide for approximately 100 m. Internally, the direction of movement is towards the southwest, with two prominent NW-SE striking extensional grabens and numerous tension cracks across the landslide body. Here we calculate displaced landslide volumes and surface displacements from multi-temporal UAV surveys. Photogrammetric reconstruction of UAV/SfM-derived point clouds allowed creation of six digital elevation models (DEMs) with spatial resolutions ranging from 3 to 15 cm per pixel. We derived displacement magnitude, direction and rate by comparing multiple generations of DEMs and orthophotos, and estimated displaced volumes by differencing subsequent DEMs. We then correlated displacements with total rainfall and rainfall intensity measurements. Detailed geomorphic maps identify major landslide features, documenting dominant surface processes. Additionally, we compare the accuracy of the UAV/SfM-derived DEM with a DEM sourced from a synchronous terrestrial
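
    The DEM-differencing step can be sketched as follows, assuming two co-registered SfM-derived DEMs are available as NumPy arrays; the cell size, noise threshold and synthetic test surfaces are placeholders rather than the survey's values.

```python
# Cut/fill volumes from a DEM of difference (DoD); NaNs would mask no-data cells.
import numpy as np

def dod_volumes(dem_old, dem_new, cell_size, threshold=0.05):
    """Cut and fill volumes (m^3) from two co-registered DEMs; ignore |dz| < threshold (m)."""
    dz = dem_new - dem_old
    dz = np.where(np.abs(dz) < threshold, 0.0, dz)   # suppress change below detection level
    cell_area = cell_size ** 2
    fill = np.nansum(np.where(dz > 0, dz, 0.0)) * cell_area
    cut  = np.nansum(np.where(dz < 0, -dz, 0.0)) * cell_area
    return cut, fill

dem_t0 = np.random.normal(120.0, 0.5, (200, 200))
dem_t1 = dem_t0.copy()
dem_t1[80:120, 80:120] -= 0.6                        # mimic a graben dropping 0.6 m
cut, fill = dod_volumes(dem_t0, dem_t1, cell_size=0.10)
print(f"cut: {cut:.1f} m^3, fill: {fill:.1f} m^3")
```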

  3. Evaluated and estimated solubility of some elements for performance assessment of geological disposal of high-level radioactive waste using updated version of thermodynamic database

    International Nuclear Information System (INIS)

    Kitamura, Akira; Doi, Reisuke; Yoshida, Yasushi

    2011-01-01

    Japan Atomic Energy Agency (JAEA) established the thermodynamic database (JAEA-TDB) for performance assessment of geological disposal of high-level radioactive waste (HLW) and TRU waste. Twenty-five elements which were important for the performance assessment of geological disposal were selected for the database. JAEA-TDB enhanced reliability of evaluation and estimation of their solubility through selecting the latest and the most reliable thermodynamic data at present. We evaluated and estimated solubility of the 25 elements in the simulated porewaters established in the 'Second Progress Report for Safety Assessment of Geological Disposal of HLW in Japan' using the JAEA-TDB and compared with those using the previous thermodynamic database (JNC-TDB). It was found that most of the evaluated and estimated solubility values were not changed drastically, but the solubility and speciation of dominant aqueous species for some elements using the JAEA-TDB were different from those using the JNC-TDB. We discussed about how to provide reliable solubility values for the performance assessment. (author)

  4. Contributions to Logical Database Design

    Directory of Open Access Journals (Sweden)

    Vitalie COTELEA

    2012-01-01

    Full Text Available This paper treats the problems arising at the stage of logical database design. It comprises a synthesis of the most common inference models for functional dependencies, deals with the problems of building covers for sets of functional dependencies, provides a synthesis of normal forms, presents trends regarding normalization algorithms and gives their temporal complexity. In addition, it presents a summary of the best-known key-search algorithms and deals with issues of analysis and testing of relational schemes. It also summarizes and compares the different features of recognition of acyclic database schemas.
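
    The attribute-closure computation that underlies cover construction and key search can be sketched as follows (the standard textbook algorithm, not code taken from the paper):

```python
# Closure of an attribute set under functional dependencies, and a superkey test.

def closure(attrs, fds):
    """Closure of a set of attributes under functional dependencies given as (lhs, rhs) pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def is_superkey(candidate, all_attrs, fds):
    """A candidate is a (super)key if its closure covers every attribute of the scheme."""
    return closure(candidate, fds) == set(all_attrs)

fds = [("A", "B"), ("B", "C"), ("CD", "E")]
print(closure({"A", "D"}, fds))              # {'A', 'B', 'C', 'D', 'E'}
print(is_superkey({"A", "D"}, "ABCDE", fds))  # True
```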

  5. Development of IAEA nuclear reaction databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Zerkin, V.; Trkov, A. [International Atomic Energy Agency, Dept. of Nuclear Sciences and Applications, Vienna (Austria)

    2008-07-01

    From mid-2004 onwards, the major nuclear reaction databases (EXFOR, CINDA and ENDF) and services (Web and CD-ROM retrieval systems and specialized applications) have been functioning within a modern computing environment as multi-platform software, working under several operating systems with relational databases. Subsequent work at the IAEA has focused on three areas of development: revision and extension of the contents of the databases; extension and improvement of the functionality and integrity of the retrieval systems; and development of software for database maintenance and system deployment. (authors)

  6. Linking Multiple Databases: Term Project Using "Sentences" DBMS.

    Science.gov (United States)

    King, Ronald S.; Rainwater, Stephen B.

    This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…

  7. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    International Nuclear Information System (INIS)

    Shi Siqi; Zhao Yan; Wu Qu; Gao Jian; Liu Yue; Ju Wangwei; Ouyang Chuying; Xiao Ruijuan

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. (topical review)

  8. [Estimation of desert vegetation coverage based on multi-source remote sensing data].

    Science.gov (United States)

    Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui

    2012-12-01

    Taking the lower reaches of the Tarim River in Xinjiang of Northwest China as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, with the precisions of different estimation methods and models compared. The results showed that with increasing spatial resolution of the remote sensing data, the precision of the estimation models increased. The estimation precision of the models based on the high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed the change patterns of the estimation precision of desert vegetation coverage based on different spatial resolution remote sensing data, and realized the quantitative conversion of the parameters and scales among the high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which would provide direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for the ecological restoration in the study area.
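
    The abstract does not state which estimator links the remote sensing data to vegetation coverage; one commonly used option is the dimidiate pixel model, sketched below with assumed bare-soil and full-vegetation NDVI endmembers.

```python
# Dimidiate pixel model: fractional vegetation cover from NDVI with assumed endmembers.
import numpy as np

def fractional_cover(ndvi, ndvi_soil=0.05, ndvi_veg=0.80):
    """Fractional vegetation cover in [0, 1] from an NDVI array."""
    fc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fc, 0.0, 1.0)

ndvi = np.array([0.02, 0.15, 0.40, 0.85])
print(fractional_cover(ndvi))   # -> [0.    0.133 0.467 1.   ]
```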

  9. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej; Kuźnik, Krzysztof M.; Paszyński, Maciej R.; Calo, Victor M.; Pardo, D.

    2014-01-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version significantly reduces the cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.

  10. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej

    2014-06-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(N p^2) for two dimensional problems, and O(N^(4/3) p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(N p^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version significantly reduces the cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
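
    The complexity expressions quoted above can be written out as small functions so the parallel and sequential costs can be compared for given N and p (constants are dropped, so only the asymptotic ratio is meaningful):

```python
# Asymptotic cost formulas from the abstract, compared for example values of N and p.
from math import log

def parallel_cost(N, p, dim):
    return {1: p**2 * log(N / p), 2: N * p**2, 3: N**(4 / 3) * p**2}[dim]

def sequential_cost(N, p, dim):
    return {1: N * p**2, 2: N**1.5 * p**3, 3: N**2 * p**3}[dim]

N, p = 10**6, 3
for dim in (1, 2, 3):
    ratio = sequential_cost(N, p, dim) / parallel_cost(N, p, dim)
    print(f"{dim}D: sequential/parallel ~ {ratio:.2e}")
```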

  11. Multi-scale approach to the environmental factors effects on spatio-temporal variability of Chironomus salinarius (Diptera: Chironomidae) in a French coastal lagoon

    Science.gov (United States)

    Cartier, V.; Claret, C.; Garnier, R.; Fayolle, S.; Franquet, E.

    2010-03-01

    The complexity of the relationships between environmental factors and organisms can be revealed by sampling designs which consider the contribution to variability of different temporal and spatial scales, compared to total variability. From a management perspective, a multi-scale approach can lead to time-saving. Identifying environmental patterns that help maintain patchy distribution is fundamental in studying coastal lagoons, transition zones between continental and marine waters characterised by great environmental variability on spatial and temporal scales. They often present organic enrichment inducing decreased species richness and increased densities of opportunist species like Chironomus salinarius, a common species that tends to swarm and thus constitutes a nuisance for human populations. This species is dominant in the Bolmon lagoon, a French Mediterranean coastal lagoon under eutrophication. Our objective was to quantify variability due to both spatial and temporal scales and identify the contribution of different environmental factors to this variability. The population of C. salinarius was sampled from June 2007 to June 2008 every two months at 12 sites located in two areas of the Bolmon lagoon, at two different depths, with three sites per area-depth combination. Environmental factors (temperature, dissolved oxygen both in sediment and under water surface, sediment organic matter content and grain size) and microbial activities (i.e. hydrolase activities) were also considered as explanatory factors of chironomid densities and distribution. ANOVA analysis reveals significant spatial differences regarding the distribution of chironomid larvae for the area and the depth scales and their interaction. The spatial effect is also revealed for dissolved oxygen (water), salinity and fine particles (area scale), and for water column depth. All factors but water column depth show a temporal effect. Spearman's correlations highlight the seasonal effect

  12. Development of temperature profile sensor at high temporal and spatial resolution

    International Nuclear Information System (INIS)

    Takiguchi, Hiroki; Furuya, Masahiro; Arai, Takahiro

    2017-01-01

    In order to quantify thermo-physical flow fields for industrial applications such as nuclear and chemical reactors, high temporal and spatial resolution measurements of temperature, pressure, phase velocity, viscosity and so on are required to validate computational fluid dynamics (CFD) and subchannel analyses. The paper proposes a novel temperature profile sensor, which can acquire the temperature distribution in water at high temporal (a millisecond) and spatial (millimeter) resolutions. The devised sensor acquires the electric conductance between transmitter and receiver wires, which is a function of temperature. The sensor comprises a wire-mesh structure for multipoint, simultaneous temperature measurement in water, so that three-dimensional temperature distributions can be detected at flexible resolutions. For a demonstration of the principle, the temperature profile in water was estimated according to a pre-determined calibration line relating temperature to time-averaged impedance. The 16×16 grid sensor visualized the fast, multi-dimensional mixing process of a hot water jet into a cold water pool. (author)

  13. ON TEMPORAL VARIATIONS OF THE MULTI-TeV COSMIC RAY ANISOTROPY USING THE TIBET III AIR SHOWER ARRAY

    International Nuclear Information System (INIS)

    Amenomori, M.; Bi, X. J.; Ding, L. K.; Fan, C.; Feng Zhaoyang; Gou, Q. B.; He, H. H.; Chen, D.; Cui, S. W.; Danzengluobu; Ding, X. H.; Guo, H. W.; Hu Haibing; Feng, C. F.; He, M.; Feng, Z. Y.; Gao, X. Y.; Geng, Q. X.; Hibino, K.; Hotta, N.

    2010-01-01

    We analyze the large-scale two-dimensional sidereal anisotropy of multi-TeV cosmic rays (CRs) with the Tibet Air Shower Array, using data taken from November 1999 to December 2008. To explore temporal variations of the anisotropy, the data set is divided into nine intervals, each with a time span of about one year. The sidereal anisotropy, with a magnitude of about 0.1%, appears fairly stable from year to year over the entire observation period of nine years. This indicates that the anisotropy of TeV Galactic CRs remains insensitive to solar activity, since the observation period covers more than half of the 23rd solar cycle.

  14. Estimating spatio-temporal dynamics of stream total phosphate concentration by soft computing techniques.

    Science.gov (United States)

    Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan

    2016-08-15

    This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), which is an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting the TP concentrations along a river simultaneously. Two different types of artificial neural network (BPNN-static neural network; NARX network-dynamic neural network) are constructed in modeling the dynamic system. The Dahan River in Taiwan is used as a study case, where ten-year seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentration at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system for missing, hazardous or costly data of interest. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Modelling of oil spill frequency, leak sources and contamination probability in the Caspian Sea using multi-temporal SAR images 2006–2010 and stochastic modelling

    Directory of Open Access Journals (Sweden)

    Emil Bayramov

    2016-05-01

    Full Text Available The main goal of this research was to detect oil spills, to determine the oil spill frequencies and to approximate oil leak sources around the Oil Rocks Settlement, the Chilov and Pirallahi Islands in the Caspian Sea using 136 multi-temporal ENVISAT Advanced Synthetic Aperture Radar Wide Swath Medium Resolution images acquired during 2006–2010. The following oil spill frequencies were observed around the Oil Rocks Settlement, the Chilov and Pirallahi Islands: 2–10 (3471.04 sq km), 11–20 (971.66 sq km), 21–50 (692.44 sq km), 51–128 (191.38 sq km). The most critical oil leak sources with the frequency range of 41–128 were observed at the Oil Rocks Settlement. The exponential regression analysis between wind speeds and oil slick areas detected from 136 multi-temporal ENVISAT images revealed the regression coefficient equal to 63%. The regression model showed that larger oil spill areas were observed with decreasing wind speeds. The spatiotemporal patterns of currents in the Caspian Sea explained the multi-directional spatial distribution of oil spills around the Oil Rocks Settlement, the Chilov and Pirallahi Islands. The linear regression analysis between detected oil spill frequencies and the oil contamination probability predicted by the stochastic model showed a positive trend with a regression coefficient of 30%.
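
    The exponential regression between wind speed and slick area can be sketched as a log-linear least-squares fit; the wind speeds and areas below are synthetic placeholders, not the study's measurements.

```python
# Fit area ~ A * exp(b * wind) by least squares on ln(area); negative b means
# smaller detected slick areas at higher wind speeds, as described above.
import numpy as np

def fit_exponential(wind, area):
    """Return (A, b, R^2) for area ~ A * exp(b * wind)."""
    y = np.log(area)
    b, a = np.polyfit(wind, y, 1)            # slope, intercept of the log-linear fit
    y_hat = a + b * wind
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return np.exp(a), b, r2

wind = np.random.uniform(2.0, 12.0, 136)                    # m/s, one value per scene
area = 5.0 * np.exp(-0.25 * wind) * np.random.lognormal(0, 0.4, wind.size)
A, b, r2 = fit_exponential(wind, area)
print(f"area ~ {A:.2f} * exp({b:.2f} * wind),  R^2 = {r2:.2f}")
```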

  16. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    Science.gov (United States)

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers of liking analysis was extended with a time dimension into temporal drivers of liking (TDOL) based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes to consumer overall liking, include Choquet integral and fuzzy measure in the multi-criteria decision-making, and linear regression based on variance decomposition. Dynamics of TDOL, i.e., the derivatives of the relative importance functional curves were also explored. Well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
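
    The Choquet-integral aggregation mentioned above can be sketched as follows; the criteria names and the fuzzy-measure values are invented for illustration and do not come from the fruit-chew study.

```python
# Discrete Choquet integral of attribute scores with respect to a fuzzy measure mu.
from itertools import combinations

def choquet(x, mu):
    """Choquet integral of scores x (dict criterion -> value) w.r.t. fuzzy measure mu
    (dict frozenset -> value, with mu[empty set] = 0 and mu[all criteria] = 1)."""
    order = sorted(x, key=x.get)                 # criteria by increasing score
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        upper = frozenset(order[i:])             # criteria whose score is >= x[c]
        total += (x[c] - prev) * mu[upper]
        prev = x[c]
    return total

criteria = ("sweetness", "flavor", "texture")
mu = {frozenset(): 0.0, frozenset(criteria): 1.0}
mu.update({frozenset(s): 0.4 for s in combinations(criteria, 1)})
mu.update({frozenset(s): 0.7 for s in combinations(criteria, 2)})
print(choquet({"sweetness": 0.6, "flavor": 0.8, "texture": 0.5}, mu))   # 0.65
```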

  17. Extending multi-tenant architectures: a database model for a multi-target support in SaaS applications

    Science.gov (United States)

    Rico, Antonio; Noguera, Manuel; Garrido, José Luis; Benghazi, Kawtar; Barjis, Joseph

    2016-05-01

    Multi-tenant architectures (MTAs) are considered a cornerstone in the success of Software as a Service as a new application distribution formula. Multi-tenancy allows multiple customers (i.e. tenants) to be consolidated into the same operational system. This way, tenants run and share the same application instance as well as costs, which are significantly reduced. Functional needs vary from one tenant to another; either companies from different sectors run different types of applications or, although deploying the same functionality, they do differ in the extent of their complexity. In any case, MTA leaves one major concern regarding the companies' data, their privacy and security, which requires special attention to the data layer. In this article, we propose an extended data model that enhances traditional MTAs in respect of this concern. This extension - called multi-target - allows MT applications to host, manage and serve multiple functionalities within the same multi-tenant (MT) environment. The practical deployment of this approach will allow SaaS vendors to target multiple markets or address different levels of functional complexity and yet commercialise just one single MT application. The applicability of the approach is demonstrated via a case study of a real multi-tenancy multi-target (MT2) implementation, called Globalgest.

  18. Building a Spatial Database for Romanian Archaeological Sites

    Directory of Open Access Journals (Sweden)

    Aura-Mihaela MOCANU

    2011-03-01

    Full Text Available Spatial databases are a new technology in the database systems which allow storing, retrieving and maintaining geospatial data. This paper describes the steps which we have followed to model, design and develop a spatial database for Romanian archaeological sites and their assemblies. The system analysis was made using the well known Entity-Relationship model; the system design included the conceptual, the external and the internal schemas design, and the system development meant developing the needed database objects and programs. The designed database allows users to load vector geospatial data about the archaeological sites in two distinct spatial reference systems WGS84 and STEREO70, temporal data about the historical periods and cultures, other descriptive data and documents as references to the archaeological objects.

  19. Multi-level analyses of spatial and temporal determinants for dengue infection

    Directory of Open Access Journals (Sweden)

    Oskam Linda

    2006-01-01

    Full Text Available Abstract Background Dengue is a mosquito-borne viral infection that is now endemic in most tropical countries. In Thailand, dengue fever/dengue hemorrhagic fever is a leading cause of hospitalization and death among children. A longitudinal study among 1750 people in two rural and one urban sites in northern Thailand from 2001 to 2003 studied spatial and temporal determinants for recent dengue infection at three levels (time, individual and household. Methods Determinants for dengue infection were measured by questionnaire, land-cover maps and GIS. IgM antibodies against dengue were detected by ELISA. Three-level multi-level analysis was used to study the risk determinants of recent dengue infection. Results Rates of recent dengue infection varied substantially in time from 4 to 30%, peaking in 2002. Determinants for recent dengue infection differed per site. Spatial clustering was observed, demonstrating variation in local infection patterns. Most of the variation in recent dengue infection was explained at the time-period level. Location of a person and the environment around the house (including irrigated fields and orchards were important determinants for recent dengue infection. Conclusion We showed the focal nature of asymptomatic dengue infections. The great variation of determinants for recent dengue infection in space and time should be taken into account when designing local dengue control programs.

  20. Object-Based Land Use Classification of Agricultural Land by Coupling Multi-Temporal Spectral Characteristics and Phenological Events in Germany

    Science.gov (United States)

    Knoefel, Patrick; Loew, Fabian; Conrad, Christopher

    2015-04-01

    Crop maps based on classification of remotely sensed data are receiving increased attention in agricultural management. This calls for more detailed knowledge about the reliability of such spatial information. However, classification of agricultural land use is often limited by the high spectral similarity of the studied crop types. Moreover, spatially and temporally varying agro-ecological conditions can introduce confusion in crop mapping. Classification errors in crop maps in turn may have influence on model outputs, like agricultural production monitoring. One major goal of the PhenoS project ("Phenological structuring to determine optimal acquisition dates for Sentinel-2 data for field crop classification") is the detection of optimal phenological time windows for land cover classification purposes. Since many crop species are spectrally highly similar, accurate classification requires the right selection of satellite images for a certain classification task. In the course of one growing season, phenological phases exist during which crops are separable with higher accuracies. For this purpose, coupling of multi-temporal spectral characteristics and phenological events is promising. The focus of this study is set on the separation of spectrally similar cereal crops like winter wheat, barley, and rye at two test sites in Germany called "Harz/Central German Lowland" and "Demmin". This study uses object-based random forest (RF) classification to investigate the impact of image acquisition frequency and timing on crop classification uncertainty by permuting all possible combinations of available RapidEye time series recorded at the test sites between 2010 and 2014. The permutations were applied to different segmentation parameters. Classification uncertainty was then assessed and analysed, based on the probabilistic soft output from the RF algorithm at the per-field basis. From this soft output, entropy was calculated as a spatial measure of classification uncertainty
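
    The per-object uncertainty measure described above can be sketched with a random forest's soft (class-probability) output and its entropy; the features and labels below are synthetic stand-ins for the RapidEye object features.

```python
# Per-object classification uncertainty as the entropy of the random forest's soft output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                 # per-object multi-temporal features
y = rng.integers(0, 3, size=300)              # crop classes (e.g. wheat, barley, rye)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
proba = rf.predict_proba(X)                   # soft output, shape (n_objects, n_classes)

eps = 1e-12
entropy = -np.sum(proba * np.log(proba + eps), axis=1)   # high entropy = uncertain object
print(f"mean entropy: {entropy.mean():.3f}, most uncertain object: {entropy.argmax()}")
```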

  1. A multi-centre clinical follow-up database as a systematic approach to the evaluation of mid- and long-term health consequences in Chernobyl acute radiation syndrome patients

    International Nuclear Information System (INIS)

    Fischer, B.; Weiss, M.; Fliedner, T.M.; Belyi, D.A.; Kovalenko, A.N.; Bebeshko, V.G.; Nadejina, N.M.; Galstian, I.A.

    1996-01-01

    This paper describes scope, design and first results of a multi-centre follow-up database that has been established for the evaluation of mid- and long-term health consequences of acute radiation syndrome (ARS) survivors. After the Chernobyl accident on 26 April 1986, 237 cases with suspected acute radiation syndrome have been reported. For 134 of these cases the diagnosis of ARS was confirmed in a consensus conference three years after the accident. Nearly all survivors underwent regular follow-up examinations in two specialized centres in Kiev and in Moscow. In collaboration with these centres we established a multi-centre clinical follow-up database that records the results of the follow-up examinations in a standardized schema. This database is an integral part of a five step approach to patient evaluation and aims at a comprehensive base for scientific analysis of the mid- and long-term consequences of accidental ionizing radiation. It will allow for a dynamic view on the development of the health status of individuals and groups of patients as well as the identification of critical organ systems that need early support, and an improvement of acute and follow-up treatment protocols for radiation accident victims

  2. PENERAPAN ARSITEKTUR MULTI-TIER DENGAN DCOM DALAM SUATU SISTEM INFORMASI

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Implementing an information system with a two-tier architecture leads to shortcomings in several critical areas: component reuse, scalability, maintenance, and data security. The multi-tiered client/server architecture provides a good way to solve these problems using DCOM technology. The software was built using Delphi 4 Client/Server Suite and Microsoft SQL Server 7.0 as the database server software. The multi-tiered application is partitioned into three parts. The first is the client application, which provides presentation services; the second is the server application, which provides application services; and the third is the database server, which provides database services. This multi-tiered application software can be built in two models: the Client/Server Windows model and the Client/Server Web model with ActiveX Form technology. This research found that building a multi-tiered architecture with DCOM technology provides many benefits, such as centralized application logic in the middle tier, thin client applications, distribution of the data-processing load across several machines, increased security through the ability to hide data, and fast maintenance without installing database drivers on every client. Abstract in Bahasa Indonesia: Implementing an information system with a two-tier architecture has many weaknesses: component reuse, scalability, maintenance, and data security. A multi-tier client/server architecture can solve these problems with DCOM technology. The software can be built using Delphi 4 Client/Server Suite and Microsoft SQL Server 7.0 as the database software. This multi-tier application is divided into three partitions: the client application provides presentation services, the application server provides application services, and the database application provides database services. The multi-tier application software can be built in two models, namely the client

  3. Multi-Temporal Land Cover Classification with Long Short-Term Memory Neural Networks

    Science.gov (United States)

    Rußwurm, M.; Körner, M.

    2017-05-01

    Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observation at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and can thus not be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. After extensive research has been conducted on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach to utilize these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN), with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.

  4. MULTI-TEMPORAL LAND COVER CLASSIFICATION WITH LONG SHORT-TERM MEMORY NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    M. Rußwurm

    2017-05-01

    Full Text Available Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observation at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and can thus not be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. After extensive research has been conducted on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach to utilize these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN), with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.
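
    A minimal sketch of an LSTM classifier over per-parcel time series (one spectral feature vector per acquisition date); the layer sizes, sequence length and class count are placeholders and do not reproduce the authors' architecture.

```python
# PyTorch LSTM over multi-temporal spectral sequences, producing class logits per parcel.
import torch
import torch.nn as nn

class CropLSTM(nn.Module):
    def __init__(self, n_bands=10, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_bands, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time steps, bands)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden) - last hidden state
        return self.head(h_n[-1])          # class logits per sequence

model = CropLSTM()
x = torch.randn(32, 26, 10)                # 32 parcels, 26 acquisition dates, 10 bands
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (32,)))
loss.backward()                            # an optimizer step would follow in training
print(logits.shape)                        # torch.Size([32, 5])
```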

  5. A novel multi-wavelength procedure for blood pressure estimation using opto-physiological sensor at peripheral arteries and capillaries

    Science.gov (United States)

    Scardulla, Francesco; Hu, Sijung; D'Acquisto, Leonardo; Pasta, Salvatore; Barrett, Laura; Blanos, Panagiotis; Yan, Liangwen

    2018-02-01

    In this study, the Carelight multi-wavelength opto-electronic patch sensor (OEPS) was adopted to assess the effectiveness of a new approach for estimating the systolic blood pressure (SBP) through changes in the morphology of the OEPS signal. Specifically, the SBP was estimated by changing the pressure exerted by an inflatable cuff placed around the left upper arm. Pressure acquisitions were performed both with the gold standard (an electronic sphygmomanometer) and with the Carelight sensor (experimental procedure) on subjects from a multiethnic cohort (aged 28 +/- 7 years). The OEPS sensor was applied together with a manual inflatable cuff, which was inflated in 10 mmHg increments to slightly above the SBP and subsequently deflated in 10 mmHg steps until full deflation. The OEPS signals were captured using four wavelength illumination sources (green 525 nm, orange 595 nm, red 650 nm and IR 870 nm) at three different measuring sites, namely the forefinger, the radial artery and the wrist. The implemented algorithm identifies the instant at which the SBP is reached and the signal is lost because the vessel is completely occluded; similarly, it detects the resumption of the signal when the external pressure drops below the SBP. The findings demonstrated a good correlation between the variation of the pressure and the corresponding OEPS signal, with the most accurate result achieved at the fingertip across all wavelengths (temporal identification error of 8.07%). Further studies on a cohort of patients diagnosed with hyper- or hypotension will improve the clinical relevance of the approach, with the aim of developing a wearable blood-pressure device.

  6. Critical Care Health Informatics Collaborative (CCHIC): Data, tools and methods for reproducible research: A multi-centre UK intensive care database.

    Science.gov (United States)

    Harris, Steve; Shi, Sinan; Brealey, David; MacCallum, Niall S; Denaxas, Spiros; Perez-Suarez, David; Ercole, Ari; Watkinson, Peter; Jones, Andrew; Ashworth, Simon; Beale, Richard; Young, Duncan; Brett, Stephen; Singer, Mervyn

    2018-04-01

    To build and curate a linkable multi-centre database of high resolution longitudinal electronic health records (EHR) from adult Intensive Care Units (ICU). To develop a set of open-source tools to make these data 'research ready' while protecting patients' privacy, with a particular focus on anonymisation. We developed a scalable EHR processing pipeline for extracting, linking, normalising, curating and anonymising EHR data. Patient and public involvement was sought from the outset, and approval to hold these data was granted by the NHS Health Research Authority's Confidentiality Advisory Group (CAG). The data are held in a certified Data Safe Haven. We followed sustainable software development principles throughout, and defined and populated a common data model that links to other clinical areas. Longitudinal EHR data were loaded into the CCHIC database from eleven adult ICUs at 5 UK teaching hospitals. From January 2014 to January 2017, this amounted to 21,930 admissions (18,074 unique patients). Typical admissions have 70 data-items pertaining to admission and discharge, and a median of 1030 (IQR 481-2335) time-varying measures. Training datasets were made available through virtual machine images emulating the data processing environment. An open source R package, cleanEHR, was developed and released that transforms the data into a square table readily analysable by most statistical packages. A simple language agnostic configuration file will allow the user to select and clean variables, and impute missing data. An audit trail makes clear the provenance of the data at all times. Making health care data available for research is problematic. CCHIC is a unique multi-centre longitudinal and linkable resource that prioritises patient privacy through the highest standards of data security, but also provides tools to clean, organise, and anonymise the data. We believe the development of such tools is essential if we are to meet the twin requirements of

  7. Computer-aided diagnosis workstation and database system for chest diagnosis based on multi-helical CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru; Sasagawa, Michizou

    2006-03-01

    Multi-helical CT scanners have remarkably increased the speed at which chest CT images can be acquired for mass screening. Mass screening based on multi-helical CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype internet system for community health across two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems to keep medical information secure. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system. The results of this study indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy and the safety of medical information.

  8. Studies on the power output of a MADE AE-30 operating on complex terrain. Annual Energy Production estimation and Multivariable analysis. A case of multi-stall effect

    International Nuclear Information System (INIS)

    Cuerva, A.

    1996-01-01

    The main need of the EWTS-II Sub-project IV group is to have a suitable database which allows it to reach proper conclusions on the characteristics of the power performance of wind turbines in complex terrain. With this aim, this document presents an analysis of the power output of the MADE AE-30 wind turbine operating at Tarifa (data from flat terrain are also enclosed as a reference). The bin method is applied to the measurements and the Annual Energy Production (AEP) is estimated; for these two issues, a directional analysis and a study for two different turbulence intensity ranges are enclosed. Finally, the stepwise multiregression method is applied to the measurements to identify which of the stored parameters have an influence on the power output. A brief description of the multi-stall effect is enclosed. (Author) 7 refs
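
    The bin-method step described above can be sketched as follows; this is a generic illustration assuming 0.5 m/s bins and a Rayleigh wind-speed distribution, not the EWTS-II procedure or the Tarifa data.

```python
# Sketch of the bin-method power curve and an AEP estimate (assumed 0.5 m/s bins
# and a Rayleigh wind-speed distribution; toy turbine data, not the measured set).
import numpy as np

def bin_power_curve(wind_speed, power, width=0.5):
    edges = np.arange(0.0, wind_speed.max() + width, width)
    centers, mean_power = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (wind_speed >= lo) & (wind_speed < hi)
        if mask.sum() >= 3:                      # require a minimally filled bin
            centers.append(0.5 * (lo + hi))
            mean_power.append(power[mask].mean())
    return np.array(centers), np.array(mean_power)

def annual_energy_production(centers, mean_power, v_mean=7.0):
    # Rayleigh CDF at bin centres; AEP = sum of power x bin probability x hours
    cdf = 1.0 - np.exp(-np.pi / 4.0 * (centers / v_mean) ** 2)
    prob = np.diff(cdf, prepend=0.0)
    return 8760.0 * np.sum(prob * mean_power)    # kWh/year if power is in kW

v = np.random.weibull(2.0, 5000) * 8.0           # synthetic wind-speed record
p = np.clip(0.6 * v ** 3, 0, 300)                # toy 300 kW turbine response
centers, pc = bin_power_curve(v, p)
print(annual_energy_production(centers, pc))
```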

  9. Enhancing the accuracy of radar snowfall estimation with multiple new Z-S relationships in the MRMS system

    Science.gov (United States)

    Qi, Y.

    2017-12-01

    Snow may have negative effects on roadways and human lives, but melted snow and ice are beneficial for farms, humans, and animals. For example, in the mountainous Southwest and West of the United States, water shortage is a major concern, yet winter snowfall can provide humans, animals and crops with an almost unlimited water supply. Using radar to accurately estimate snowfall is therefore very important for human life and economic development in water-scarce areas. The current study analyzes the characteristics of the horizontal and vertical variations of dry/wet snow using dual-polarimetric radar observations, relative humidity and in situ snow water equivalent observations from the National Weather Service All Weather Prediction Accumulation Gauges (AWPAG) across the CONUS, and establishes relationships between reflectivity (Z) and ground snow water equivalent (S). The new Z-S relationships will be evaluated with independent CoCoRaHS (Community Collaborative Rain, Hail & Snow Network) gauge observations, tested in the Multi-Radar Multi-Sensor (MRMS) system, and used to reduce the error of snowfall estimation. Finally, they will be ingested into the MRMS system and run operationally at NWS/NCAR for improved quantitative precipitation estimation for snow.
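
    A Z-S relationship of the kind discussed above is typically a power law fitted in log-log space; the sketch below uses synthetic coefficients and is not one of the MRMS relationships.

```python
# Illustrative fit of a Z-S power law, S = a * Z**b, between radar reflectivity
# (linear units, mm^6 m^-3) and gauge snow water equivalent rate (mm/h).
# Coefficients and data are synthetic, for illustration only.
import numpy as np

def fit_zs(Z_linear, S):
    logZ, logS = np.log10(Z_linear), np.log10(S)
    b, log_a = np.polyfit(logZ, logS, 1)      # straight line in log-log space
    return 10 ** log_a, b

def snowfall_rate(dbz, a, b):
    Z_linear = 10 ** (dbz / 10.0)             # dBZ -> linear reflectivity
    return a * Z_linear ** b

# toy data roughly following S = 0.05 * Z^0.5 with multiplicative noise
Z = 10 ** (np.random.uniform(5, 35, 200) / 10.0)
S = 0.05 * Z ** 0.5 * np.random.lognormal(0, 0.1, 200)
a, b = fit_zs(Z, S)
print(round(a, 3), round(b, 3), round(snowfall_rate(25.0, a, b), 3))
```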

  10. GENIUS: A tool for multi-disciplinary and multi-scalar databases

    Science.gov (United States)

    Bonhomme, M.; Masson, V.; Adolphe, L.; Faraut, S.

    2013-12-01

    Cities are responsible for the majority of energy consumption on the planet. As a consequence, research regarding energy use in urban contexts has been increasing over the last decades. Recently, the interrelationship between cities, energy consumption and urban microclimate has emerged as a key component of urban sustainability. To be accurate, those studies must take into account a multidisciplinary urban context, and modelling tools need high-definition data. Nevertheless, at the city scale, input data are either imprecise or only available for small areas. In particular, there is a lack of information about building footprints, roof slopes, envelope materials, etc. Moreover, the existing data do not allow researchers to explore prospective issues such as climate change or future urban development. In this sense, we developed a new tool called GENIUS (GENerator of Interactive Urban blockS) to build high-definition and evolutionary maps from available databases. GENIUS creates maps composed of archetypical neighbourhoods delivered as shapefiles of polygons with additional information (height, age, use, thermal insulation, etc.). These archetypical neighbourhoods correspond to seven types of urban blocks that can be found in most European cities, and these types can be compared with the Stewart and Oke Local Climate Zones (LCZ). The first step of our method is to transform an existing map into an 'archetypical map'. To do this, the urban database of the IGN (French Geographical Institute) was used. The maps were divided into cells of 250 meters resolution. For each cell, about 40 morphological indicators were calculated. Seven groups of blocks were then identified by means of Principal Component Analysis. GENIUS databases are also able to evolve through time. As a matter of fact, the initial map is transformed, year after year, by taking into account changes in density and urban history. In that sense, GENIUS communicates with NEDUM, a model developed by the CIRED (International

  11. ThermoData Engine: Extension to Solvent Design and Multi-component Process Stream Property Calculations with Uncertainty Analysis

    DEFF Research Database (Denmark)

    Diky, Vladimir; Chirico, Robert D.; Muzny, Chris

    ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining the comprehensive and up-to-date database of experimentally measured...... property values and expert system for data analysis and generation of recommended property values at the specified conditions along with uncertainties on demand. The most recent extension of TDE covers solvent design and multi-component process stream property calculations with uncertainty analysis...... variations). Predictions can be compared to the available experimental data, and uncertainties are estimated for all efficiency criteria. Calculations of the properties of multi-component streams including composition at phase equilibria (flash calculations) are at the heart of process simulation engines...

  12. Method to assess the temporal persistence of potential biometric features: Application to oculomotor, gait, face and brain structure databases

    Science.gov (United States)

    Nixon, Mark S.; Komogortsev, Oleg V.

    2017-01-01

    We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant similar relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve authentication performance with EER = 2.01%, which was not possible before. PMID:28575030
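
    For readers unfamiliar with the index, a one-way random-effects ICC for a single feature measured over repeated sessions can be computed as below; the exact ICC variant used in the paper is an assumption here.

```python
# Minimal sketch of a one-way random-effects intraclass correlation, ICC(1),
# for a single feature measured on each subject over k sessions.
import numpy as np

def icc_oneway(x):
    """x: array of shape (n_subjects, k_sessions) for one feature."""
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
stable = rng.normal(0, 1, (50, 1)) + rng.normal(0, 0.2, (50, 2))   # persistent feature
noisy = rng.normal(0, 1, (50, 2))                                  # unstable feature
print(round(icc_oneway(stable), 3), round(icc_oneway(noisy), 3))
```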

  13. SPATIO-TEMPORAL ESTIMATION OF INTEGRATED WATER VAPOUR OVER THE MALAYSIAN PENINSULA DURING MONSOON SEASON

    Directory of Open Access Journals (Sweden)

    S. Salihin

    2017-10-01

    Full Text Available This paper provides precise information on the spatial-temporal distribution of water vapour retrieved from the Zenith Path Delay (ZPD) estimated by Global Positioning System (GPS) processing over the Malaysian Peninsula. A time series analysis of these ZPD and Integrated Water Vapor (IWV) values was carried out to capture the characteristics of their seasonal variation during monsoon seasons. The study found that the pattern and distribution of atmospheric water vapour over the Malaysian Peninsula across the whole four-year period were influenced by two inter-monsoon and two monsoon seasons, namely the First Inter-monsoon, the Second Inter-monsoon, the Southwest monsoon and the Northeast monsoon.
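
    The ZPD-to-IWV step follows the standard GNSS-meteorology conversion; the sketch below uses textbook Saastamoinen and Bevis-style constants, which are assumptions and not necessarily the values used in this study.

```python
# Sketch of the standard conversion from zenith total delay to integrated water
# vapour. ZTD in metres, pressure in hPa, latitude in degrees, height in km.
import numpy as np

def zhd_saastamoinen(pressure_hpa, lat_deg, height_km):
    # Saastamoinen zenith hydrostatic delay in metres
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * np.radians(lat_deg)) - 0.00028 * height_km)

def iwv_from_ztd(ztd_m, pressure_hpa, lat_deg, height_km, tm_kelvin):
    zwd = ztd_m - zhd_saastamoinen(pressure_hpa, lat_deg, height_km)   # wet delay, m
    k2_prime, k3 = 0.221, 3.739e3        # K/Pa and K^2/Pa (Bevis-style constants)
    Rv, rho_w = 461.5, 1000.0            # J/(kg K), kg/m^3
    pi_factor = 1.0e6 / (rho_w * Rv * (k3 / tm_kelvin + k2_prime))     # ~0.16
    return rho_w * pi_factor * zwd       # IWV in kg/m^2 (numerically mm of water)

print(round(iwv_from_ztd(ztd_m=2.65, pressure_hpa=1010.0, lat_deg=4.0,
                         height_km=0.05, tm_kelvin=285.0), 1))
```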

  14. Database of diazotrophs in global ocean: abundance, biomass and nitrogen fixation rates

    Directory of Open Access Journals (Sweden)

    Y.-W. Luo

    2012-08-01

    ...N2 fixation has underestimated the true rates. As a result, one can expect that future rate measurements will shift the mean N2 fixation rate upward and may result in significantly higher estimates of global N2 fixation. The evolving database can nevertheless be used to study spatial and temporal distributions and variations of marine N2 fixation, to validate geochemical estimates and to parameterize and validate biogeochemical models, keeping in mind that measured rates may rise in the future. The database is stored in PANGAEA (doi:10.1594/PANGAEA.774851).

  15. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    Science.gov (United States)

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.

  16. A Framework for Multi-Robot Motion Planning from Temporal Logic Specifications

    DEFF Research Database (Denmark)

    Koo, T. John; Li, Rongqing; Quottrup, Michael Melholt

    2012-01-01

    ...Linear-time Temporal Logic, Computation Tree Logic, and μ-calculus can be preserved. Motion planning can then be performed at a discrete level by considering the parallel composition of discrete abstractions of the robots with a requirement specification given in a suitable temporal logic. The bisimilarity ensures...

  17. Framework for Optimizing Selection of Interspecies Correlation Estimation Models to Address Species Diversity and Toxicity Gaps in an Aquatic Database

    Science.gov (United States)

    The Chemical Aquatic Fate and Effects (CAFE) database is a tool that facilitates assessments of accidental chemical releases into aquatic environments. CAFE contains aquatic toxicity data used in the development of species sensitivity distributions (SSDs) and the estimation of ha...

  18. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    Science.gov (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB and a significant gain in the enhancement-measure evaluation for an up-scaling factor of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of both visual quality and accuracy.

  19. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    Science.gov (United States)

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB and a significant gain in the enhancement-measure evaluation for an up-scaling factor of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of both visual quality and accuracy. PMID:29414893

  20. LAND-deFeND - An innovative database structure for landslides and floods and their consequences.

    Science.gov (United States)

    Napolitano, Elisabetta; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Bianchi, Cinzia; Guzzetti, Fausto

    2018-02-01

    Information on historical landslides and floods - collectively called "geo-hydrological hazards" - is key to understanding the complex dynamics of the events, to estimating the temporal and spatial frequency of damaging events, and to quantifying their impact. A number of databases on geo-hydrological hazards and their consequences have been developed worldwide at different geographical and temporal scales. Of the few available database structures that can handle information on both landslides and floods, some are outdated and others were not designed to store, organize, and manage information on single phenomena or on the type and monetary value of the damages and the remediation actions. Here, we present the LANDslides and Floods National Database (LAND-deFeND), a new database structure able to store, organize, and manage in a single digital structure spatial information collected from various sources with different accuracy. In designing LAND-deFeND, we defined four groups of entities, namely nature-related, human-related, geospatial-related, and information-source-related entities, that collectively can fully describe the geo-hydrological hazards and their consequences. In LAND-deFeND, the main entities are the nature-related entities, encompassing: (i) the "phenomenon", a single landslide or local inundation, (ii) the "event", which represents the ensemble of the inundations and/or landslides that occurred in a conventional geographical area in a limited period, and (iii) the "trigger", which is the meteo-climatic or seismic cause (trigger) of the geo-hydrological hazards. LAND-deFeND maintains the relations between the nature-related entities and the human-related entities even where the information is partially missing. The physical model of LAND-deFeND contains 32 tables, including nine input tables, 21 dictionary tables, and two association tables, and ten views, including specific views that make the database structure compliant with the EC INSPIRE and the Floods

  1. Multi-locus estimates of population structure and migration in a fence lizard hybrid zone.

    Directory of Open Access Journals (Sweden)

    Adam D Leaché

    Full Text Available A hybrid zone between two species of lizards in the genus Sceloporus (S. cowlesi and S. tristichus) on the Mogollon Rim in Arizona provides a unique opportunity to study the processes of lineage divergence and merging. This hybrid zone involves complex interactions between 2 morphologically and ecologically divergent subspecies, 3 chromosomal groups, and 4 mitochondrial DNA (mtDNA) clades. The spatial patterns of divergence between morphology, chromosomes and mtDNA are discordant, and determining which of these character types (if any) reflects the underlying population-level lineages that are of interest has remained impeded by character conflict. The focus of this study is to estimate the number of populations interacting in the hybrid zone using multi-locus nuclear data, and to then estimate the migration rates and divergence time between the inferred populations. Multi-locus estimates of population structure and gene flow were obtained from 12 anonymous nuclear loci sequenced for 93 specimens of Sceloporus. Population structure estimates support two populations, and this result is robust to changes to the prior probability distribution used in the Bayesian analysis and the use of spatially-explicit or non-spatial models. A coalescent analysis of population divergence suggests that gene flow is high between the two populations, and that the timing of divergence is restricted to the Pleistocene. The hybrid zone is more accurately described as involving two populations belonging to S. tristichus, and the presence of S. cowlesi mtDNA haplotypes in the hybrid zone is an anomaly resulting from mitochondrial introgression.

  2. Implementation of Collate at the database level for PostgreSQL

    OpenAIRE

    Strnad, Radek

    2009-01-01

    The current version of PostgreSQL supports only one collation per database cluster. This does not meet the requirements of some users developing multi-lingual applications. The goal of this work is to implement collation at the database level and lay the foundations for further national language support development. The user will be able to set the collation when creating a database. In particular, the commands CREATE DATABASE... COLLATE ... will be implemented using ANSI standards. The work will also implement possi...

  3. Hybrid quadrupole-orbitrap mass spectrometry analysis with accurate-mass database and parallel reaction monitoring for high-throughput screening and quantification of multi-xenobiotics in honey.

    Science.gov (United States)

    Li, Yi; Zhang, Jinzhen; Jin, Yue; Wang, Lin; Zhao, Wen; Zhang, Wenwen; Zhai, Lifei; Zhang, Yaping; Zhang, Yongxin; Zhou, Jinhui

    2016-01-15

    This study reports a rapid, automated screening and quantification method for the determination of multi-xenobiotic residues in honey using ultra-high performance liquid chromatography-hybrid quadrupole-Orbitrap mass spectrometry (UHPLC-Q-Orbitrap) with a user-built accurate-mass database plus parallel reaction monitoring (PRM). The database contains multi-xenobiotic information including formulas, adduct types, theoretical exact mass and retention time, characteristic fragment ions, ion ratios, and mass accuracies. A simple sample preparation method was developed to reduce xenobiotic loss in the honey samples. The screening method was validated based on retention time deviation, mass accuracy via full scan-data-dependent MS/MS (full scan-ddMS2), multi-isotope ratio, characteristic ion ratio, sensitivity, and positive/negative switching performance between the spiked sample and corresponding standard solution. The quantification method based on the PRM mode is a promising new quantitative tool which we validated in terms of selectivity, linearity, recovery (accuracy), repeatability (precision), decision limit (CCα), detection capability (CCβ), matrix effects, and carry-over. The optimized methods proposed in this study enable the automated screening and quantification of 157 compounds in less than 15 min in honey. The results of this study, as they represent a convenient protocol for large-scale screening and quantification, also provide a research approach for analysis of various contaminants in other matrices. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
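
    The two-level idea, cluster-wise PCA factors followed by a between-cluster RV coefficient, can be sketched as follows; cluster sizes, factor counts and the synthetic data are assumptions for illustration only.

```python
# Sketch of the two-level idea: summarise each cluster of nodes by a few PCA
# factors, then measure between-cluster dependence with the RV coefficient.
import numpy as np

def pca_factors(X, q=3):
    """X: (T, nodes) time series for one cluster; returns (T, q) factor scores."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:q].T

def rv_coefficient(A, B):
    """RV coefficient between two (T, q) factor matrices."""
    Sa, Sb = A @ A.T, B @ B.T
    return np.trace(Sa @ Sb) / np.sqrt(np.trace(Sa @ Sa) * np.trace(Sb @ Sb))

rng = np.random.default_rng(1)
T = 200
shared = rng.normal(size=(T, 1))                                # common driver
cluster1 = shared @ rng.normal(size=(1, 40)) + 0.5 * rng.normal(size=(T, 40))
cluster2 = shared @ rng.normal(size=(1, 30)) + 0.5 * rng.normal(size=(T, 30))
f1, f2 = pca_factors(cluster1), pca_factors(cluster2)
print(round(rv_coefficient(f1, f2), 3))   # high value -> strong cluster-level dependence
```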

  5. Multi-Hazard Analysis for the Estimation of Ground Motion Induced by Landslides and Tectonics

    Science.gov (United States)

    Iglesias, Rubén; Koudogbo, Fifame; Ardizzone, Francesca; Mondini, Alessandro; Bignami, Christian

    2016-04-01

    Space-borne synthetic aperture radar (SAR) sensors allow obtaining all-day, all-weather complex-reflectivity images of the terrain which can be processed by means of Persistent Scatterer Interferometry (PSI) for the monitoring of displacement episodes with extremely high accuracy. In the work presented, different PSI strategies to measure ground surface displacements for multi-scale multi-hazard mapping are proposed in the context of landslide and tectonic applications. This work is developed in the framework of the ESA General Studies Programme (GSP). The present project, called Multi Scale and Multi Hazard Mapping Space based Solutions (MEMpHIS), investigates new Earth Observation (EO) methods and new Information and Communications Technology (ICT) solutions to improve the understanding and management of disasters, with special focus on Disaster Risk Reduction rather than Rapid Mapping. In this paper, the results of the investigation on the key processing steps for measuring large-scale ground surface displacements (like the ones originated by plate tectonics or active faults) as well as local displacements at high resolution (like the ones related to active slopes) will be presented. The core of the proposed approaches is based on the Stable Point Network (SPN) algorithm, which is the advanced PSI processing chain developed by ALTAMIRA INFORMATION. Regarding tectonic applications, the accurate displacement estimation over large-scale areas characterized by low magnitude motion gradients (3-5 mm/year), such as the ones induced by inter-seismic or Earth tidal effects, still remains an open issue. In this context, a low-resolution approach based on the integration of differential phase increments of velocity and topographic error (obtained through the fitting of a linear model adjustment function to data) will be evaluated. Data from the default mode of Sentinel-1, the Interferometric Wide Swath Mode, will be considered for this application. Regarding landslides

  6. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    Directory of Open Access Journals (Sweden)

    Yuhan Rao

    2015-06-01

    Full Text Available Due to technical limitations, it is impossible to have high resolution in both spatial and temporal dimensions for current NDVI datasets. Therefore, several methods have been developed to produce high-resolution (spatial and temporal) NDVI time-series datasets, but they face some limitations, including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to achieve the goal of accurately and efficiently blending MODIS NDVI time-series data and multi-temporal Landsat TM/ETM+ images. This method firstly unmixes the NDVI temporal changes in the MODIS time-series into different land cover types and then uses the unmixed NDVI temporal changes to predict a Landsat-like NDVI dataset. The test over a forest site shows high accuracy (average difference: −0.0070; average absolute difference: 0.0228; and average absolute relative difference: 4.02%) and computation efficiency of NDVI-LMGM (31 seconds using a personal computer). Experiments over more complex landscapes and a long-term time-series demonstrated that NDVI-LMGM performs well in each stage of the vegetation growing season and is robust in regions with contrasting spatial and temporal variations. Comparisons between NDVI-LMGM and current methods (i.e., the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Enhanced STARFM (ESTARFM) and Weighted Linear Model (WLM)) show that NDVI-LMGM is more accurate and efficient than current methods. The proposed method will benefit land surface process research, which requires a dense NDVI time-series dataset with high spatial resolution.
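
    The core unmixing step can be illustrated as below: regress each coarse pixel's NDVI change on its land-cover fractions to recover per-class changes, then apply them at fine resolution. This is a simplified sketch, not the published NDVI-LMGM code.

```python
# Simplified unmixing sketch: per-class NDVI changes from coarse pixels,
# then applied to a fine-resolution base map according to each pixel's class.
import numpy as np

def unmix_ndvi_change(fractions, coarse_delta):
    """fractions: (n_coarse, n_classes); coarse_delta: (n_coarse,) MODIS NDVI change."""
    delta_per_class, *_ = np.linalg.lstsq(fractions, coarse_delta, rcond=None)
    return delta_per_class

def predict_fine_ndvi(base_fine_ndvi, fine_class, delta_per_class):
    return base_fine_ndvi + delta_per_class[fine_class]

rng = np.random.default_rng(2)
n_coarse, n_classes = 400, 4
true_delta = np.array([0.15, 0.02, -0.05, 0.08])       # per-class NDVI change
frac = rng.dirichlet(np.ones(n_classes), n_coarse)     # class fractions per coarse pixel
coarse_delta = frac @ true_delta + rng.normal(0, 0.01, n_coarse)
est = unmix_ndvi_change(frac, coarse_delta)
fine_class = rng.integers(0, n_classes, size=(60, 60))
fine_ndvi_t1 = predict_fine_ndvi(rng.uniform(0.2, 0.8, (60, 60)), fine_class, est)
print(est.round(3), fine_ndvi_t1.shape)
```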

  7. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    Science.gov (United States)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features include the linear polarization intensities and several statistical and physically based decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. Unlike conventional partitioning clustering algorithms, the kernel function allows non-spherical and non-linearly separable data patterns to be clustered more easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification (e.g., 5% overall). Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and the phenological growth stages.
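
    A kernelised hard C-means on per-pixel feature vectors, the non-fuzzy core of the approach, can be sketched as follows; the RBF kernel, its gamma and the toy features are assumptions, and the PSO tuning described in the abstract is omitted.

```python
# Kernelised hard C-means sketch: distances to cluster "means" are evaluated in
# feature space using only kernel entries (RBF kernel on toy feature vectors).
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_cmeans(K, n_clusters=5, n_iter=30, seed=0):
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, n_clusters, n)
    for _ in range(n_iter):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - m_c||^2 expressed with kernel entries only
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

X = np.vstack([np.random.normal(m, 0.3, (100, 6)) for m in (0, 1, 2, 3, 4)])
print(np.bincount(kernel_cmeans(rbf_kernel(X))))
```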

  8. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Full Text Available Based on the characteristics and basic structure of video databases and on several typical video data models, a segmentation-based multi-level data model is used to describe the landscape-information video database, the network database model and the road-network management database system, and the detailed design and implementation of the landscape information management system are presented.

  9. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which also is based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.

  10. Advancing approaches for multi-year high-frequency monitoring of temporal and spatial variability in carbon cycle fluxes and drivers in freshwater lakes

    Science.gov (United States)

    Desai, A. R.; Reed, D. E.; Dugan, H. A.; Loken, L. C.; Schramm, P.; Golub, M.; Huerd, H.; Baldocchi, A. K.; Roberts, R.; Taebel, Z.; Hart, J.; Hanson, P. C.; Stanley, E. H.; Cartwright, E.

    2017-12-01

    Freshwater ecosystems are hotspots of regional to global carbon cycling. However, significant sample biases limit our ability to quantify and predict these fluxes. For lakes, scaled flux estimates suffer from sampling biased toward 1) low-nutrient pristine lakes, 2) infrequent temporal sampling, 3) field campaigns limited to the growing season, and 4) replicates limited to near the center of the lake. While these biases partly reflect the realities of ecological sampling, there is a need to extend observations towards the large fraction of freshwater systems worldwide that are impaired by human activities and those facing significant interannual variability owing to climatic change. Also, for seasonally ice-covered lakes, much of the annual budget of carbon fluxes is thought to be explained by variation in the shoulder seasons of spring ice melt and fall turnover. Recent advances in automated, continuous multi-year temporal sampling coupled with rapid methods for spatial mapping of CO2 fluxes have strong potential to rectify these sampling biases. Here, we demonstrate these advances in a eutrophic, seasonally ice-covered lake with an urban shoreline and an agricultural watershed. Multiple years of half-hourly eddy covariance flux tower observations from two locations are coupled with frequent spatial samples of these fluxes and drivers by speedboat, floating chamber fluxes, automated buoy-based monitoring of lake nutrient and physical profiles, and an ensemble of physical-ecosystem models. High primary productivity in the water column leads to an average net carbon sink during the growing season in much of the lake, but annual net carbon fluxes show the lake can act as an annual source or sink of carbon depending on the timing of spring and fall turnover. Trophic interactions and internal waves drive shorter-term variation while nutrients and biology drive seasonal variation. However, discrepancies remain among methods to quantify fluxes, requiring further investigation.

  11. The new MQ/AAO/Strasbourg multi-wavelength and spectroscopic PNe database: MASPN

    Science.gov (United States)

    Parker, Quentin Andrew; Bojicic, Ivan; Frew, David; Acker, Agnes; Ochsenbein, Francois; MASPN Database Team

    2015-01-01

    We are in a new golden age of PN discovery. This is thanks in particular to high sensitivity, wide-field, narrow-band surveys of the Galactic plane undertaken on the UKST in Australia and the Isaac Newton telescope on La Palma. Together these telescopes and their H-alpha surveys have provided very significant Planetary Nebulae (PNe) discoveries that have more than doubled the totals accrued by all telescopes over the previous 250 years. However, these PNe are not simply more of the same found in previous catalogues. Most new PNe are more obscured, evolved and of lower surface brightness than previous compilations while others are faint but compact and more distant. This has required an extensive and time-consuming programme of spectroscopic confirmation on a variety of 2m and 4m telescopes that is now largely complete. The scope of any future large-scale PNe studies, particularly those of a statistical nature or undertaken to understand true PNe diversity and evolution should now reflect this fresh PN population landscape of the combined sample of ~3500 Galactic PNe now available. Such studies should be coloured and nuanced by these recent major discoveries and the massive, high sensitivity, high resolution, multi-wavelength imaging surveys now available across much of the electromagnetic spectrum.Following this motivation we provide, for the first time, an accessible, reliable, on-line "one-stop" SQL database for essential, up-to date information for all known Galactic PN. We have attempted to: i) Reliably remove the many PN mimics/false ID's that have biased previous compilations and subsequent studies; ii) Provide accurate, updated positions, sizes, morphologies, radial velocities, fluxes, multi-wavelength imagery and spectroscopy; iii) Link to CDS/Vizier and hence provide archival history for each object; iv) Provide an interface to sift, select, browse, collate, investigate, download and visualise the complete currently known Galactic PNe diaspora and v

  12. Temporal and spectral manipulations of correlated photons using a time-lens

    OpenAIRE

    Mittal, Sunil; Orre, Venkata Vikram; Restelli, Alessandro; Salem, Reza; Goldschmidt, Elizabeth A.; Hafezi, Mohammad

    2017-01-01

    A common challenge in quantum information processing with photons is the limited ability to manipulate and measure correlated states. An example is the inability to measure picosecond scale temporal correlations of a multi-photon state, given state-of-the-art detectors have a temporal resolution of about 100 ps. Here, we demonstrate temporal magnification of time-bin entangled two-photon states using a time-lens, and measure their temporal correlation function which is otherwise not accessibl...

  13. Studies on the power output of a MADE AE-30 operating on complex terrain. Annual Energy Production estimation and Multivariable analysis. A case of multi-stall effect

    Energy Technology Data Exchange (ETDEWEB)

    Cuerva, A.

    1996-12-01

    The main need of the EWTS-II Sub-project IV group is to have a suitable database which allows it to reach proper conclusions on the characteristics of the power performance of wind turbines in complex terrain. With this aim, this document presents an analysis of the power output of the MADE AE-30 wind turbine operating at Tarifa (data from flat terrain are also enclosed as a reference). The bin method is applied to the measurements and the Annual Energy Production (AEP) is estimated; for these two issues, a directional analysis and a study for two different turbulence intensity ranges are enclosed. Finally, the stepwise multiregression method is applied to the measurements to identify which of the stored parameters have an influence on the power output. A brief description of the multi-stall effect is enclosed. (Author)

  14. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Full Text Available Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It is still challenging to model the heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in the spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in the spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in the spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods, namely spatio-temporal kriging, spatio-temporal inverse distance weighting, and the point estimation model of biased hospitals-based area disease estimation method.
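
    A much-simplified analogue of the hybrid idea is sketched below: a missing (station, time) value is filled by blending a spatial inverse-distance estimate with a temporal interpolation along the station's own series; the fixed blending weight stands in for the correlation-based combination of the paper.

```python
# Simplified hybrid space-time gap filling: blend a spatial IDW estimate with a
# temporal interpolation (fixed weight alpha is an assumption, not the method).
import numpy as np

def spatial_idw(values_t, coords, target_idx, power=2.0):
    d = np.linalg.norm(coords - coords[target_idx], axis=1)
    mask = np.isfinite(values_t) & (d > 0)
    w = 1.0 / d[mask] ** power
    return np.sum(w * values_t[mask]) / np.sum(w)

def temporal_linear(series, t):
    good = np.flatnonzero(np.isfinite(series))
    return np.interp(t, good, series[good])

def hybrid_fill(data, coords, s, t, alpha=0.5):
    """data: (n_stations, n_times) with NaNs for gaps; fill data[s, t]."""
    return alpha * spatial_idw(data[:, t], coords, s) + \
           (1 - alpha) * temporal_linear(data[s], t)

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, (20, 2))
data = np.sin(np.linspace(0, 6, 50))[None, :] + rng.normal(0, 0.1, (20, 50))
data[4, 25] = np.nan
print(round(hybrid_fill(data, coords, 4, 25), 3))
```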

  15. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
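
    The Poisson maximum-likelihood deconvolution at the heart of such methods is often implemented as a Richardson-Lucy style multiplicative update; the multi-frame sketch below averages the correction over frames and omits the paper's regularization, frame selection and PSF-estimation model.

```python
# Multi-frame Richardson-Lucy sketch: Poisson maximum-likelihood deconvolution
# with the multiplicative correction averaged over frames (known PSFs assumed).
import numpy as np
from scipy.signal import fftconvolve

def multiframe_rl(frames, psfs, n_iter=30):
    est = np.full_like(frames[0], frames[0].mean())
    for _ in range(n_iter):
        correction = np.zeros_like(est)
        for y, h in zip(frames, psfs):
            blurred = fftconvolve(est, h, mode="same") + 1e-12
            correction += fftconvolve(y / blurred, h[::-1, ::-1], mode="same")
        est *= correction / len(frames)
    return est

rng = np.random.default_rng(4)
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 50.0        # point-like object
x, y = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8))
psf = np.exp(-(x ** 2 + y ** 2) / 8.0); psf /= psf.sum()
frames = [rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0).astype(float)
          for _ in range(3)]
restored = multiframe_rl(frames, [psf] * 3)
print(round(restored.max(), 2))
```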

  16. Database activities at Brookhaven National Laboratory

    International Nuclear Information System (INIS)

    Trahern, C.G.

    1995-01-01

    Brookhaven National Laboratory is a multi-disciplinary lab in the DOE system of research laboratories. Database activities are correspondingly diverse within the restrictions imposed by the dominant relational database paradigm. The authors discuss related activities and tools used in RHIC and in the other major projects at BNL. The others are the Protein Data Bank being maintained by the Chemistry department, and a Geographical Information System (GIS)--a Superfund sponsored environmental monitoring project under development in the Office of Environmental Restoration

  17. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates upon the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
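
    In the spirit of the spherical-head model, the sketch below recovers yaw from the ratio of the projected eye-to-midline distances; the assumed eye azimuth and landmark layout are illustrative, not the calibrated geometry of the paper.

```python
# Yaw from projected landmarks under a spherical-head assumption: eyes at
# azimuth +/-theta, mouth on the mid-line; the ratio of projected eye offsets
# from the mid-line determines the yaw angle.
import numpy as np

def yaw_from_landmarks(left_eye_x, right_eye_x, mouth_x, eye_azimuth_deg=30.0):
    a = np.radians(eye_azimuth_deg) / 2.0
    d_right = right_eye_x - mouth_x          # projected offsets from the mid-line
    d_left = mouth_x - left_eye_x
    r = d_right / d_left
    return np.degrees(np.arctan((1.0 - r) / (1.0 + r) / np.tan(a)))

# synthetic check: project eyes at +/-30 deg and mouth at 0 deg for a known yaw
theta, yaw_true = np.radians(30.0), np.radians(20.0)
xs = np.sin(np.array([-theta, theta, 0.0]) + yaw_true)   # left eye, right eye, mouth
print(round(yaw_from_landmarks(xs[0], xs[1], xs[2]), 2))  # ~20 degrees
```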

  18. Seasonal and temporal variation in release of antibiotics in hospital wastewater: estimation using continuous and grab sampling.

    Science.gov (United States)

    Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J

    2013-01-01

    The presence of antibiotics in the environment and their subsequent impact on resistance development have raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions was divided into three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed lower concentrations of antibiotics in wastewater than grab sampling. It is suggested that continuous sampling should be the method of choice as grab sampling gives erroneous results, being indicative only of the quantities of antibiotics present in wastewater at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluoroquinolones, metronidazole and sulfamethoxazole respectively, might be getting released into the

  19. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    Science.gov (United States)

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-09-04

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it but also to those after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demands of accurate online inspection.
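
    For orientation, a plain affine projection adaptive filter (without the multi-power weighting of the paper) looks as follows; the filter order, projection order and toy signal are assumptions.

```python
# Generic affine projection algorithm (APA) sketch: the weight vector is updated
# using the last few input windows jointly (regularised least-squares step).
import numpy as np

def apa_train(x, d, order=8, projection=4, mu=0.2, delta=1e-3):
    w = np.zeros(order)
    for n in range(order + projection, len(x)):
        # columns are the `projection` most recent input windows of length `order`
        X = np.column_stack(
            [x[n - p - order + 1:n - p + 1][::-1] for p in range(projection)])
        e = d[n - projection + 1:n + 1][::-1] - X.T @ w
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(projection), e)
    return w

rng = np.random.default_rng(5)
true_w = rng.normal(size=8)
x = rng.normal(size=4000)
d = np.convolve(x, true_w, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))
w = apa_train(x, d)
print(np.round(w - true_w, 3))   # should be close to zero after convergence
```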

  20. MareyMap Online: A User-Friendly Web Application and Database Service for Estimating Recombination Rates Using Physical and Genetic Maps.

    Science.gov (United States)

    Siberchicot, Aurélie; Bessy, Adrien; Guéguen, Laurent; Marais, Gabriel A B

    2017-10-01

    Given the importance of meiotic recombination in biology, there is a need to develop robust methods to estimate meiotic recombination rates. A popular approach, called the Marey map approach, relies on comparing genetic and physical maps of a chromosome to estimate local recombination rates. In the past, we have implemented this approach in an R package called MareyMap, which includes many functionalities useful to get reliable recombination rate estimates in a semi-automated way. MareyMap has been used repeatedly in studies looking at the effect of recombination on genome evolution. Here, we propose a simpler user-friendly web service version of MareyMap, called MareyMap Online, which allows a user to get recombination rates from her/his own data or from a publicly available database that we offer in a few clicks. When the analysis is done, the user is asked whether her/his curated data can be placed in the database and shared with other users, which we hope will make meta-analysis on recombination rates including many species easy in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
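
    The Marey map idea itself is compact: fit a smooth curve of genetic position against physical position and differentiate it. The sketch below uses a simple polynomial smoother on toy data and is not MareyMap's interpolation machinery.

```python
# Marey-map sketch: smooth genetic position (cM) against physical position (Mb)
# and take the derivative of the fit as the local recombination rate (cM/Mb).
import numpy as np

def recombination_rate(phys_mb, gen_cm, degree=5, grid=None):
    coeffs = np.polyfit(phys_mb, gen_cm, degree)      # simple polynomial smoother
    if grid is None:
        grid = np.linspace(phys_mb.min(), phys_mb.max(), 200)
    rate = np.polyval(np.polyder(coeffs), grid)       # d(genetic)/d(physical)
    return grid, np.clip(rate, 0.0, None)             # rates below zero are artefacts

# toy chromosome: recombination suppressed near a centromere around 50 Mb
phys = np.sort(np.random.uniform(0, 100, 150))
gen = np.cumsum(np.abs(np.gradient(phys)) * (0.5 + 2.0 * (np.abs(phys - 50) / 50) ** 2))
grid, rate = recombination_rate(phys, gen)
print(rate[:5].round(2))
```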

  1. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead-reckoning navigation, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. It adopts low-cost inertial sensors, an accelerometer and a gyroscope, based on micro-electro-mechanical systems (MEMS). There are two types of IMU module, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least-squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking steps and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the user's step lengths. Since both the walking length and direction are required for dead-reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted via Bluetooth to a smartphone, which performs the dead-reckoning navigation in a self-developed app. Due to the error accumulation of dead-reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the app of the proposed navigation system

  2. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead-reckoning navigation, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. It adopts low-cost inertial sensors, an accelerometer and a gyroscope, based on micro-electro-mechanical systems (MEMS). There are two types of IMU module, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least-squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking steps and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the user's step lengths. Since both the walking length and direction are required for dead-reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted via Bluetooth to a smartphone, which performs the dead-reckoning navigation in a self-developed app. Due to the error accumulation of dead-reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the app of the proposed navigation system to extend its
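
    The dead-reckoning core described in this abstract (heading from the integrated gyroscope rate, position advanced by an estimated step length) can be summarised in a short sketch. This is a minimal illustration with hypothetical data, not the authors' IMU firmware or app; the fuzzy step-length estimation and particle-filter correction are omitted.

    ```python
    import numpy as np

    def dead_reckoning(step_lengths, yaw_rates, dt, start=(0.0, 0.0), heading0=0.0):
        """step_lengths: metres per sample (0 if no step detected in that sample);
        yaw_rates: gyroscope yaw rate in rad/s at the same sampling instants."""
        x, y = start
        heading = heading0
        track = [(x, y)]
        for step, omega in zip(step_lengths, yaw_rates):
            heading += omega * dt            # integrate angular rate -> walking direction
            if step > 0.0:                   # a step was detected in this sample
                x += step * np.cos(heading)
                y += step * np.sin(heading)
            track.append((x, y))
        return np.array(track)

    # Hypothetical data: a step every other sample, with a gentle turn.
    steps = [0.0, 0.7] * 5
    rates = [0.05] * 10                      # rad/s
    print(dead_reckoning(steps, rates, dt=0.5)[-1])   # final position estimate (m)
    ```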

  3. DEMO maintenance scenarios: scheme for time estimations and preliminary estimates for blankets arranged in multi-module-segments

    International Nuclear Information System (INIS)

    Nagy, D.

    2007-01-01

    Previous conceptual studies made clear that the ITER blanket concept and segmentation are not suitable for the environment of a potential fusion power plant (DEMO). One promising concept to be used instead is the so-called Multi-Module-Segment (MMS) concept. Each MMS consists of a number of blankets arranged on a strong back plate, thus forming "banana"-shaped in-board (IB) and out-board (OB) segments. With respect to port size, weight and other limiting aspects, the IB and OB MMS are segmented in the toroidal direction. The number of segments to be replaced would be below 100. For this segmentation concept a new maintenance scenario had to be worked out. The aim of this paper is to present a promising MMS maintenance scenario, a flexible scheme for time estimations under varying boundary conditions, and preliminary time estimates. According to the proposed scenario, two upper, vertically arranged maintenance ports have to be opened for blanket maintenance on opposite sides of the tokamak. Both ports are central to a 180 degree sector, and the MMS are removed and inserted through both ports. In-vessel machines transport the elements in the toroidal direction and also insert and attach the MMS to the shield. Outside the vessel, the elements have to be transported between the tokamak and the hot cell to be refurbished. Calculating the maintenance time for such a scenario is rather challenging due to the numerous parallel processes involved. For this reason a flexible, multi-level calculation scheme has been developed in which the operations are organized into three levels: at the lowest level the basic maintenance steps are determined; these are organized into maintenance sequences that take into account parallelisms in the system; several maintenance sequences constitute a maintenance phase, which corresponds to a certain logistics scenario. By adding the required times of the maintenance phases, the total maintenance time is obtained. The paper presents
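
    The three-level time calculation described above lends itself to a compact illustration: steps sum within a sequence, parallel sequences within a phase overlap (so the longest one governs), and phase durations add up. The durations below are purely hypothetical and not the paper's estimates.

    ```python
    # Toy three-level maintenance-time calculation (hypothetical numbers in hours).
    def sequence_time(step_durations):
        return sum(step_durations)                 # steps within a sequence run serially

    def phase_time(parallel_sequences):
        return max(sequence_time(s) for s in parallel_sequences)   # parallel branches overlap

    def total_maintenance_time(phases):
        return sum(phase_time(p) for p in phases)  # phases follow one another

    phases = [
        [[4, 6, 2], [5, 5]],        # e.g. port opening in parallel with tooling set-up
        [[8, 8], [10, 3], [6]],     # e.g. MMS transport, attachment and logistics
    ]
    print(total_maintenance_time(phases), "hours")   # -> max(12, 10) + max(16, 13, 6) = 28
    ```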

  4. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    Introduction: Web-based planetary image dissemination platforms usually show outline coverages of the data and offer querying for metadata as well as preview and download, e.g. the HRSC Mapserver (Walter & van Gasselt, 2014). Here we introduce a new approach for a system dedicated to change detection by simultaneous visualisation of single-image time series in a multi-temporal context. While the usual form of presenting multi-orbit datasets is to merge the data into a larger mosaic, we want to stay with the single image as an important snapshot of the planetary surface at a specific time. In the context of the EU FP-7 iMars project we process and ingest vast amounts of automatically co-registered (ACRO) images. The basis of the co-registration is the high-precision HRSC multi-orbit quadrangle image mosaics, which are based on bundle-block-adjusted multi-orbit HRSC DTMs. Additionally we make use of the existing bundle-adjusted HRSC single images available at the PDS archives. A prototype demonstrating the presented features is available at http://imars.planet.fu-berlin.de. Multi-temporal database: In order to locate multiple coverage of images and select images based on spatio-temporal queries, we consolidate available coverage catalogs for various NASA imaging missions into a relational database management system with geometry support. We harvest available metadata entries during our processing pipeline using the Integrated Software for Imagers and Spectrometers (ISIS) software. Currently, this database contains image outlines from the MGS/MOC, MRO/CTX and MO/THEMIS instruments, with imaging dates ranging from 1996 to the present. For the MEx/HRSC data, we already maintain a database which we automatically update with custom software based on the VICAR environment. Web Map Service with time support: The MapServer software is connected to the database and provides Web Map Services (WMS) with time support based on the START_TIME image attribute. It allows temporal
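
    The kind of spatio-temporal selection such a database supports can be sketched independently of the actual schema. The record fields, product identifiers and bounding boxes below are hypothetical placeholders, not the iMars catalog; a production system would issue an equivalent query against the geometry-enabled relational database described above.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ImageRecord:
        product_id: str
        instrument: str
        start_time: datetime
        bbox: tuple  # (lon_min, lat_min, lon_max, lat_max)

    def overlaps(a, b):
        """Axis-aligned bounding-box intersection test."""
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    def query(records, roi_bbox, t_min, t_max):
        """Images whose footprint intersects the region of interest and whose
        START_TIME falls in the requested interval, ordered oldest first."""
        hits = [r for r in records
                if overlaps(r.bbox, roi_bbox) and t_min <= r.start_time <= t_max]
        return sorted(hits, key=lambda r: r.start_time)

    catalog = [
        ImageRecord("CTX_001", "MRO/CTX", datetime(2008, 3, 1), (137.0, -5.5, 137.4, -5.1)),
        ImageRecord("MOC_042", "MGS/MOC", datetime(1999, 7, 9), (137.1, -5.4, 137.3, -5.2)),
    ]
    print([r.product_id for r in query(catalog, (137.0, -5.5, 137.5, -5.0),
                                       datetime(1996, 1, 1), datetime(2010, 1, 1))])
    ```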

  5. Multi-temporal InSAR Datastacks for Surface Deformation Monitoring: a Review

    Science.gov (United States)

    Ferretti, A.; Novali, F.; Prati, C.; Rocca, F.

    2009-04-01

    In the last decade, extensive processing of thousands of satellite radar scenes acquired by different sensors (e.g. ERS-1/2, ENVISAT and RADARSAT) has demonstrated how multi-temporal data-sets can be successfully exploited for surface deformation monitoring, by identifying objects on the terrain that have a stable, point-like behaviour. These objects, referred to as Permanent or Persistent Scatterers (PS), can be geo-coded and monitored for movement very accurately, acting as a "natural" geodetic network that successfully integrates with continuous GPS data. After a brief analysis of both the advantages and drawbacks of InSAR datastacks, the paper presents examples of applications of PS measurements for detecting and monitoring active faults, aquifers and oil/gas reservoirs, drawing on experience in Europe, North America and Japan, and concludes with a discussion of future directions for PSInSAR analysis. Special attention is paid to the possibility of creating deformation maps over wide areas using historical archives of data already available. The second part of the paper briefly discusses the technical features of the new radar sensors recently launched (namely TerraSAR-X, RADARSAT-2 and COSMO-SkyMed) and their impact on space geodesy, highlighting the importance of data continuity and standardized acquisition policies for almost all InSAR and PSInSAR applications. Finally, recent advances in the algorithms applied in PS analysis, such as detection of "temporary PS", PS characterization and exploitation of distributed scatterers, are briefly discussed based on the processing of real data.

  6. Long-term citizen-collected data reveal geographical patterns and temporal trends in lake water clarity

    Science.gov (United States)

    Lottig, Noah R.; Wagner, Tyler; Henry, Emily N.; Cheruvelil, Kendra Spence; Webster, Katherine E.; Downing, John A.; Stow, Craig A.

    2014-01-01

    We compiled a lake-water clarity database using publicly available, citizen volunteer observations made between 1938 and 2012 across eight states in the Upper Midwest, USA. Our objectives were to determine (1) whether temporal trends in lake-water clarity existed across this large geographic area and (2) whether trends were related to the lake-specific characteristics of latitude, lake size, or the time period over which the lake was monitored. Our database consisted of >140,000 individual Secchi observations from 3,251 lakes that we summarized per lake-year, resulting in 21,020 summer averages. Using Bayesian hierarchical modeling, we found approximately a 1% per year increase in water clarity (quantified as Secchi depth) for the entire population of lakes. On an individual lake basis, 7% of lakes showed increased water clarity and 4% showed decreased clarity. Trend direction and strength were related to latitude and median sample date. Lakes in the southern part of our study region had lower average annual summer water clarity, more negative long-term trends, and greater inter-annual variability in water clarity compared to northern lakes. Increasing trends were strongest for lakes with median sample dates earlier in the period of record (1938–2012). Our ability to identify specific mechanisms for these trends is currently hampered by the lack of a large, multi-thematic database of variables that drive water clarity (e.g., climate, land use/cover). Our results demonstrate, however, that citizen science can provide the critical monitoring data needed to address environmental questions at large spatial and long temporal scales. Collaborations among citizens, research scientists, and government agencies may be important for developing the data sources and analytical tools necessary to move toward an understanding of the factors influencing macro-scale patterns such as those shown here for lake water clarity.
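
    A heavily simplified, non-hierarchical stand-in for the trend estimation can illustrate how a percent-per-year change in clarity is obtained from lake-year Secchi summaries. The study itself used Bayesian hierarchical models that pool information across lakes; the per-lake ordinary least-squares fit and the data below are only illustrative.

    ```python
    import numpy as np

    def percent_trend_per_year(years, secchi_m):
        """Approximate % change per year from an OLS slope on log(Secchi depth)."""
        years = np.asarray(years, dtype=float)
        slope, _ = np.polyfit(years - years.mean(), np.log(secchi_m), 1)
        return 100.0 * (np.exp(slope) - 1.0)

    # Hypothetical lake-year summer mean Secchi depths (m) for one lake.
    years = [1990, 1995, 2000, 2005, 2010]
    secchi = [2.0, 2.1, 2.2, 2.25, 2.4]
    print(f"{percent_trend_per_year(years, secchi):.2f} % per year")
    ```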

  7. Distortions in Distributions of Impact Estimates in Multi-Site Trials: The Central Limit Theorem Is Not Your Friend

    Science.gov (United States)

    May, Henry

    2014-01-01

    Interest in variation in program impacts--How big is it? What might explain it?--has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random or fixed effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…

  8. Operational validation of a multi-period and multi-criteria model conditioning approach for the prediction of rainfall-runoff processes in small forest catchments

    Science.gov (United States)

    Choi, H.; Kim, S.

    2012-12-01

    Most hydrologic models have been used to describe and represent the spatio-temporal variability of hydrological processes at the watershed scale. Although hydrological responses are clearly time varying, optimal values of model parameters are normally treated as time-invariant constants. The recent paper of Choi and Beven (2007) presents a multi-period and multi-criteria model conditioning approach. The approach is based on the equifinality thesis within the Generalised Likelihood Uncertainty Estimation (GLUE) framework. In their application, the behavioural TOPMODEL parameter sets are determined by several performance measures for global (annual) and short (30-day) periods, clustered using a Fuzzy C-means algorithm into 15 types representing different hydrological conditions. Their study shows good performance in the calibration of a rainfall-runoff model in a forest catchment, and also gives strong indications that it is uncommon to find model realizations that are behavioural over all periods and all performance measures, so that a multi-period model conditioning approach may become a new, effective tool for predicting hydrological processes in ungauged catchments. This study is a follow-up to Choi and Beven's (2007) model conditioning approach, testing how effective the approach is for predicting rainfall-runoff responses in ungauged catchments. To achieve this purpose, six small forest catchments were selected from among the several hydrological experimental catchments operated by the Korea Forest Research Institute. In each catchment, long-term hydrological time series covering 10 to 30 years were available. The areas of the selected catchments range from 13.6 to 37.8 ha, and all are covered by coniferous or broad-leaved forests. The selected catchments are located from the southern coastal area to the northern part of South Korea. The bed rocks are Granite gneiss, Granite or
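
    The GLUE-style multi-period conditioning loop can be outlined schematically: sample many parameter sets, score each set with a likelihood measure per evaluation period, and retain only those behavioural in a given period. The toy model, threshold and synthetic data below are illustrative assumptions, not TOPMODEL or the study's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(params, rain):
        a, b = params                        # hypothetical runoff and recession parameters
        return a * rain + b * np.roll(rain, 1)

    def nash_sutcliffe(obs, sim):
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rain = rng.gamma(2.0, 2.0, size=120)
    obs = toy_model((0.6, 0.3), rain) + rng.normal(0, 0.5, size=120)   # synthetic "observations"

    periods = np.array_split(np.arange(120), 4)        # four 30-step sub-periods
    samples = rng.uniform(0.0, 1.0, size=(2000, 2))    # Monte Carlo parameter sets
    behavioural = {i: [] for i in range(len(periods))}

    for params in samples:
        sim = toy_model(params, rain)
        for i, idx in enumerate(periods):
            if nash_sutcliffe(obs[idx], sim[idx]) > 0.7:   # behavioural threshold per period
                behavioural[i].append(params)

    print({i: len(v) for i, v in behavioural.items()})     # behavioural sets per period
    ```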

  9. Small temporal pole encephalocele: A hidden cause of "normal" MRI temporal lobe epilepsy.

    Science.gov (United States)

    Toledano, Rafael; Jiménez-Huete, Adolfo; Campo, Pablo; Poch, Claudia; García-Morales, Irene; Gómez Angulo, Juan Carlos; Coras, Roland; Blümcke, Ingmar; Álvarez-Linera, Juan; Gil-Nagel, Antonio

    2016-05-01

    Small temporal pole encephalocele (STPE) can be the pathologic substrate of epilepsy in a subgroup of patients with noninformative magnetic resonance imaging (MRI). Herein, we analyzed the clinical, neurophysiologic, and radiologic features of the epilepsy found in 22 patients with STPE, and the frequency of STPE in patients with refractory focal epilepsy (RFE). We performed an observational study of all patients with STPE identified at our epilepsy unit from January 2007 to December 2014. Cases were detected through a systematic search of our database of RFE patients evaluated for surgery, and a prospective collection of patients identified at the outpatient clinic. The RFE database was also employed to analyze the frequency of STPE among the different clinical subgroups. We identified 22 patients with STPE (11 women), including 12 (4.0%) of 303 patients from the RFE database, and 10 from the outpatient clinic. The median age was 51.5 years (range 29-75) and the median age at seizure onset was 38.5 years (range 15-73). Typically, 12 (80%) of 15 patients with left STPE reported seizures with impairment of language. Among the RFE cases, STPE were found in 9.6% of patients with temporal lobe epilepsy (TLE), and in 0.5% of those with extra-TLE (p = 0.0001). STPEs were more frequent in TLE patients with an initial MRI study reported as normal (23.3%) than in those with MRI-visible lesions (1.4%; p = 0.0002). Stereo-electroencephalography was performed in four patients, confirming the localization of the epileptogenic zone at the temporal pole with late participation of the hippocampus. Long-term seizure control was achieved in four of five operated patients. STPE can be a hidden cause of TLE in a subgroup of patients with an initial report of "normal" MRI. Early identification of this lesion may help to select patients for presurgical evaluation and tailored resection. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  10. Multi-Point Measurements to Characterize Radiation Belt Electron Precipitation Loss

    Science.gov (United States)

    Blum, L. W.

    2017-12-01

    Multipoint measurements in the inner magnetosphere allow the spatial and temporal evolution of various particle populations and wave modes to be disentangled. To better characterize and quantify radiation belt precipitation loss, we utilize multi-point measurements both to study precipitating electrons directly and to study the potential drivers of this loss process. Magnetically conjugate CubeSat and balloon measurements are combined to estimate the temporal and spatial characteristics of dusk-side precipitation features and quantify loss due to these events. To then understand the drivers of precipitation events, and what determines their spatial structure, we utilize measurements from the dual Van Allen Probes to estimate spatial and temporal scales of various wave modes in the inner magnetosphere, and compare these to precipitation characteristics. The structure, timing, and spatial extent of waves are compared to those of MeV electron precipitation during a few individual events to determine when and where EMIC waves cause radiation belt electron precipitation. Magnetically conjugate measurements provide observational support for the theoretical picture of duskside interaction of EMIC waves and MeV electrons leading to radiation belt loss. Finally, understanding the drivers controlling the spatial scales of wave activity in the inner magnetosphere is critical for uncovering the underlying physics behind the wave generation as well as for better predicting where and when waves will be present. Again using multipoint measurements from the Van Allen Probes, we estimate the spatial and temporal extents and evolution of plasma structures and their gradients in the inner magnetosphere, to better understand the drivers of magnetospheric wave characteristic scales. In particular, we focus on EMIC waves and the plasma parameters important for their growth, namely cold plasma density and cool and warm ion density, anisotropy, and composition.

  11. BAPA Database: Linking landslide occurrence with rainfall in Asturias (Spain)

    Science.gov (United States)

    Valenzuela, Pablo; José Domínguez-Cuesta, María; Jiménez-Sánchez, Montserrat

    2015-04-01

    Asturias is a region in northern Spain with a temperate and humid climate. In this region, slope instability processes are very common and often cause economic losses and, sometimes, human victims. To mitigate the geological risk involved, it is of great interest to predict landslide spatial and temporal occurrence. Some previous investigations have shown the importance of rainfall as a triggering factor. Despite the high incidence of these phenomena in Asturias, there are no databases of recent and current landslides. The BAPA Project (Base de Datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database) aims to create an inventory of slope instabilities which have occurred between 1980 and 2015. The final goal is to study in detail the relationship between rainfall and slope instabilities in Asturias, establishing the precipitation thresholds and soil moisture conditions necessary for instability triggering. This work presents the progress of the database, showing its structure, which is divided into various fields containing spatial, temporal, geomorphological and damage information.

  12. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    Science.gov (United States)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and thereby reduce the processing time of the nearest neighbor search used for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Organizing the database with a KD tree reduced the average search time by 85% compared with the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
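
    The speed-up idea is easy to reproduce with a standard KD tree implementation. The sketch below is not the Gutenberg Algorithm itself: the database, feature dimensionality and k value are hypothetical, and the "prediction" is simply the mean PGA of the nearest stored records.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(42)
    n_records, n_features = 50_000, 9                 # hypothetical filter-bank feature sets
    database = rng.normal(size=(n_records, n_features))
    pga_labels = rng.lognormal(mean=-2.0, sigma=1.0, size=n_records)   # stored PGA values

    tree = cKDTree(database)                          # built once, offline

    def predict_pga(query_features, k=30):
        """Predict peak ground acceleration as the mean over the k nearest records."""
        _, idx = tree.query(query_features, k=k)
        return pga_labels[idx].mean()

    print(predict_pga(rng.normal(size=n_features)))
    ```
    Because the tree prunes most of the database at query time, the per-event search cost grows roughly logarithmically with the number of stored records instead of linearly, which is the source of the reported time savings.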

  13. Assessment and application of national environmental databases and mapping tools at the local level to two community case studies.

    Science.gov (United States)

    Hammond, Davyda; Conlon, Kathryn; Barzyk, Timothy; Chahine, Teresa; Zartarian, Valerie; Schultz, Brad

    2011-03-01

    Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessment of environmental-pollution-related risks. Databases and mapping tools that supply community-level estimates of ambient concentrations of hazardous pollutants, risk, and potential health impacts can provide relevant information for communities to understand, identify, and prioritize potential exposures and risk from multiple sources. An assessment of existing databases and mapping tools was conducted as part of this study to explore the utility of publicly available databases, and three of these databases were selected for use in a community-level GIS mapping application. Queried data from the U.S. EPA's National-Scale Air Toxics Assessment, Air Quality System, and National Emissions Inventory were mapped at the appropriate spatial and temporal resolutions for identifying risks of exposure to air pollutants in two communities. The maps combine monitored and model-simulated pollutant and health risk estimates, along with local survey results, to assist communities with the identification of potential exposure sources and pollution hot spots. Findings from this case study analysis will provide information to advance the development of new tools to assist communities with environmental risk assessments and hazard prioritization. © 2010 Society for Risk Analysis.

  14. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
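
    The two ideas in the abstract, estimating the per-subcarrier channel gain by removing the known pilot modulation and then applying MMSE frequency-domain equalization, can be sketched in numpy. This is a bare illustration for a single OFDM-style block, not the authors' orthogonal MC DS-CDMA scheme; the subcarrier count, SNR and the simple MMSE weight are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    Nc = 64                                       # number of subcarriers
    snr = 10 ** (15 / 10)                         # 15 dB, assumed known at the receiver

    h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)   # 4-tap fading channel
    H = np.fft.fft(h, Nc)                         # true frequency response

    def awgn(n):
        return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr)

    # Pilot block: estimate the channel by removing the known pilot modulation.
    pilot = np.exp(1j * np.pi / 4) * np.ones(Nc)
    H_est = (H * pilot + awgn(Nc)) / pilot

    # MMSE frequency-domain equalization weights from the channel estimate.
    W = np.conj(H_est) / (np.abs(H_est) ** 2 + 1.0 / snr)

    # Data block: QPSK symbols passed through the channel and equalized.
    symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    data = rng.choice(symbols, size=Nc)
    equalized = W * (H * data + awgn(Nc))

    print(np.mean(np.abs(equalized - data) ** 2))  # residual error after equalization
    ```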

  15. Effects of Temporal and Interspecific Variation of Specific Leaf Area on Leaf Area Index Estimation of Temperate Broadleaved Forests in Korea

    Directory of Open Access Journals (Sweden)

    Boram Kwon

    2016-09-01

    Full Text Available This study investigated the effects of interspecific and temporal variation in specific leaf area (SLA, cm2·g−1) on leaf area index (LAI) estimation for three deciduous broadleaved forests (Gwangneung (GN), Taehwa (TH), and Gariwang (GRW)) in Korea with varying ages and tree species composition. In fall of 2014, fallen leaves were periodically collected using litter traps and classified by species. LAI was estimated by obtaining SLAs using four calculation methods (A: including both interspecific and temporal variation in SLA; B: species-specific mean SLA; C: period-specific mean SLA; and D: overall mean), and then multiplying the SLAs by the amount of collected leaves. SLA varied across species in all plots, and the SLAs of upper canopy species were lower than those of lower canopy species. The LAIs calculated using method A, the reference method, were GN 6.09, TH 5.42, and GRW 4.33. LAIs calculated using method B differed by up to 3% from the LAI of method A, but LAIs calculated using methods C and D were overestimated. Therefore, species-specific SLA must be considered for precise LAI estimation in broadleaved forests that include multiple species.
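
    A toy calculation shows why the SLA choice matters: LAI is the leaf dry mass collected per unit ground area multiplied by SLA, so applying one overall mean SLA (method D) instead of species- and period-specific values (method A) shifts the result. The species names, masses and SLA values below are hypothetical, not the study's data.

    ```python
    # Litter-trap collections: (species, period, dry mass in g/m^2, SLA in cm^2/g).
    collections = [
        ("oak",   "early", 120.0, 150.0),
        ("oak",   "late",   80.0, 170.0),
        ("maple", "early",  60.0, 220.0),
        ("maple", "late",   40.0, 240.0),
    ]

    # Method A: species- and period-specific SLA applied to each collection.
    # Units: (g/m^2) * (cm^2/g) = cm^2/m^2, divided by 10,000 to give m^2/m^2 (LAI).
    lai_a = sum(mass * sla for _, _, mass, sla in collections) / 10_000

    # Method D: one overall mean SLA applied to the total leaf mass.
    mean_sla = sum(sla for *_, sla in collections) / len(collections)
    total_mass = sum(mass for _, _, mass, _ in collections)
    lai_d = total_mass * mean_sla / 10_000

    print(f"method A LAI = {lai_a:.2f}, method D LAI = {lai_d:.2f}")   # 5.44 vs 5.85
    ```
    In this toy example the species shedding the most mass has the lowest SLA, so the overall mean SLA overestimates LAI, mirroring the overestimation reported for methods C and D.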

  16. Multi Temporal Interferometry as Tool for Urban Landslide Hazard Assessment

    Science.gov (United States)

    Vicari, A.; Colangelo, G.; Famiglietti, N.; Cecere, G.; Stramondo, S.; Viggiano, D.

    2017-12-01

    Advanced Synthetic Aperture Radar Differential Interferometry (A-DInSAR) techniques are Multi-Temporal Interferometry (MTI) techniques suitable for monitoring deformation phenomena with slow kinematics. A-DInSAR methodologies include both the coherence-based type, such as the Small Baseline Subset (SBAS) (Berardino et al., 2002; Lanari et al., 2004), and Persistent/Permanent Scatterers (PS) (Ferretti et al., 2001). Such techniques are capable of providing wide-area coverage (thousands of km2) and precise (mm-cm resolution), spatially dense information (from hundreds to thousands of measurement points/km2) on ground surface deformations. SBAS and PS have been applied to the town of Stigliano (MT) in the Basilicata Region (Southern Italy), where the social center was destroyed after the reactivation of a known landslide. The comparison of results has shown that these techniques are equivalent in terms of the coherent areas obtained and the displacement patterns, although slightly different velocity values for individual points (-5/-25 mm/y for PS vs. -5/-15 mm/y for SBAS) have been observed. Differences are probably due to the scattering properties of the ground surface (e.g. Lauknes et al., 2010). Furthermore, a robotic total monitoring station (Leica Nova TM50), which measures distances with 0.6 mm resolution, has been installed on the crown of the landslide body. In particular, 20 points corresponding to those identified through the satellite techniques have been chosen, and a sampling interval of 15 minutes has been set. The displacement values obtained are in agreement with the results of the MTI analysis, showing that these techniques could be a useful tool in early-warning situations.

  17. Remote Sensing based multi-temporal observation of North Korea mining activities : A case study of Rakyeon mine

    Science.gov (United States)

    Lim, J. H.; Yu, J.; Koh, S. M.; Lee, G.

    2017-12-01

    Mining is a major industrial sector of North Korea, accounting for a significant portion of North Korean exports. However, due to its veiled political system, details of North Korea's mining activities are rarely known. This study investigated the mining activities of the Rakyeon Au-Ag mine, North Korea, based on remote-sensing multi-temporal observation. To monitor the mining activities, CORONA data acquired in the 1960s and 1970s, SPOT and Landsat data acquired in the 1980s and 1990s, and KOMPSAT-2 data acquired in the 2010s were utilized. The results show that mining at the Rakyeon mine was carried out continuously over the observation period, expanding the tailings areas of the mine. However, the rate of expansion varied between periods, reflecting North Korea's economic and political situation.

  18. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    Science.gov (United States)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
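
    The two per-polygon measures named above can be approximated with basic polygon operations. The sketch below is an approximation using shapely only, not the authors' implementation: the maximum inscribed radius is found as the largest inward buffer that leaves the polygon non-empty, and the minimum circumscribing circle is approximated from the farthest pair of boundary vertices.

    ```python
    from itertools import combinations
    from shapely.geometry import Polygon

    def max_inscribed_radius(poly, tol=0.01):
        """Bisection on the inward buffer distance that still leaves some area."""
        lo = 0.0
        hi = max(poly.bounds[2] - poly.bounds[0], poly.bounds[3] - poly.bounds[1])
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if poly.buffer(-mid).is_empty:
                hi = mid
            else:
                lo = mid
        return lo

    def circumscribing_radius(poly):
        """Half the largest distance between any two exterior vertices (approximation)."""
        pts = list(poly.exterior.coords)
        return max(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in combinations(pts, 2)) / 2.0

    # Hypothetical sidewalk polygon: a 2 m wide, 30 m long strip.
    sidewalk = Polygon([(0, 0), (30, 0), (30, 2), (0, 2)])
    print("inscribed width ~", round(2 * max_inscribed_radius(sidewalk), 2), "m")
    print("circumscribed diameter ~", round(2 * circumscribing_radius(sidewalk), 2), "m")
    ```
    For the elongated strip the inscribed diameter (about 2 m) tracks the usable sidewalk width, while the circumscribing diameter mostly reflects segment length, illustrating why the paper uses both circle representations when deriving a representative width.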

  19. Multi-component optical solitary waves

    DEFF Research Database (Denmark)

    Kivshar, Y. S.; Sukhorukov, A. A.; Ostrovskaya, E. A.

    2000-01-01

    We discuss several novel types of multi-component (temporal and spatial) envelope solitary waves that appear in fiber and waveguide nonlinear optics. In particular, we describe multi-channel solitary waves in bit-parallel-wavelength fiber transmission systems for high-performance computer networks, multi-color parametric spatial solitary waves due to cascaded nonlinearities of quadratic materials, and quasiperiodic envelope solitons due to quasi-phase-matching in Fibonacci optical superlattices. (C) 2000 Elsevier Science B.V. All rights reserved.

  20. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database.

    Science.gov (United States)

    Choi, Joon Yul; Yoo, Tae Keun; Seo, Jeong Gi; Kwak, Jiyong; Um, Terry Taewoong; Rim, Tyler Hyungtaek

    2017-01-01

    Deep learning emerges as a powerful tool for analyzing medical images. Retinal disease detection using computer-aided diagnosis from fundus images has emerged as a new approach. We applied a deep convolutional neural network, implemented in MatConvNet, for automated detection of multiple retinal diseases using fundus photographs from the STructured Analysis of the REtina (STARE) database. The dataset was built by expanding data across 10 categories, including normal retina and nine retinal diseases. The best results were obtained using random forest transfer learning based on the VGG-19 architecture. The classification results depended greatly on the number of categories: as the number of categories increased, the performance of the deep learning models diminished. When all 10 categories were included, we obtained an accuracy of 30.5%, a relative classifier information (RCI) of 0.052, and a Cohen's kappa of 0.224. When only three integrated categories were considered (normal, background diabetic retinopathy, and dry age-related macular degeneration), the multi-categorical classifier showed an accuracy of 72.8%, an RCI of 0.283, and a kappa of 0.577. In addition, several ensemble classifiers enhanced the multi-categorical classification performance: transfer learning combined with an ensemble classifier using a clustering and voting approach gave the best performance on the 10-category problem, with an accuracy of 36.7%, an RCI of 0.053, and a kappa of 0.225. First, owing to the small dataset, the deep learning techniques in this study are not yet effective enough for clinical application, where numerous patients with various retinal disorders present for diagnosis and treatment. Second, we found that transfer learning combined with ensemble classifiers can improve multi-categorical retinal disease classification. Further studies should confirm the effectiveness of these algorithms with large datasets obtained from hospitals.
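
    The transfer-learning pipeline described above (deep features from a pre-trained VGG-19 feeding a random forest) can be sketched with common Python libraries. This is a schematic stand-in, not the authors' MatConvNet code: the images and labels are random placeholders for the STARE data, and the input size, estimator count and other settings are assumptions.

    ```python
    import numpy as np
    from tensorflow.keras.applications import VGG19
    from tensorflow.keras.applications.vgg19 import preprocess_input
    from sklearn.ensemble import RandomForestClassifier

    # Pre-trained VGG-19 as a fixed feature extractor (global average pooled).
    feature_extractor = VGG19(weights="imagenet", include_top=False,
                              pooling="avg", input_shape=(224, 224, 3))

    def extract_features(images):
        """images: float array of shape (n, 224, 224, 3) with 0-255 RGB values."""
        return feature_extractor.predict(preprocess_input(images.copy()), verbose=0)

    rng = np.random.default_rng(7)
    train_imgs = rng.uniform(0, 255, size=(40, 224, 224, 3)).astype("float32")
    train_lbls = rng.integers(0, 10, size=40)      # 10 retinal categories (placeholder labels)
    test_imgs = rng.uniform(0, 255, size=(8, 224, 224, 3)).astype("float32")

    # Random forest classifier trained on the deep features.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(extract_features(train_imgs), train_lbls)
    print(clf.predict(extract_features(test_imgs)))
    ```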