WorldWideScience

Sample records for higher-resolution data-dependent selective

  1. Higher-resolution selective metallization on alumina substrate by laser direct writing and electroless plating

    International Nuclear Information System (INIS)

    Lv, Ming; Liu, Jianguo; Wang, Suhuan; Ai, Jun; Zeng, Xiaoyan

    2016-01-01

    Graphical abstract: - Highlights: • Mechanisms of laser direct writing and electroless plating were studied. • Active seeds in the laser-irradiated zone and the laser-affected zone were found to be different. • A special chemical cleaning step with aqua regia was applied. • Higher-resolution copper patterns on alumina ceramic were obtained conveniently. - Abstract: Fabricating conductive patterns with higher resolution on ceramic boards has been a challenge in recent years. The fabrication of copper patterns on alumina substrates by laser direct writing and electroless copper plating is a low-cost and high-efficiency method. Nevertheless, its low resolution limits further industrial applications in many fields. In this report, the mechanisms of laser direct writing and electroless copper plating were studied. The results indicated that the decomposition products of the precursor PdCl_2 have different chemical states in the laser-irradiated zone (LIZ) and the laser-affected zone (LAZ). This phenomenon was exploited: a special chemical cleaning step with aqua regia solution was used to selectively remove the metallic Pd in the LAZ while keeping the PdO in the LIZ as the only active seeds. As a result, the resolution of the subsequently plated copper patterns was improved significantly. This technique is of great significance for the development of microelectronic devices.

  2. Higher-resolution selective metallization on alumina substrate by laser direct writing and electroless plating

    Science.gov (United States)

    Lv, Ming; Liu, Jianguo; Wang, Suhuan; Ai, Jun; Zeng, Xiaoyan

    2016-03-01

    Fabricating conductive patterns with higher resolution on ceramic boards has been a challenge in recent years. The fabrication of copper patterns on alumina substrates by laser direct writing and electroless copper plating is a low-cost and high-efficiency method. Nevertheless, its low resolution limits further industrial applications in many fields. In this report, the mechanisms of laser direct writing and electroless copper plating were studied. The results indicated that the decomposition products of the precursor PdCl2 have different chemical states in the laser-irradiated zone (LIZ) and the laser-affected zone (LAZ). This phenomenon was exploited: a special chemical cleaning step with aqua regia solution was used to selectively remove the metallic Pd in the LAZ while keeping the PdO in the LIZ as the only active seeds. As a result, the resolution of the subsequently plated copper patterns was improved significantly. This technique is of great significance for the development of microelectronic devices.

  3. MR-CDF: Managing multi-resolution scientific data

    Science.gov (United States)

    Salem, Kenneth

    1993-01-01

    MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.
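
    The record above describes MR-CDF only at the interface level, so the following Python sketch just illustrates the underlying idea of trading resolution for access time: coarse, block-averaged copies of a field are precomputed so that low-resolution reads return immediately while the full-resolution array remains available. Class and method names are hypothetical and are not the actual MR-CDF/CDF API.

    ```python
    import numpy as np

    class MultiResolutionArray:
        """Toy illustration of the MR-CDF idea: keep precomputed coarse versions
        of a 2-D field so that low-resolution reads are fast, while the full
        resolution remains available. Names are hypothetical, not the CDF API."""

        def __init__(self, data, levels=3):
            self.pyramid = [np.asarray(data, dtype=float)]
            for _ in range(levels):
                prev = self.pyramid[-1]
                # 2x2 block averaging; trim odd edges for simplicity
                h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
                coarse = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
                self.pyramid.append(coarse)

        def read(self, level=0):
            """level=0 returns full resolution; higher levels return coarser,
            faster-to-deliver versions."""
            return self.pyramid[min(level, len(self.pyramid) - 1)]

    field = np.random.rand(512, 512)
    mr = MultiResolutionArray(field, levels=3)
    print(mr.read(level=3).shape)   # (64, 64): quick, coarse view
    print(mr.read(level=0).shape)   # (512, 512): full resolution
    ```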

  4. Monitoring crop leaf area index time variation from higher resolution remotely sensed data

    International Nuclear Information System (INIS)

    Jiao, Sihong

    2014-01-01

    The leaf area index (LAI) is significant for research on global climate change and the ecological environment. China's HJ-1 satellite has a revisit cycle of four days, providing CCD data (HJ-1 CCD) with a resolution of 30 m. However, the HJ-1 CCD is incapable of obtaining observations at multiple angles, which is problematic because single-angle observations provide insufficient data for determining the LAI. This article proposes a new method for determining LAI from HJ-1 CCD data. The proposed method uses background knowledge of dynamic land-surface processes extracted from MODerate resolution Imaging Spectroradiometer (MODIS) LAI 1-km resolution data. To handle the uncertainties that arise from using two data sources with different spatial resolutions, the proposed method is implemented in a dynamic Bayesian network scheme that integrates an LAI dynamic process model and a canopy reflectance model with the remotely sensed data. Validation results showed that the coefficient of determination between estimated and measured LAI was 0.791, and the RMSE was 0.61. This method can enhance the accuracy of the retrieval results while retaining the time-series variation characteristics of the vegetation LAI. The results suggest that this algorithm can be widely applied to determining high-resolution leaf area indices from China's HJ-1 satellite data even though information from single-angle observations is insufficient for quantitative application.

  5. High-resolution numerical modeling of mesoscale island wakes and sensitivity to static topographic relief data

    Directory of Open Access Journals (Sweden)

    C. G. Nunalee

    2015-08-01

    Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower-tropospheric flow over and downstream of mountainous islands using the default global 30 s United States Geological Survey terrain height data set (GTOPO30), the Shuttle Radar Topography Mission (SRTM), and the Global Multi-resolution Terrain Elevation Data set (GMTED2010) terrain height data sets. While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.

  6. Effects of soil data resolution on SWAT model stream flow and water quality predictions.

    Science.gov (United States)

    Geza, Mengistu; McCray, John E

    2008-08-01

    The prediction accuracy of agricultural nonpoint source pollution models such as the Soil and Water Assessment Tool (SWAT) depends on how well model input spatial parameters describe the characteristics of the watershed. The objective of this study was to assess the effects of different soil data resolutions on stream flow, sediment, and nutrient predictions when used as input for SWAT. SWAT model predictions were compared for two US Department of Agriculture soil databases with different resolution, namely the State Soil Geographic database (STATSGO) and the Soil Survey Geographic database (SSURGO). The same number of sub-basins was used in the watershed delineation; however, the numbers of hydrologic response units (HRUs) generated when STATSGO and SSURGO soil data were used were 261 and 1301, respectively. SSURGO, with the higher spatial resolution, has 51 unique soil types in the watershed distributed across 1301 HRUs, while STATSGO has only three distributed across 261 HRUs. As a result of its low resolution, STATSGO assigns a single classification to areas that may have different soil types if SSURGO were used; SSURGO included HRUs with soil types that were generalized to one soil group in STATSGO. The difference in the number and size of HRUs also affects the sediment yield parameters (slope and slope length). Thus, as a result of the discrepancies in soil type and HRU size, the predicted stream flow was higher when SSURGO was used than with STATSGO. SSURGO predicted less stream loading than STATSGO in terms of sediment and sediment-attached nutrient components, and vice versa for dissolved nutrients. When compared to mean daily measured flow, STATSGO performed better relative to SSURGO before calibration. SSURGO provided better results after calibration as evaluated by the R(2) value (0.74 compared to 0.61 for STATSGO) and the Nash-Sutcliffe coefficient of efficiency (NSE) values (0.70 and 0.61 for SSURGO and STATSGO, respectively), although both are in the same satisfactory
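
    The record above evaluates the calibrated STATSGO and SSURGO runs with R(2) and the Nash-Sutcliffe coefficient of efficiency; the short Python sketch below shows how these two statistics are commonly computed from paired observed and simulated daily flows. The flow values are placeholders, not data from the study.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def r_squared(obs, sim):
        """Coefficient of determination (squared Pearson correlation)."""
        return np.corrcoef(obs, sim)[0, 1] ** 2

    # Placeholder daily flows (m^3/s); in the study these would be the measured
    # series and the SWAT-simulated series for each soil database setup.
    observed  = np.array([5.2, 6.1, 7.8, 4.3, 3.9, 6.5, 8.2])
    simulated = np.array([5.0, 6.4, 7.1, 4.8, 4.2, 6.0, 7.9])
    print(f"NSE = {nash_sutcliffe(observed, simulated):.2f}, "
          f"R2 = {r_squared(observed, simulated):.2f}")
    ```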

  7. A conformation-dependent stereochemical library improves crystallographic refinement even at atomic resolution

    International Nuclear Information System (INIS)

    Tronrud, Dale E.; Karplus, P. Andrew

    2011-01-01

    A script was created to allow SHELXL to use the new CDL v.1.2 stereochemical library which defines the target values for main-chain bond lengths and angles as a function of the residue’s ϕ/ψ angles. Test refinements using this script show that the refinement behavior of structures at resolutions even better than 1 Å is substantially enhanced by the use of the new conformation-dependent ideal geometry paradigm. To utilize a new conformation-dependent backbone-geometry library (CDL) in protein refinements at atomic resolution, a script was written that creates a restraint file for the SHELXL refinement program. It was found that the use of this library allows models to be created that have a substantially better fit to main-chain bond angles and lengths without degrading their fit to the X-ray data even at resolutions near 1 Å. For models at much higher resolution (∼0.7 Å), the refined model for parts adopting single well occupied positions is largely independent of the restraints used, but these structures still showed much smaller r.m.s.d. residuals when assessed with the CDL. Examination of the refinement tests across a wide resolution range from 2.4 to 0.65 Å revealed consistent behavior supporting the use of the CDL as a next-generation restraint library to improve refinement. CDL restraints can be generated using the service at http://pgd.science.oregonstate.edu/cdl_shelxl/

  8. Resolution analyses for selecting an appropriate airborne electromagnetic (AEM) system

    DEFF Research Database (Denmark)

    Christensen, N.B.; Lawrie, Ken

    2012-01-01

    [...] is necessary and has to be approached in a pragmatic way involving a range of different aspects. In this paper, we concentrate on the resolution analysis perspective and demonstrate that the inversion analysis must be preferred over the derivative analysis because it takes parameter coupling into account, and [...] resolution for a series of models relevant to the survey area by comparing the sum over the data of squares of noise-normalised derivatives. We compare this analysis method with a resolution analysis based on the posterior covariance matrix of an inversion formulation. Both of the above analyses depend [...], furthermore, that the derivative analysis generally overestimates the resolution capability. Finally, we show that impulse response data are to be preferred over step response data for near-surface resolution.
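
    As a rough illustration of the two analyses contrasted in this record, the Python sketch below compares, for a synthetic linearised sensitivity (Jacobian) matrix, a per-parameter sum of squared noise-normalised derivatives with the parameter variances taken from the posterior covariance of a linearised least-squares inversion. The matrix and noise levels are placeholders and the formulation is generic, not the authors' exact implementation.

    ```python
    import numpy as np

    # Synthetic sensitivities (Jacobian) of N data values with respect to M model
    # parameters, plus per-datum noise estimates; all values are placeholders.
    rng = np.random.default_rng(0)
    N, M = 40, 5
    J = rng.normal(size=(N, M))
    noise = 0.05 * np.ones(N)

    # Derivative analysis: sum over the data of squared noise-normalised
    # derivatives for each parameter (larger value = better resolved).
    deriv_measure = np.sum((J / noise[:, None]) ** 2, axis=0)

    # Inversion analysis: posterior covariance of a linearised least-squares
    # inversion; its diagonal accounts for coupling between parameters.
    Cd_inv = np.diag(1.0 / noise ** 2)
    post_cov = np.linalg.inv(J.T @ Cd_inv @ J)
    inv_measure = 1.0 / np.diag(post_cov)

    print("derivative measure:", np.round(deriv_measure, 1))
    print("inversion measure :", np.round(inv_measure, 1))
    # Because the derivative measure ignores parameter coupling, it is never
    # smaller than the inversion measure, i.e. it overestimates resolution.
    ```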

  9. Methodology for Clustering High-Resolution Spatiotemporal Solar Resource Data

    Energy Technology Data Exchange (ETDEWEB)

    Getman, Dan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-09-01

    In this report, we introduce a methodology to achieve multiple levels of spatial resolution reduction of solar resource data, with minimal impact on data variability, for use in energy systems modeling. The selection of an appropriate clustering algorithm, parameter selection including cluster size, methods of temporal data segmentation, and methods of cluster evaluation are explored in the context of a repeatable process. In describing this process, we illustrate the steps in creating a reduced resolution, but still viable, dataset to support energy systems modeling, e.g. capacity expansion or production cost modeling. This process is demonstrated through the use of a solar resource dataset; however, the methods are applicable to other resource data represented through spatiotemporal grids, including wind data. In addition to energy modeling, the techniques demonstrated in this paper can be used in a novel top-down approach to assess renewable resources within many other contexts that leverage variability in resource data but require reduction in spatial resolution to accommodate modeling or computing constraints.
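
    The sketch below illustrates one plausible reading of the clustering step this record describes: grid cells (sites) are clustered on their resource time series with k-means, and each cluster centroid stands in for its members in the reduced-resolution dataset. The synthetic profiles, cluster count and retained-variance check are placeholders, not the report's actual algorithm choices.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic hourly irradiance-like profiles for a small spatial grid
    # (placeholder for a real solar resource dataset): shape (n_sites, n_hours).
    rng = np.random.default_rng(42)
    n_sites, n_hours = 200, 24 * 7
    diurnal = np.clip(np.sin(np.linspace(0, 7 * 2 * np.pi, n_hours)), 0, None)
    profiles = diurnal * rng.uniform(600, 1000, size=(n_sites, 1)) \
               + rng.normal(0, 20, size=(n_sites, n_hours))

    # Cluster sites with similar temporal behaviour; each centroid becomes one
    # "representative site" in the spatially reduced dataset.
    k = 10
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
    representative_profiles = km.cluster_centers_            # (k, n_hours)
    weights = np.bincount(km.labels_, minlength=k)           # sites per cluster

    # Crude check of how much variability the reduction retains.
    retained = 1 - km.inertia_ / np.sum((profiles - profiles.mean(axis=0)) ** 2)
    print(f"{n_sites} sites -> {k} clusters, variance retained ~ {retained:.2f}")
    ```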

  10. HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation

    Science.gov (United States)

    Guo, Shuhang; Wang, Jian; Wang, Tong

    2017-09-01

    To address the problem of object conflict recognition and resolution in hybrid distributed data stream aggregation, a distributed data stream object coherence solution is proposed. First, a framework for object coherence conflict recognition and resolution, named HWDA, was defined. Second, an object coherence recognition technique was proposed based on formal description logic and hierarchical dependency relationships between logic rules. Third, a conflict traversal recognition algorithm was proposed based on the defined dependency graph. Next, a conflict resolution technique based on resolution pattern matching was proposed, including the definition of three types of conflict, conflict resolution matching patterns, and an arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of the conflict recognition and resolution technology of HWDA.

  11. Long-Term, High-Resolution Survey of Atmospheric Aerosols over Egypt with NASA’s MODIS Data

    Directory of Open Access Journals (Sweden)

    Mohammed Shokr

    2017-10-01

    A decadal survey of atmospheric aerosols over Egypt and selected cities and regions is presented using daily aerosol optical depth (AOD) data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) at 550 nm wavelength onboard the Aqua satellite. We explore the AOD spatio-temporal variations over Egypt during a 12-year record (2003 to 2014) using the MODIS high-resolution (10 km) Level 2 data product. Five cities and two geographic regions that feature different landscape and human activities were selected for detailed analysis. For most of the examined areas, AOD is found to be most frequent in the 0.2–0.3 range, and the highest mean AOD was found to be over Cairo, Alexandria, and the Nile Delta region. Severe events are identified based on AOD higher than a selected threshold. Most of these events are engendered by sand and dust storms that originate from the Western Desert during January–April. Spatial analysis indicates that they cover the Nile Delta region, including the cities of Cairo and Alexandria, on the same day. Examination of the spatial gradient of AOD along the four cardinal directions originating from the city's center reveals seasonally dependent gradients in some cases. The gradients have been linked to locations of industrial activity. No trend of AOD has been observed in the studied areas during the study period, though data from Cairo and Asyut reveal a slight linear increase of AOD. Considering Cairo is commonly perceived as a city of poor air quality, the results show that local events are fairly constrained. The study highlights spatial and seasonal distributions of AOD and links them to geographic and climatic conditions across the country.

  12. Investigation of spatial resolution dependent variability in transcutaneous oxygen saturation using point spectroscopy system

    Science.gov (United States)

    Philimon, Sheena P.; Huong, Audrey K. C.; Ngu, Xavier T. I.

    2017-08-01

    This paper aims to investigate the variation in one’s percent mean transcutaneous oxygen saturation (StO2) with differences in spatial resolution of data. This work required the knowledge of extinction coefficient of hemoglobin derivatives in the wavelength range of 520 - 600 nm to solve for the StO2 value via an iterative fitting procedure. A pilot study was conducted on three healthy subjects with spectroscopic data collected from their right index finger at different arbitrarily selected distances. The StO2 value estimated by Extended Modified Lambert Beer (EMLB) model revealed a higher mean StO2 of 91.1 ± 1.3% at a proximity distance of 30 mm compared to 60.83 ± 2.8% at 200 mm. The results showed a high correlation between data spatial resolution and StO2 value, and revealed a decrease in StO2 value as the sampling distance increased. The preliminary findings from this study contribute to the knowledge of the appropriate distance range for consistent and high repeatability measurement of skin oxygenation.

  13. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of autism spectrum disorder (ASD) using high-resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting-state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  14. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  15. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France)]; Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)]

    1992-08-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  16. Extended Opacity Tables with Higher Temperature-Density-Frequency Resolution

    Science.gov (United States)

    Schillaci, Mark; Orban, Chris; Delahaye, Franck; Pinsonneault, Marc; Nahar, Sultana; Pradhan, Anil

    2015-05-01

    Theoretical models for plasma opacities underpin our understanding of radiation transport in many different astrophysical objects. These opacity models are also relevant to HEDP experiments such as ignition-scale experiments at NIF. We present a significantly expanded set of opacity data from the widely utilized Opacity Project and make these higher-resolution data publicly available through OSU's portal with dropbox.com. This expanded data set is used to assess how accurate the interpolation of opacity data in the temperature-density-frequency dimensions must be in order to adequately model the properties of most stellar types. These efforts are the beginning of a larger project to improve the theoretical opacity models in light of experimental results at the Sandia Z-pinch showing that the measured opacity of iron disagrees strongly with all current models.

  17. Three-dimensional inversion recovery manganese-enhanced MRI of mouse brain using super-resolution reconstruction to visualize nuclei involved in higher brain function.

    Science.gov (United States)

    Poole, Dana S; Plenge, Esben; Poot, Dirk H J; Lakke, Egbert A J F; Niessen, Wiro J; Meijering, Erik; van der Weerd, Louise

    2014-07-01

    The visualization of activity in mouse brain using inversion recovery spin echo (IR-SE) manganese-enhanced MRI (MEMRI) provides unique contrast, but suffers from poor resolution in the slice-encoding direction. Super-resolution reconstruction (SRR) is a resolution-enhancing post-processing technique in which multiple low-resolution slice stacks are combined into a single volume of high isotropic resolution using computational methods. In this study, we investigated, first, whether SRR can improve the three-dimensional resolution of IR-SE MEMRI in the slice selection direction, whilst maintaining or improving the contrast-to-noise ratio of the two-dimensional slice stacks. Second, the contrast-to-noise ratio of SRR IR-SE MEMRI was compared with a conventional three-dimensional gradient echo (GE) acquisition. Quantitative experiments were performed on a phantom containing compartments of various manganese concentrations. The results showed that, with comparable scan times, the signal-to-noise ratio of three-dimensional GE acquisition is higher than that of SRR IR-SE MEMRI. However, the contrast-to-noise ratio between different compartments can be superior with SRR IR-SE MEMRI, depending on the chosen inversion time. In vivo experiments were performed in mice receiving manganese using an implanted osmotic pump. The results showed that SRR works well as a resolution-enhancing technique in IR-SE MEMRI experiments. In addition, the SRR image also shows a number of brain structures that are more clearly discernible from the surrounding tissues than in three-dimensional GE acquisition, including a number of nuclei with specific higher brain functions, such as memory, stress, anxiety and reward behavior. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Cerebral Cortex Regions Selectively Vulnerable to Radiation Dose-Dependent Atrophy

    Energy Technology Data Exchange (ETDEWEB)

    Seibert, Tyler M.; Karunamuni, Roshan; Kaifi, Samar; Burkeen, Jeffrey; Connor, Michael [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States); Krishnan, Anitha Priya; White, Nathan S.; Farid, Nikdokht; Bartsch, Hauke [Department of Radiology, University of California, San Diego, La Jolla, California (United States); Murzin, Vyacheslav [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States); Nguyen, Tanya T. [Department of Psychiatry, University of California, San Diego, La Jolla, California (United States); Moiseenko, Vitali [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States); Brewer, James B. [Department of Radiology, University of California, San Diego, La Jolla, California (United States); Department of Neurosciences, University of California, San Diego, La Jolla, California (United States); McDonald, Carrie R. [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States); Department of Psychiatry, University of California, San Diego, La Jolla, California (United States); Dale, Anders M. [Department of Radiology, University of California, San Diego, La Jolla, California (United States); Department of Psychiatry, University of California, San Diego, La Jolla, California (United States); Department of Neurosciences, University of California, San Diego, La Jolla, California (United States); Hattangadi-Gluth, Jona A., E-mail: jhattangadi@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States)

    2017-04-01

    Purpose and Objectives: Neurologic deficits after brain radiation therapy (RT) typically involve decline in higher-order cognitive functions such as attention and memory rather than sensory defects or paralysis. We sought to determine whether areas of the cortex critical to cognition are selectively vulnerable to radiation dose-dependent atrophy. Methods and Materials: We measured change in cortical thickness in 54 primary brain tumor patients who underwent fractionated, partial brain RT. The study patients underwent high-resolution, volumetric magnetic resonance imaging (T1-weighted; T2 fluid-attenuated inversion recovery, FLAIR) before RT and 1 year afterward. Semiautomated software was used to segment anatomic regions of the cerebral cortex for each patient. Cortical thickness was measured for each region before RT and 1 year afterward. Two higher-order cortical regions of interest (ROIs) were tested for association between radiation dose and cortical thinning: entorhinal (memory) and inferior parietal (attention/memory). For comparison, 2 primary cortex ROIs were also tested: pericalcarine (vision) and paracentral lobule (somatosensory/motor). Linear mixed-effects analyses were used to test all other cortical regions for significant radiation dose-dependent thickness change. Statistical significance was set at α = 0.05 using 2-tailed tests. Results: Cortical atrophy was significantly associated with radiation dose in the entorhinal (P=.01) and inferior parietal ROIs (P=.02). By contrast, no significant radiation dose-dependent effect was found in the primary cortex ROIs (pericalcarine and paracentral lobule). In the whole-cortex analysis, 9 regions showed significant radiation dose-dependent atrophy, including areas responsible for memory, attention, and executive function (P≤.002). Conclusions: Areas of cerebral cortex important for higher-order cognition may be most vulnerable to radiation-related atrophy. This is consistent with clinical observations

  19. Cerebral Cortex Regions Selectively Vulnerable to Radiation Dose-Dependent Atrophy

    International Nuclear Information System (INIS)

    Seibert, Tyler M.; Karunamuni, Roshan; Kaifi, Samar; Burkeen, Jeffrey; Connor, Michael; Krishnan, Anitha Priya; White, Nathan S.; Farid, Nikdokht; Bartsch, Hauke; Murzin, Vyacheslav; Nguyen, Tanya T.; Moiseenko, Vitali; Brewer, James B.; McDonald, Carrie R.; Dale, Anders M.; Hattangadi-Gluth, Jona A.

    2017-01-01

    Purpose and Objectives: Neurologic deficits after brain radiation therapy (RT) typically involve decline in higher-order cognitive functions such as attention and memory rather than sensory defects or paralysis. We sought to determine whether areas of the cortex critical to cognition are selectively vulnerable to radiation dose-dependent atrophy. Methods and Materials: We measured change in cortical thickness in 54 primary brain tumor patients who underwent fractionated, partial brain RT. The study patients underwent high-resolution, volumetric magnetic resonance imaging (T1-weighted; T2 fluid-attenuated inversion recovery, FLAIR) before RT and 1 year afterward. Semiautomated software was used to segment anatomic regions of the cerebral cortex for each patient. Cortical thickness was measured for each region before RT and 1 year afterward. Two higher-order cortical regions of interest (ROIs) were tested for association between radiation dose and cortical thinning: entorhinal (memory) and inferior parietal (attention/memory). For comparison, 2 primary cortex ROIs were also tested: pericalcarine (vision) and paracentral lobule (somatosensory/motor). Linear mixed-effects analyses were used to test all other cortical regions for significant radiation dose-dependent thickness change. Statistical significance was set at α = 0.05 using 2-tailed tests. Results: Cortical atrophy was significantly associated with radiation dose in the entorhinal (P=.01) and inferior parietal ROIs (P=.02). By contrast, no significant radiation dose-dependent effect was found in the primary cortex ROIs (pericalcarine and paracentral lobule). In the whole-cortex analysis, 9 regions showed significant radiation dose-dependent atrophy, including areas responsible for memory, attention, and executive function (P≤.002). Conclusions: Areas of cerebral cortex important for higher-order cognition may be most vulnerable to radiation-related atrophy. This is consistent with clinical observations

  20. Time Resolution Dependence of Information Measures for Spiking Neurons: Scaling and Universality

    Directory of Open Access Journals (Sweden)

    James P Crutchfield

    2015-08-01

    The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval (ISI) statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
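
    A minimal sketch of the kind of time-resolution dependence discussed in this record: a spike train is binned at several resolutions τ and a naive plug-in estimate of the entropy rate (a difference of block entropies) is computed for each. The Poisson spike train, word length and estimator are simplifying placeholders rather than the authors' methods.

    ```python
    import numpy as np
    from collections import Counter

    def block_entropy(seq, L):
        """Plug-in Shannon entropy (bits) of length-L words in a binary sequence."""
        words = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
        p = np.array(list(words.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log2(p))

    def entropy_rate(spike_times, tau, L=8):
        """Naive tau-dependent entropy-rate estimate: bin spikes at resolution tau
        and take H(L) - H(L-1) of the resulting binary word statistics."""
        bins = np.arange(0.0, spike_times.max() + tau, tau)
        binary = (np.histogram(spike_times, bins=bins)[0] > 0).astype(int)
        return block_entropy(binary, L) - block_entropy(binary, L - 1)

    # Placeholder spike train: a Poisson process (rate ~ 20 Hz, ~200 s) standing
    # in for the output of a noise-driven integrate-and-fire model.
    rng = np.random.default_rng(1)
    spikes = np.cumsum(rng.exponential(1 / 20.0, size=4000))

    for tau in (0.001, 0.005, 0.02):   # seconds per bin
        print(f"tau = {tau * 1000:4.0f} ms : entropy rate ~ "
              f"{entropy_rate(spikes, tau):.3f} bits/bin")
    ```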

  1. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    Science.gov (United States)

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTED®) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc

  2. Super-resolution biomolecular crystallography with low-resolution data.

    Science.gov (United States)

    Schröder, Gunnar F; Levitt, Michael; Brunger, Axel T

    2010-04-22

    X-ray diffraction plays a pivotal role in the understanding of biological systems by revealing atomic structures of proteins, nucleic acids and their complexes, with much recent interest in very large assemblies like the ribosome. As crystals of such large assemblies often diffract weakly (resolution worse than 4 Å), we need methods that work at such low resolution. In macromolecular assemblies, some of the components may be known at high resolution, whereas others are unknown: current refinement methods fail as they require a high-resolution starting structure for the entire complex. Determining the structure of such complexes, which are often of key biological importance, should be possible in principle as the number of independent diffraction intensities at a resolution better than 5 Å generally exceeds the number of degrees of freedom. Here we introduce a method that adds specific information from known homologous structures but allows global and local deformations of these homology models. Our approach uses the observation that local protein structure tends to be conserved as sequence and function evolve. Cross-validation with R(free) (the free R-factor) determines the optimum deformation and influence of the homology model. For test cases at 3.5-5 Å resolution with known structures at high resolution, our method gives significant improvements over conventional refinement in the model as monitored by coordinate accuracy, the definition of secondary structure and the quality of electron density maps. For re-refinements of a representative set of 19 low-resolution crystal structures from the Protein Data Bank, we find similar improvements. Thus, a structure derived from low-resolution diffraction data can have quality similar to a high-resolution structure. Our method is applicable to the study of weakly diffracting crystals using X-ray micro-diffraction as well as data from new X-ray light sources. Use of homology information is not restricted to X

  3. Visualisation of very high resolution Martian topographic data and its application on landing site selection and rover route navigation

    Science.gov (United States)

    Kim, J.; Lin, S.; Hong, J.; Park, D.; Yoon, S.; Kim, Y.

    2010-12-01

    High-resolution satellite imagery acquired from orbiters can provide detailed topographic information and is therefore recognised as an important tool for investigating planetary and terrestrial topography. The heritage of in-orbit high-resolution imaging technology is now implemented in a series of Martian missions, such as HiRISE (High Resolution Imaging Science Experiment) and CTX (Context Camera) onboard the MRO (Mars Reconnaissance Orbiter). In order to fully utilise the data derived from imaging systems carried on various Mars orbiters, generalised algorithms for image processing and photogrammetric Mars DTM extraction were developed and implemented by Kim and Muller (2009), in which a non-rigorous sensor model and hierarchical geomatics control were employed. Due to the successful "from medium to high" control strategy performed during processing, stable horizontal and vertical photogrammetric accuracy of the resultant Mars DTM was achievable when compared with the MOLA (Mars Orbiter Laser Altimeter) DTM. Recently, the algorithms developed in Kim and Muller (2009) were further updated by employing an advanced image matcher and an improved sensor model. As the photogrammetric quality of the updated topographic products has been verified and the spatial resolution can reach the sub-metre scale, they are of great value for Martian rover landing site selection and rover route navigation. For this purpose, the DTMs and ortho-rectified imagery obtained from CTX and HiRISE covering potential future rover and existing MER (Mars Exploration Rover) landing sites were first processed. For landing site selection, engineering constraints such as slope and surface roughness were computed from the DTMs. In addition, the combination of virtual topography and the estimated rover location was able to produce a sophisticated environment simulation of the rover's landing site. Regarding rover navigation, the orbital DTMs and the images taken from cameras

  4. Oxidative burst-dependent NETosis is implicated in the resolution of necrosis-associated sterile inflammation

    Directory of Open Access Journals (Sweden)

    Mona Helena Biermann

    2016-12-01

    Necrosis is associated with a profound inflammatory response. The regulation of necrosis-associated inflammation, particularly the mechanisms responsible for resolution of inflammation, is incompletely characterized. Nanoparticles are known to induce plasma membrane damage and necrosis followed by sterile inflammation. We observed that injection of metabolically inert nanodiamonds resulted in paw edema in WT and Ncf1** mice. However, while inflammation quickly resolved in WT mice, it persisted over several weeks in Ncf1** mice, indicating a failure of resolution of inflammation. Mechanistically, NOX2-dependent reactive oxygen species (ROS) production and formation of neutrophil extracellular traps (NETs) were essential for the resolution of necrosis-induced inflammation. Hence, by evaluating the fate of the particles at the site of inflammation, we observed that Ncf1** mice deficient in NADPH-dependent ROS failed to generate granulation tissue and were therefore unable to trap the nanodiamonds. These data suggest that NOX2-dependent NETosis is crucial for preventing the chronification of the inflammatory response to tissue necrosis by forming NETosis-dependent barriers between the necrotic and the healthy surrounding tissue.

  5. Calibrated, Enhanced-Resolution Brightness Temperature Earth System Data Record: A New Era for Gridded Passive Microwave Data

    Science.gov (United States)

    Hardman, M.; Brodzik, M. J.; Long, D. G.

    2017-12-01

    Since 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Until recently, the available global gridded passive microwave data sets had not been produced consistently: various projections (equal-area, polar stereographic) and a number of different gridding techniques were used, along with varying temporal sampling and a mix of Level 2 source data versions. In addition, not all data from all sensors had been processed completely, nor had they been processed in any one consistent way. Furthermore, the original gridding techniques were relatively primitive and were produced on 25 km grids using the original EASE-Grid definition, which is not easily accommodated in modern software packages. As part of NASA MEaSUREs, we have re-processed all data from the SMMR, SSM/I-SSMIS and AMSR-E instruments, using the most mature Level 2 data. The Calibrated, Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) gridded data are now available from the NSIDC DAAC. The data are distributed as netCDF files that comply with the CF-1.6 and ACDD-1.3 conventions. The data have been produced on EASE-Grid 2.0 projections at a smoothed 25 km resolution and at spatially enhanced resolutions, up to 3.125 km depending on channel frequency, using the radiometer version of the Scatterometer Image Reconstruction (rSIR) method. We expect this newly produced data set to enable scientists to better analyze trends in coastal regions, marginal ice zones and mountainous terrain that were not possible with the previous gridded passive microwave data. The use of the EASE-Grid 2.0 definition and netCDF-CF formatting allows users to extract compliant geotiff images and

  6. Analysis strategies for high-resolution UHF-fMRI data.

    Science.gov (United States)

    Polimeni, Jonathan R; Renvall, Ville; Zaretskaya, Natalia; Fischl, Bruce

    2018-03-01

    Functional MRI (fMRI) benefits from both increased sensitivity and specificity with increasing magnetic field strength, making it a key application for Ultra-High Field (UHF) MRI scanners. Most UHF-fMRI studies utilize the dramatic increases in sensitivity and specificity to acquire high-resolution data reaching sub-millimeter scales, which enable new classes of experiments to probe the functional organization of the human brain. This review article surveys advanced data analysis strategies developed for high-resolution fMRI at UHF. These include strategies designed to mitigate distortion and artifacts associated with higher fields in ways that attempt to preserve spatial resolution of the fMRI data, as well as recently introduced analysis techniques that are enabled by these extremely high-resolution data. Particular focus is placed on anatomically-informed analyses, including cortical surface-based analysis, which are powerful techniques that can guide each step of the analysis from preprocessing to statistical analysis to interpretation and visualization. New intracortical analysis techniques for laminar and columnar fMRI are also reviewed and discussed. Prospects for single-subject individualized analyses are also presented and discussed. Altogether, there are both specific challenges and opportunities presented by UHF-fMRI, and the use of proper analysis strategies can help these valuable data reach their full potential. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Sediment delivery estimates in water quality models altered by resolution and source of topographic data.

    Science.gov (United States)

    Beeson, Peter C; Sadeghi, Ali M; Lang, Megan W; Tomer, Mark D; Daughtry, Craig S T

    2014-01-01

    Moderate-resolution (30-m) digital elevation models (DEMs) are normally used to estimate slope for the parameterization of non-point source, process-based water quality models. These models, such as the Soil and Water Assessment Tool (SWAT), use the Universal Soil Loss Equation (USLE) and Modified USLE to estimate sediment loss. The slope length and steepness factor, a critical parameter in USLE, significantly affects sediment loss estimates. Depending on slope range, a twofold difference in slope estimation potentially results in as little as 50% change or as much as 250% change in the LS factor and subsequent sediment estimation. Recently, the availability of much finer-resolution (∼3 m) DEMs derived from Light Detection and Ranging (LiDAR) data has increased. However, the use of these data may not always be appropriate because slope values derived from fine spatial resolution DEMs are usually significantly higher than slopes derived from coarser DEMs. This increased slope results in considerable variability in modeled sediment output. This paper addresses the implications of parameterizing models using slope values calculated from DEMs with different spatial resolutions (90, 30, 10, and 3 m) and sources. Overall, we observed over a 2.5-fold increase in slope when using a 3-m instead of a 90-m DEM, which increased modeled soil loss using the USLE calculation by 130%. Care should be taken when using LiDAR-derived DEMs to parameterize water quality models because doing so can result in significantly higher slopes, which considerably alter modeled sediment loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
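
    To make the resolution sensitivity concrete, the sketch below derives mean slope from a synthetic fine-resolution DEM and from block-averaged (coarsened) versions of it, then evaluates a common Wischmeier-Smith-style LS expression at each resolution. The synthetic DEM, slope length and exponent are illustrative assumptions, not the SWAT/USLE parameterization used in the paper.

    ```python
    import numpy as np

    def mean_slope_pct(dem, cell):
        """Average terrain slope (%) from central differences on a DEM grid."""
        dz_dy, dz_dx = np.gradient(dem, cell)
        return 100.0 * np.mean(np.hypot(dz_dx, dz_dy))

    def usle_ls(slope_percent, slope_len=50.0, m=0.5):
        """One common USLE LS form (Wischmeier-Smith style), used here only to
        illustrate how LS responds to the slope estimate."""
        s = np.sin(np.arctan(slope_percent / 100.0))
        return (slope_len / 22.13) ** m * (65.41 * s ** 2 + 4.56 * s + 0.065)

    def coarsen(dem, factor):
        """Block-average a fine DEM to mimic a coarser-resolution product."""
        h = (dem.shape[0] // factor) * factor
        w = (dem.shape[1] // factor) * factor
        return dem[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    # Synthetic 3 m "LiDAR-like" DEM with small-scale relief (placeholder data).
    rng = np.random.default_rng(7)
    col, row = np.meshgrid(np.arange(300), np.arange(300))
    dem_3m = 0.06 * col + 2.0 * np.sin(col / 5.0) + rng.normal(0, 0.3, col.shape)

    for cell, factor in [(3, 1), (30, 10), (90, 30)]:
        dem = dem_3m if factor == 1 else coarsen(dem_3m, factor)
        s = mean_slope_pct(dem, cell)
        print(f"{cell:3d} m DEM: mean slope {s:5.2f}%  ->  LS ~ {usle_ls(s):.2f}")
    ```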

  8. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)
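
    The sketch below illustrates the basic relationship this record builds on: for irregularly spaced Fourier samples, the point spread function of a weighted adjoint (backprojection) is the weighted sum of complex exponentials at the sample locations, so the choice of data weights directly shapes mainlobe and sidelobe behaviour. The sample locations and the simple spacing-based weights are placeholders, not the optimized weights proposed in the paper.

    ```python
    import numpy as np

    # Irregular 1-D Fourier sample locations (placeholder for SAR/MRI k-space).
    rng = np.random.default_rng(3)
    k = np.sort(rng.uniform(-50.0, 50.0, size=200))      # cycles per unit length

    def psf(x, k, w):
        """PSF of a weighted adjoint: PSF(x) = sum_j w_j * exp(2*pi*i*k_j*x)."""
        return (w[None, :] * np.exp(2j * np.pi * np.outer(x, k))).sum(axis=1)

    x = np.linspace(-0.5, 0.5, 1001)
    w_uniform = np.ones_like(k) / k.size
    # Simple density-compensation weights proportional to local sample spacing;
    # one of many possible strategies, not the paper's optimized weights.
    spacing = np.gradient(k)
    w_density = spacing / spacing.sum()

    for name, w in [("uniform", w_uniform), ("density-compensated", w_density)]:
        p = np.abs(psf(x, k, w))
        sidelobe_ratio = p[np.abs(x) > 0.05].max() / p.max()
        print(f"{name:20s}: peak sidelobe / mainlobe = {sidelobe_ratio:.2f}")
    ```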

  9. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    Energy Technology Data Exchange (ETDEWEB)

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein, termed "mDn", is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, uses the LiDAR data points within each sector to determine an average slope, and selects the sector with the greatest downward slope to determine the direction of flow, as sketched below. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
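
    The following Python sketch mimics the sector logic described above for a single flow-routing step: LiDAR points near the current location are grouped by azimuthal sector, each sector's mean slope is computed, and the step proceeds into the sector with the steepest average descent. Parameter values, the stepping rule and all names are illustrative guesses, not the published mDn algorithm.

    ```python
    import numpy as np

    def mdn_step(points, start, radius=5.0, n_sectors=8):
        """One flow-routing step in the spirit of the sector idea: among LiDAR
        points (N x 3 array of x, y, z) within `radius` of `start`, group by
        azimuthal sector, average each sector's slope, and step into the sector
        with the steepest average descent. Purely illustrative."""
        dx = points[:, 0] - start[0]
        dy = points[:, 1] - start[1]
        dz = points[:, 2] - start[2]
        dist = np.hypot(dx, dy)
        near = (dist > 0) & (dist <= radius)
        if not near.any():
            return None
        azimuth = np.arctan2(dy[near], dx[near])
        slope = dz[near] / dist[near]                      # negative = downhill
        sector = ((azimuth + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
        mean_slope = np.array([slope[sector == s].mean() if (sector == s).any()
                               else np.inf for s in range(n_sectors)])
        chosen = sector == np.argmin(mean_slope)           # steepest mean descent
        # Step to the lowest point in the chosen sector (one of several options).
        idx = np.flatnonzero(near)[chosen][np.argmin(points[near][chosen][:, 2])]
        return points[idx]

    # Tiny synthetic point cloud sloping down toward +x (placeholder data).
    rng = np.random.default_rng(11)
    xy = rng.uniform(0, 20, size=(2000, 2))
    z = 100.0 - 0.5 * xy[:, 0] + rng.normal(0, 0.05, 2000)
    cloud = np.column_stack([xy, z])
    print(mdn_step(cloud, start=np.array([2.0, 10.0, 99.0])))
    ```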

  10. Higher spin resolution of a toy big bang

    Science.gov (United States)

    Krishnan, Chethan; Roy, Shubho

    2013-08-01

    Diffeomorphisms preserve spacetime singularities, whereas higher spin symmetries need not. Since three-dimensional de Sitter space has quotients that have big-bang/big-crunch singularities and since dS3-gravity can be written as an SL(2,C) Chern-Simons theory, we investigate SL(3,C) Chern-Simons theory as a higher-spin context in which these singularities might get resolved. As in the case of higher spin black holes in AdS3, the solutions are invariantly characterized by their holonomies. We show that the dS3 quotient singularity can be desingularized by an SL(3,C) gauge transformation that preserves the holonomy: this is a higher-spin resolution of the cosmological singularity. Our work deals exclusively with the bulk theory and is independent of the subtleties involved in defining a CFT2 dual to dS3 in the sense of dS/CFT.

  11. Computed tomography with selectable image resolution

    International Nuclear Information System (INIS)

    Dibianca, F.A.; Dallapiazza, D.G.

    1981-01-01

    A computed tomography system x-ray detector has a central group of half-width detector elements and groups of full-width elements on each side of the central group. To obtain x-ray attenuation data for whole body layers, the half-width elements are switched effectively into paralleled pairs so all elements act like full-width elements and an image of normal resolution is obtained. For narrower head layers, the elements in the central group are used as half-width elements so resolution which is twice as great as normal is obtained. The central group is also used in the half-width mode and the outside groups are used in the full-width mode to obtain a high resolution image of a body zone within a full body layer. In one embodiment data signals from the detector are switched by electronic multiplexing and in another embodiment a processor chooses the signals for the various kinds of images that are to be reconstructed. (author)
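
    A toy sketch of the pairing idea in this record: readings from half-width detector elements are either summed in adjacent pairs, so that each pair behaves like one full-width element (normal resolution, whole-body mode), or used individually for twice the sampling density over the central group (head or zoomed mode). The function and values are purely illustrative.

    ```python
    import numpy as np

    def detector_profile(half_width_counts, high_resolution=False):
        """Combine half-width detector element readings. With
        high_resolution=False, adjacent elements are paired (summed) so each pair
        acts like a full-width element; with high_resolution=True the elements
        are kept separate, doubling the sampling density. Illustrative only."""
        counts = np.asarray(half_width_counts, dtype=float)
        if high_resolution:
            return counts
        return counts[0::2] + counts[1::2]    # electronic pairing of neighbours

    central = np.arange(16.0)                 # 16 half-width central elements
    print(detector_profile(central).shape)           # (8,)  normal resolution
    print(detector_profile(central, True).shape)     # (16,) doubled resolution
    ```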

  12. Spatial resolution dependence on spectral frequency in human speech cortex electrocorticography

    Science.gov (United States)

    Muller, Leah; Hamilton, Liberty S.; Edwards, Erik; Bouchard, Kristofer E.; Chang, Edward F.

    2016-10-01

    Objective. Electrocorticography (ECoG) has become an important tool in human neuroscience and has tremendous potential for emerging applications in neural interface technology. Electrode array design parameters are outstanding issues for both research and clinical applications, and these parameters depend critically on the nature of the neural signals to be recorded. Here, we investigate the functional spatial resolution of neural signals recorded at the human cortical surface. We empirically derive spatial spread functions to quantify the shared neural activity for each frequency band of the electrocorticogram. Approach. Five subjects with high-density (4 mm center-to-center spacing) ECoG grid implants participated in speech perception and production tasks while neural activity was recorded from the speech cortex, including superior temporal gyrus, precentral gyrus, and postcentral gyrus. The cortical surface field potential was decomposed into traditional EEG frequency bands. Signal similarity between electrode pairs for each frequency band was quantified using a Pearson correlation coefficient. Main results. The correlation of neural activity between electrode pairs was inversely related to the distance between the electrodes; this relationship was used to quantify spatial falloff functions for cortical subdomains. As expected, lower frequencies remained correlated over larger distances than higher frequencies. However, both the envelope and phase of gamma and high gamma frequencies (30-150 Hz) are largely uncorrelated (<90%) at 4 mm, the smallest spacing of the high-density arrays. Thus, ECoG arrays smaller than 4 mm have significant promise for increasing signal resolution at high frequencies, whereas less additional gain is achieved for lower frequencies. Significance. Our findings quantitatively demonstrate the dependence of ECoG spatial resolution on the neural frequency of interest. We demonstrate that this relationship is consistent across patients and
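
    The sketch below reproduces the style of analysis this record describes, on synthetic data: channels on a regularly spaced line share a spatially smoothed component, and the Pearson correlation between every electrode pair is summarized as a function of separation, yielding a spatial falloff curve. The smoothing scale and signals are placeholders, not patient recordings.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.ndimage import gaussian_filter1d

    # Synthetic "electrode" signals on a line with 4 mm pitch: smoothing white
    # noise across the channel axis gives nearby channels shared activity
    # (placeholder stand-in for a band-passed ECoG array).
    rng = np.random.default_rng(5)
    n_ch, n_samp, pitch_mm = 16, 5000, 4.0
    signals = gaussian_filter1d(rng.normal(size=(n_ch, n_samp)), sigma=1.5, axis=0)

    # Pearson correlation for every electrode pair, grouped by separation.
    falloff = {}
    for i, j in combinations(range(n_ch), 2):
        d = abs(i - j) * pitch_mm
        r = np.corrcoef(signals[i], signals[j])[0, 1]
        falloff.setdefault(d, []).append(r)

    for d in sorted(falloff)[:5]:
        print(f"{d:5.1f} mm separation: mean r = {np.mean(falloff[d]):.2f}")
    ```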

  13. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Ying-Xu [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); Mjøs, Svein Are, E-mail: svein.mjos@kj.uib.no [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); David, Fabrice P.A. [Bioinformatics and Biostatistics Core Facility, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Swiss Institute of Bioinformatics (SIB), Lausanne (Switzerland); Schmid, Adrien W. [Proteomics Core Facility, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland)

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
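
    The least-squares resolution step can be illustrated with a minimal sketch: given a library of theoretical isotope-distribution spectra on a common m/z grid, non-negative least squares recovers the abundances that best reproduce an observed spectrum. The toy library and function names below are assumptions, not part of the published tool.

```python
# Minimal sketch of the least-squares idea, assuming a precomputed library of
# theoretical isotope-distribution spectra (one column per candidate lipid).
import numpy as np
from scipy.optimize import nnls

def resolve_spectrum(observed, library):
    """observed: (n_mz,) intensities on a common m/z grid;
    library: (n_mz, n_compounds) theoretical isotope patterns.
    Returns non-negative abundances minimising ||library @ x - observed||."""
    abundances, residual = nnls(library, observed)
    return abundances, residual

# Toy example: two overlapping isotope patterns on a 6-point m/z grid.
library = np.array([[1.0, 0.0],
                    [0.6, 0.0],
                    [0.2, 1.0],
                    [0.0, 0.7],
                    [0.0, 0.3],
                    [0.0, 0.1]])
observed = 3.0 * library[:, 0] + 1.5 * library[:, 1]
x, r = resolve_spectrum(observed, library)
print(x)   # approximately [3.0, 1.5]
```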

  14. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    International Nuclear Information System (INIS)

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P.A.; Schmid, Adrien W.

    2016-01-01

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.

  15. A cloud mask methodology for high resolution remote sensing data combining information from high and medium resolution optical sensors

    Science.gov (United States)

    Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús

    2011-09-01

    This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. As an advantage to traditional methods, the approach does not rely on thermal bands and it is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between high resolution images and cloud free composites at lower spatial resolution from almost simultaneously acquired dates. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad-hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results identifying and mapping clouds of different type and size over various land surfaces including natural vegetation, agriculture land, built-up areas, water bodies and snow.

  16. Psychosocial Maturity and Conflict Resolution Management of Higher Secondary School Students

    Science.gov (United States)

    Jaseena M.P.M., Fathima; P., Divya

    2014-01-01

    The aim of the study is to find out the extent of, and the difference in, the mean scores of Psychosocial Maturity and Conflict Resolution Management of higher secondary school students of Kerala. A survey technique was used for the study. The sample consists of 685 higher secondary students, selected with due representation given to other criteria. Findings revealed that…

  17. Studies of the Silicon Tracker resolution using data

    CERN Document Server

    van Tilburg, J

    2010-01-01

    Several parameters that influence the hit resolution of the Silicon Tracker have been determined from data. These include charge sharing, cross talk and Lorentz deflection. A charge sharing width of ~4 μm has been measured. No charge loss has been observed in the interstrip region. The cross talk to the neighbouring strips is found to vary between 4 and 14%, depending on the total capacitance (sensors plus cable), on whether it is the left or right neighbour and on the Beetle channel number (odd or even). The Lorentz deflection was also investigated and was observed to be small. Finally, the new parameters have been inserted in the LHCb Monte Carlo simulation to update the η-correction functions required for the reconstruction of tracks. Compared to the previous tuning, the hit resolution in the simulation has increased from ~35 μm to ~50 μm.

  18. NASA Prediction of Worldwide Energy Resource High Resolution Meteorology Data For Sustainable Building Design

    Science.gov (United States)

    Chandler, William S.; Hoell, James M.; Westberg, David; Zhang, Taiping; Stackhouse, Paul W., Jr.

    2013-01-01

    A primary objective of NASA's Prediction of Worldwide Energy Resource (POWER) project is to adapt and infuse NASA's solar and meteorological data into the energy, agricultural, and architectural industries. Improvements are continuously incorporated when higher resolution and longer-term data inputs become available. Climatological data previously provided via POWER web applications were three-hourly and 1x1 degree latitude/longitude. The NASA Modern Era Retrospective-analysis for Research and Applications (MERRA) data set provides higher resolution data products (hourly and 1/2x1/2 degree) covering the entire globe. Currently POWER solar and meteorological data are available for more than 30 years on hourly (meteorological only), daily, monthly and annual time scales. These data may be useful to several renewable energy sectors: solar and wind power generation, agricultural crop modeling, and sustainable buildings. A recent focus has been working with ASHRAE to assess complementing weather station data with MERRA data. ASHRAE building design parameters being investigated include heating/cooling degree days and climate zones.

  19. Resolution of Reflection Seismic Data Revisited

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Zunino, Andrea

    The Rayleigh Principle states that the minimum separation between two reflectors that allows them to be visually separated is the separation where the wavelet maxima from the two superimposed reflections combine into one maximum. This happens around Δt_res = λ_b/8, where λ_b is the predominant... lower vertical resolution of reflection seismic data. In the following we will revisit the thin layer model and demonstrate that there is in practice no limit to the vertical resolution using the parameterization of Widess (1973), and that the vertical resolution is limited by the noise in the data...

  20. Working memory differences in long-distance dependency resolution

    Directory of Open Access Journals (Sweden)

    Bruno eNicenboim

    2015-03-01

    Full Text Available There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is the increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter consistent with good-enough parsing (Ferreira et al., 2002). In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some of the aspects of dependency resolution can be best explained with the activation-based model together with a prediction component.

  1. Working memory differences in long-distance dependency resolution

    Science.gov (United States)

    Nicenboim, Bruno; Vasishth, Shravan; Gattei, Carolina; Sigman, Mariano; Kliegl, Reinhold

    2015-01-01

    There is a wealth of evidence showing that increasing the distance between an argument and its head leads to more processing effort, namely, locality effects; these are usually associated with constraints in working memory (DLT: Gibson, 2000; activation-based model: Lewis and Vasishth, 2005). In SOV languages, however, the opposite effect has been found: antilocality (see discussion in Levy et al., 2013). Antilocality effects can be explained by the expectation-based approach as proposed by Levy (2008) or by the activation-based model of sentence processing as proposed by Lewis and Vasishth (2005). We report an eye-tracking and a self-paced reading study with sentences in Spanish together with measures of individual differences to examine the distinction between expectation- and memory-based accounts, and within memory-based accounts the further distinction between DLT and the activation-based model. The experiments show that (i) antilocality effects as predicted by the expectation account appear only for high-capacity readers; (ii) increasing dependency length by interposing material that modifies the head of the dependency (the verb) produces stronger facilitation than increasing dependency length with material that does not modify the head; this is in agreement with the activation-based model but not with the expectation account; and (iii) a possible outcome of memory load on low-capacity readers is the increase in regressive saccades (locality effects as predicted by memory-based accounts) or, surprisingly, a speedup in the self-paced reading task; the latter consistent with good-enough parsing (Ferreira et al., 2002). In sum, the study suggests that individual differences in working memory capacity play a role in dependency resolution, and that some of the aspects of dependency resolution can be best explained with the activation-based model together with a prediction component. PMID:25852623

  2. Using Deep Learning for Targeted Data Selection, Improving Satellite Observation Utilization for Model Initialization

    Science.gov (United States)

    Lee, Y. J.; Bonfanti, C. E.; Trailovic, L.; Etherton, B.; Govett, M.; Stewart, J.

    2017-12-01

    At present, only a fraction of all satellite observations are ultimately used for model assimilation. The satellite data assimilation process is computationally expensive and data are often reduced in resolution to allow timely incorporation into the forecast. This problem is only exacerbated by the recent launch of the Geostationary Operational Environmental Satellite (GOES)-16 satellite and future satellites providing several orders of magnitude increase in data volume. At the NOAA Earth System Research Laboratory (ESRL) we are researching the use of machine learning to improve the initial selection of satellite data to be used in the model assimilation process. In particular, we are investigating the use of deep learning. Deep learning is being applied to many image processing and computer vision problems with great success. Through our research, we are using convolutional neural networks to find and mark regions of interest (ROIs), leading to intelligent extraction of observations from satellite observation systems. These targeted observations will be used to improve the quality of data selected for model assimilation and ultimately improve the impact of satellite data on weather forecasts. Our preliminary efforts to identify the ROIs are focused in two areas: applying and comparing state-of-the-art convolutional neural network models using the analysis data from the National Center for Environmental Prediction (NCEP) Global Forecast System (GFS) weather model, and using these results as a starting point to optimize a convolutional neural network model for pattern recognition on the higher resolution water vapor data from GOES-WEST and other satellites. This presentation will provide an introduction to our convolutional neural network model to identify and process these ROIs, along with the challenges of data preparation, model training, and parameter optimization.
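
    The following is a minimal, hypothetical sketch of a convolutional patch classifier of the kind described, scoring fixed-size satellite image patches as region-of-interest versus background; the architecture, patch size and channel count are assumptions and not the models compared at ESRL.

```python
# Illustrative sketch only: a small convolutional network that scores fixed-size
# satellite image patches as region-of-interest vs background. The architecture,
# patch size and channel count are assumptions, not the model used at ESRL.
import torch
import torch.nn as nn

class ROIPatchNet(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                          # x: (batch, C, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ROIPatchNet()
patches = torch.randn(8, 1, 64, 64)               # e.g. water-vapour patches
print(model(patches).shape)                        # torch.Size([8, 2])
```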

  3. density-dependent selection revisited

    Indian Academy of Sciences (India)

    Unknown

    is a more useful way of looking at density-dependent selection, and then go on ... these models was that the condition for maintenance of ... In a way, their formulation may be viewed as ... different than competition among species, and typical.

  4. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    Science.gov (United States)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
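
    A minimal sketch of the three interpolation schemes applied between significant mass changes is given below; the adaptive window/threshold filtering itself is omitted and the sample points are invented for illustration.

```python
# Sketch of the three interpolation schemes applied between "significant" mass
# changes; the AWAT filter logic (adaptive window/threshold) is omitted and the
# sample points below are invented for illustration.
import numpy as np
from scipy.interpolate import interp1d, CubicSpline

# Times (min) and cumulative lysimeter mass change (mm) retained as significant.
t_sig = np.array([0.0, 30.0, 60.0, 120.0, 180.0])
m_sig = np.array([0.0, 0.1, 0.5, 0.6, 0.6])

t_out = np.arange(0.0, 180.0 + 1.0, 10.0)                   # 10 min output resolution

step   = interp1d(t_sig, m_sig, kind='previous')(t_out)     # step scheme
linear = interp1d(t_sig, m_sig, kind='linear')(t_out)       # linear scheme
spline = CubicSpline(t_sig, m_sig)(t_out)                   # spline scheme

# Fluxes (P or ET rates) follow from differentiating the interpolated mass.
print(np.diff(spline) / 10.0)
```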

  5. Gender and Conflict Resolution Strategies in Spanish Teen Couples: Their Relationship With Jealousy and Emotional Dependency.

    Science.gov (United States)

    Perles, Fabiola; San Martín, Jesús; Canto, Jesús M

    2016-06-08

    Previous research has pointed to the need to address the study of violence in teen couples. However, research has not delved into the study of the variables related to the different types of violence employed by boys and girls. The purpose of this study was to test whether gender, jealousy, and dependency predict specific strategies for conflict resolution (psychological aggression and mild physical aggression). Another objective of the study was to test gender differences in the conflict resolution strategies used by Spanish teen couples and to test the association between these variables and jealousy and emotional dependency. A sample of 296 adolescent high school students between 14 and 19 years of age of both genders from the south of Spain participated in this study. Hierarchical regression models were used to estimate the relationship between psychological aggression and mild physical aggression, and jealousy, and dependency. Results showed that jealousy correlated with psychological aggression and mild physical aggression in girls but not in boys. Psychological aggression and mild physical aggression were associated with dependency in boys. Girls scored higher in psychological aggression and jealousy than did boys. Finally, the interaction between jealousy and dependency predicted psychological aggression only in girls. These results highlight the need to address the role of the interaction between dependence and jealousy in the types of violence employed in teen dating. However, it is necessary to delve into the gender differences and similarities to develop appropriate prevention programs. © The Author(s) 2016.

  6. 14 CFR 17.35 - Selection of neutrals for the alternative dispute resolution process.

    Science.gov (United States)

    2010-01-01

    ... dispute resolution process. 17.35 Section 17.35 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES PROCEDURES FOR PROTESTS AND CONTRACTS DISPUTES Alternative Dispute Resolution § 17.35 Selection of neutrals for the alternative dispute resolution process. (a) In...

  7. Selective maintenance for multi-state series–parallel systems under economic dependence

    International Nuclear Information System (INIS)

    Dao, Cuong D.; Zuo, Ming J.; Pandey, Mayank

    2014-01-01

    This paper presents a study on selective maintenance for multi-state series–parallel systems with economically dependent components. In the selective maintenance problem, the maintenance manager has to decide which components should receive maintenance activities within a finite break between missions. All the system reliabilities in the next operating mission, the available budget and the maintenance time for each component from its current state to a higher state are taken into account in the optimization models. In addition, the components in series–parallel systems are considered to be economically dependent. Time and cost savings will be achieved when several components are simultaneously repaired in a selective maintenance strategy. As the number of repaired components increases, the saved time and cost will also increase due to the share of setting up between components and another additional reduction amount resulting from the repair of multiple identical components. Different optimization models are derived to find the best maintenance strategy for multi-state series–parallel systems. A genetic algorithm is used to solve the optimization models. The decision makers may select different components to be repaired to different working states based on the maintenance objective, resource availabilities and how dependent the repair time and cost of each component are
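
    A much-simplified sketch of the selective-maintenance decision is shown below: it enumerates which components to repair within a budget, sharing a single set-up cost when several components are repaired together (a crude stand-in for the economic dependence described above), and picks the subset that maximizes the reliability of a small series-parallel system. The numbers and structure are invented, and the genetic algorithm used in the paper is replaced by brute-force enumeration.

```python
# Simplified sketch of selective maintenance: enumerate which components to
# repair in the break so that cost stays within budget, sharing one set-up cost
# when several components are repaired together. All numbers are invented.
import itertools

repair_cost = [4.0, 3.0, 5.0, 2.0]        # cost to restore each component
p_repaired  = [0.95, 0.90, 0.97, 0.85]    # success prob. in next mission if repaired
p_as_is     = [0.60, 0.70, 0.50, 0.65]    # success prob. if left as is
setup_cost, budget = 2.0, 10.0

def system_reliability(p):
    # series-parallel structure: (1 || 2) in series with (3 || 4)
    sub1 = 1 - (1 - p[0]) * (1 - p[1])
    sub2 = 1 - (1 - p[2]) * (1 - p[3])
    return sub1 * sub2

best = (system_reliability(p_as_is), ())
for r in range(1, 5):
    for subset in itertools.combinations(range(4), r):
        cost = setup_cost + sum(repair_cost[i] for i in subset)  # shared set-up
        if cost > budget:
            continue
        p = [p_repaired[i] if i in subset else p_as_is[i] for i in range(4)]
        best = max(best, (system_reliability(p), subset))
print(best)   # best achievable reliability and which components to repair
```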

  8. A multi-resolution HEALPix data structure for spherically mapped point data

    Directory of Open Access Journals (Sweden)

    Robert W. Youngren

    2017-06-01

    Full Text Available Data describing entities with locations that are points on a sphere are described as spherically mapped. Several data structures designed for spherically mapped data have been developed. One of them, known as Hierarchical Equal Area iso-Latitude Pixelization (HEALPix), partitions the sphere into twelve diamond-shaped equal-area base cells and then recursively subdivides each cell into four diamond-shaped subcells, continuing to the desired level of resolution. Twelve quadtrees, one associated with each base cell, store the data records associated with that cell and its subcells. HEALPix has been used successfully for numerous applications, notably including cosmic microwave background data analysis. However, for applications involving sparse point data HEALPix has possible drawbacks, including inefficient memory utilization, overwriting of proximate points, and return of spurious points for certain queries. A multi-resolution variant of HEALPix specifically optimized for sparse point data was developed. The new data structure allows different areas of the sphere to be subdivided at different levels of resolution. It combines HEALPix's positive features with the advantages of multi-resolution, including reduced memory requirements and improved query performance. An implementation of the new Multi-Resolution HEALPix (MRH) data structure was tested using spherically mapped data from four different scientific applications (warhead fragmentation trajectories, weather station locations, galaxy locations, and synthetic locations). Four types of range queries were applied to each data structure for each dataset. Compared to HEALPix, MRH used two to four orders of magnitude less memory for the same data, and on average its queries executed 72% faster.
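
    The multi-resolution idea can be sketched as one adaptive quadtree per base cell that splits a cell only when it accumulates more than a few points. The flat (u, v) addressing below is a stand-in for real HEALPix geometry, and the thresholds are assumptions; this is not the MRH implementation.

```python
# Conceptual sketch of the multi-resolution idea: one adaptive quadtree per
# base cell, splitting a cell only when it holds more than a few points.
class Node:
    MAX_POINTS, MAX_DEPTH = 4, 10

    def __init__(self, depth=0):
        self.depth, self.points, self.children = depth, [], None

    def insert(self, u, v, record):
        """u, v in [0, 1): normalised coordinates within this cell."""
        if self.children is None:
            self.points.append((u, v, record))
            if len(self.points) > self.MAX_POINTS and self.depth < self.MAX_DEPTH:
                self._split()
            return
        self._child(u, v).insert((u * 2) % 1, (v * 2) % 1, record)

    def _split(self):
        self.children = [Node(self.depth + 1) for _ in range(4)]
        stored, self.points = self.points, []
        for u, v, rec in stored:
            self._child(u, v).insert((u * 2) % 1, (v * 2) % 1, rec)

    def _child(self, u, v):
        return self.children[(u >= 0.5) * 2 + (v >= 0.5)]

# Twelve base cells, as in HEALPix; only cells that receive data are refined.
base_cells = [Node() for _ in range(12)]
base_cells[3].insert(0.1, 0.8, "station A")
```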

  9. Sub-micron resolution selected area electron channeling patterns.

    Science.gov (United States)

    Guyon, J; Mansour, H; Gey, N; Crimp, M A; Chalal, S; Maloufi, N

    2015-02-01

    Collection of selected area channeling patterns (SACPs) on a high resolution FEG-SEM is essential to carry out quantitative electron channeling contrast imaging (ECCI) studies, as it facilitates accurate determination of the crystal plane normal with respect to the incident beam direction and thus allows control of the electron channeling conditions. Unfortunately, commercial SACP modes developed in the past were limited in spatial resolution and are often no longer offered. In this contribution we present a novel approach for collecting high resolution SACPs (HR-SACPs) developed on a Gemini column. This HR-SACP technique combines the first demonstrated sub-micron spatial resolution with a high angular accuracy of about 0.1°, at a convenient working distance of 10 mm. This innovative approach integrates the use of aperture alignment coils to rock the beam with a digitally calibrated beam shift procedure to ensure the rocking beam is maintained on a point of interest. Moreover, a new methodology to accurately measure SACP spatial resolution is proposed. While column considerations limit the rocking angle to 4°, this range is adequate to index the HR-SACP in conjunction with the pattern simulated from the approximate orientation deduced by EBSD. This new technique facilitates Accurate ECCI (A-ECCI) studies of very fine grained and/or highly strained materials. It also offers new insights for developing HR-SACP modes on new-generation high-resolution electron columns.

  10. Flow-Signature Analysis of Water Consumption in Nonresidential Building Water Networks Using High-Resolution and Medium-Resolution Smart Meter Data: Two Case Studies

    Science.gov (United States)

    Clifford, Eoghan; Mulligan, Sean; Comer, Joanne; Hannon, Louise

    2018-01-01

    Real-time monitoring of water consumption activities can be an effective mechanism to achieve efficient water network management. This approach, largely enabled by the advent of smart metering technologies, is gradually being practiced in domestic and industrial contexts. In particular, identifying water consumption habits from flow-signatures, i.e., the specific end-usage patterns, is being investigated as a means for conservation in both the residential and nonresidential context. However, the quality of meter data is bivariate (dependent on number of meters and data temporal resolution) and as a result, planning a smart metering scheme is relatively difficult with no generic design approach available. In this study, a comprehensive medium-resolution to high-resolution smart metering program was implemented at two nonresidential trial sites to evaluate the effect of spatial and temporal data aggregation. It was found that medium-resolution water meter data were capable of exposing regular, continuous, peak use, and diurnal patterns which reflect group wide end-usage characteristics. The high-resolution meter data permitted flow-signature at a personal end-use level. Through this unique opportunity to observe water usage characteristics via flow-signature patterns, newly defined hydraulic-based design coefficients determined from Poisson rectangular pulse were developed to intuitively aid in the process of pattern discovery with implications for automated activity recognition applications. A smart meter classification and siting index was introduced which categorizes meter resolution in terms of their suitable application.

  11. On the number of elementary particles in a resolution dependent fractal spacetime

    International Nuclear Information System (INIS)

    He Jihuan

    2007-01-01

    We reconsider the fundamental question regarding the number of elementary particles in a minimally extended standard model. The main conclusion is that since the dimension of E-infinity spacetime is resolution dependent, then the number of elementary particles is also resolution dependent. For D = 10 of superstrings, D = 11 of M theory and D = 12 of F theory one finds N(SM) equal to 6 × 10 = 60, 6 × 11 = 66 and 6 × 12 = 72 particles, respectively. This is in perfect agreement with prediction made previously by Mohamed Saladin El-Naschie and Marek-Crnjac.

  12. Laser radar cross-section estimation from high-resolution image data.

    Science.gov (United States)

    Osche, G R; Seeber, K N; Lok, Y F; Young, D S

    1992-05-10

    A methodology for the estimation of ladar cross sections from high-resolution image data of geometrically complex targets is presented. A coherent CO2 laser radar was used to generate high-resolution amplitude imagery of a UC-8 Buffalo test aircraft at a range of 1.3 km at nine different aspect angles. The average target ladar cross section was synthesized from these data and calculated to be σ_T = 15.4 dBsm, which is similar to the expected microwave radar cross sections. The aspect angle dependence of the cross section shows pronounced peaks at nose-on and broadside, which are also in agreement with radar results. Strong variations in both the mean amplitude and the statistical distributions of amplitude with the aspect angle have also been observed. The relative mix of diffuse and specular returns causes significant deviations from a simple Lambertian or Swerling II target, especially at broadside where large normal surfaces are present.

  13. Expedient data mining for nontargeted high-resolution LC-MS profiles of biological samples.

    Science.gov (United States)

    Hnatyshyn, Serhiy; Shipkova, Petia; Sanders, Mark

    2013-05-01

    The application of high-resolution LC-MS metabolomics for drug candidate toxicity screening reflects phenotypic changes of an organism caused by induced chemical interferences. Its success depends not only on the ability to translate the acquired analytical information into biological knowledge, but also on the timely delivery of the results to aid the decision making process in drug discovery and development. Recent improvements in analytical instrumentation have resulted in the ability to acquire extremely information-rich datasets. These new data collection abilities have shifted the bottleneck in the timeline of metabolomic studies to the data analysis step. This paper describes our approach to expedient data analysis of nontargeted high-resolution LC-MS profiles of biological samples. The workflow is illustrated with the example of metabolomics study of time-dependent fasting in male rats. The results from measurement of 220 endogenous metabolites in urine samples illustrate significant biochemical changes induced by fasting. The developed software enables the reporting of relative quantities of annotated components while maintaining practical turnaround times. Each component annotation in the report is validated using both calculated isotopic peaks patterns and experimentally determined retention time data on standards.

  14. High resolution mapping of urban areas using SPOT-5 images and ancillary data

    Directory of Open Access Journals (Sweden)

    Elif Sertel

    2015-08-01

    Full Text Available This research aims to propose new rule sets to be used for object-based classification of SPOT-5 images to accurately create detailed urban land cover/use maps. In addition to SPOT-5 satellite images, Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) maps, cadastral maps, OpenStreetMap data, road maps and land cover maps were also integrated into the classification to increase the accuracy of the resulting maps. Gaziantep city, one of the highly populated cities of Turkey with different landscape patterns, was selected as the study area. Different rule sets involving spectral, spatial and geometric characteristics were developed to be used for object-based classification of 2.5 m resolution SPOT-5 satellite images to automatically create an urban map of the region. Twenty different land cover/use classes obtained from the European Urban Atlas project were applied and an automatic classification approach was suggested for high resolution urban map creation and updating. Integration of different types of data into the classification decision tree increased the performance and accuracy of the suggested approach. The accuracy assessment results illustrated that, with the usage of the newly proposed rule set algorithms in object-based classification, urban areas represented with seventeen different sub-classes could be mapped with 94% or higher overall accuracy.

  15. Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback

    Science.gov (United States)

    Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai

    2012-01-01

    With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large quantity of these scenes are never retrieved and used. Thus, automatic retrieval of desired image-data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze all the image information mining (IIM) methods so that it is easier for a user to select a method depending upon his/her requirements. We concentrate our study only on high-resolution SAR images, and we propose to use InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. However, for objects with less coherence, such as areas with vegetation cover, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov random fields as texture descriptors for image patches and SVM relevance feedback.

  16. On a selection method of imaging condition in scintigraphy

    International Nuclear Information System (INIS)

    Ikeda, Hozumi; Kishimoto, Kenji; Shimonishi, Yoshihiro; Ohmura, Masahiro; Kosakai, Kazuhisa; Ochi, Hironobu

    1992-01-01

    Selection of imaging conditions in scintigraphy was evaluated using the analytic hierarchy process. First, a selection method was derived by considering image quality and imaging time. The influence of image quality was assumed to depend on changes in system resolution, count density, image size, and image density, while the influence of imaging time was assumed to depend on changes in system sensitivity and data acquisition time. A phantom study was performed for paired comparison of these selection factors and to relate sample data to the factors: Rollo phantom images were taken while varying count density, image size, and image density. Image quality was expressed as a visual evaluation score obtained by comparing pairs of images for the clearer cold lesion on the scintigrams. Imaging time was expressed as relative values for changes in count density; system resolution and system sensitivity were kept constant in this study. Next, using these values, the analytic hierarchy process was applied to the selection of imaging conditions. We conclude that the selection of imaging conditions can be analyzed quantitatively using the analytic hierarchy process and that this analysis supports a theoretical treatment of imaging technique. (author)
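
    The weighting step of the analytic hierarchy process can be sketched as follows: the principal eigenvector of a pairwise-comparison matrix yields the relative weights of the selection factors, and the consistency index checks the judgements. The comparison values below are illustrative, not the judgements from the phantom study.

```python
# Sketch of the analytic hierarchy process weighting step: the principal
# eigenvector of a pairwise-comparison matrix gives the relative weights.
# The comparison values below are illustrative only.
import numpy as np

# Pairwise comparisons among four image-quality factors
# (system resolution, count density, image size, image density).
A = np.array([[1.0, 3.0, 5.0, 3.0],
              [1/3, 1.0, 3.0, 1.0],
              [1/5, 1/3, 1.0, 1/3],
              [1/3, 1.0, 3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalised priority weights

# Consistency index: CI = (lambda_max - n) / (n - 1)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print(weights, ci)
```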

  17. Electron dose dependence of signal-to-noise ratio, atom contrast and resolution in transmission electron microscope images

    International Nuclear Information System (INIS)

    Lee, Z.; Rose, H.; Lehtinen, O.; Biskupek, J.; Kaiser, U.

    2014-01-01

    In order to achieve the highest resolution in aberration-corrected (AC) high-resolution transmission electron microscopy (HRTEM) images, high electron doses are required which only a few samples can withstand. In this paper we perform dose-dependent AC-HRTEM image calculations, and study the dependence of the signal-to-noise ratio, atom contrast and resolution on electron dose and sampling. We introduce dose-dependent contrast, which can be used to evaluate the visibility of objects under different dose conditions. Based on our calculations, we determine optimum samplings for high and low electron dose imaging conditions. - Highlights: • The definition of dose-dependent atom contrast is introduced. • The dependence of the signal-to-noise ratio, atom contrast and specimen resolution on electron dose and sampling is explored. • The optimum sampling can be determined according to different dose conditions

  18. Movement reveals scale dependence in habitat selection of a large ungulate

    Science.gov (United States)

    Northrup, Joseph; Anderson, Charles R.; Hooten, Mevin B.; Wittemyer, George

    2016-01-01

    Ecological processes operate across temporal and spatial scales. Anthropogenic disturbances impact these processes, but examinations of scale dependence in impacts are infrequent. Such examinations can provide important insight to wildlife–human interactions and guide management efforts to reduce impacts. We assessed spatiotemporal scale dependence in habitat selection of mule deer (Odocoileus hemionus) in the Piceance Basin of Colorado, USA, an area of ongoing natural gas development. We employed a newly developed animal movement method to assess habitat selection across scales defined using animal-centric spatiotemporal definitions ranging from the local (defined from five hour movements) to the broad (defined from weekly movements). We extended our analysis to examine variation in scale dependence between night and day and assess functional responses in habitat selection patterns relative to the density of anthropogenic features. Mule deer displayed scale invariance in the direction of their response to energy development features, avoiding well pads and the areas closest to roads at all scales, though with increasing strength of avoidance at coarser scales. Deer displayed scale-dependent responses to most other habitat features, including land cover type and habitat edges. Selection differed between night and day at the finest scales, but homogenized as scale increased. Deer displayed functional responses to development, with deer inhabiting the least developed ranges more strongly avoiding development relative to those with more development in their ranges. Energy development was a primary driver of habitat selection patterns in mule deer, structuring their behaviors across all scales examined. Stronger avoidance at coarser scales suggests that deer behaviorally mediated their interaction with development, but only to a degree. At higher development densities than seen in this area, such mediation may not be possible and thus maintenance of sufficient

  19. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    Science.gov (United States)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
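
    The wavelet-compression idea mentioned above can be sketched for a 2D field: transform, keep only the largest coefficients, and reconstruct. The example uses pywt for convenience; it is not the software used on the ASCI systems, and the field, wavelet and retention fraction are assumptions.

```python
# Illustrative sketch of wavelet compression of a 2D field: transform, keep
# only the largest coefficients, and reconstruct. pywt is used here for
# convenience; the wavelet and retention fraction are assumptions.
import numpy as np
import pywt

field = np.random.rand(256, 256)                       # stand-in for simulation data

coeffs = pywt.wavedec2(field, 'db2', level=4)          # multi-resolution transform
arr, slices = pywt.coeffs_to_array(coeffs)

keep = 0.03                                            # keep top 3% of coefficients
thresh = np.quantile(np.abs(arr), 1.0 - keep)
arr_compressed = np.where(np.abs(arr) >= thresh, arr, 0.0)

recon = pywt.waverec2(pywt.array_to_coeffs(arr_compressed, slices,
                                           output_format='wavedec2'), 'db2')
print(np.abs(recon[:256, :256] - field).max())          # reconstruction error
```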

  20. Evaluation of ALOS PALSAR Data for High-Resolution Mapping of Vegetated Wetlands in Alaska

    Directory of Open Access Journals (Sweden)

    Daniel Clewley

    2015-06-01

    Full Text Available As the largest natural source of methane, wetlands play an important role in the carbon cycle. High-resolution maps of wetland type and extent are required to quantify wetland responses to climate change. Mapping northern wetlands is particularly important because of a disproportionate increase in temperatures at higher latitudes. Synthetic aperture radar data from a spaceborne platform can be used to map wetland types and dynamics over large areas. Following from earlier work by Whitcomb et al. (2009) using Japanese Earth Resources Satellite (JERS-1) data, we applied the “random forests” classification algorithm to variables from L-band ALOS PALSAR data for 2007, topographic data (e.g., slope, elevation) and locational information (latitude, longitude) to derive a map of vegetated wetlands in Alaska, with a spatial resolution of 50 m. We used the National Wetlands Inventory and National Land Cover Database (for upland areas) to select training and validation data and further validated classification results with an independent dataset that we created. A number of improvements were made to the method of Whitcomb et al. (2009): (1) more consistent training data in upland areas; (2) better distribution of training data across all classes by taking a stratified random sample of all available training pixels; and (3) a more efficient implementation, which allowed classification of the entire state as a single entity (rather than in separate tiles), which eliminated discontinuities at tile boundaries. The overall accuracy for discriminating wetland from upland was 95%, and the accuracy at the level of wetland classes was 85%. The total area of wetlands mapped was 0.59 million km2, or 36% of the total land area of the state of Alaska. The map will be made available to download from NASA’s wetland monitoring website.
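
    A minimal sketch of the classification step is shown below: a random forest trained on a stratified random sample of labelled pixels, with backscatter, topography and location as predictors. The feature set, class labels and sample sizes are invented stand-ins, not the study's data.

```python
# Minimal sketch of the classification step: a random forest trained on a
# stratified random sample of labelled pixels. Features and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20000
X = np.column_stack([
    rng.normal(-12, 3, n),        # HH backscatter (dB)
    rng.normal(-18, 3, n),        # HV backscatter (dB)
    rng.uniform(0, 15, n),        # slope (degrees)
    rng.uniform(0, 500, n),       # elevation (m)
    rng.uniform(55, 71, n),       # latitude
    rng.uniform(-168, -130, n),   # longitude
])
y = rng.integers(0, 5, n)         # wetland / upland class labels

# Stratified random sample of the available training pixels.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_test, y_test))
```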

  1. A study of pH-dependent photodegradation of amiloride by a multivariate curve resolution approach to combined kinetic and acid-base titration UV data.

    Science.gov (United States)

    De Luca, Michele; Ioele, Giuseppina; Mas, Sílvia; Tauler, Romà; Ragno, Gaetano

    2012-11-21

    Amiloride photostability at different pH values was studied in depth by applying Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) to the UV spectrophotometric data from drug solutions exposed to stressing irradiation. Resolution of all degradation photoproducts was possible by simultaneous spectrophotometric analysis of kinetic photodegradation and acid-base titration experiments. Amiloride photodegradation was shown to be strongly dependent on pH. Two hard modelling constraints were sequentially used in MCR-ALS for the unambiguous resolution of all the species involved in the photodegradation process. An amiloride acid-base system was defined by using the equilibrium constraint, and the photodegradation pathway was modelled taking into account the kinetic constraint. The simultaneous analysis of photodegradation and titration experiments revealed the presence of eight different species, which were differently distributed according to pH and time. Concentration profiles of all the species as well as their pure spectra were resolved and kinetic rate constants were estimated. The values of rate constants changed with pH and under alkaline conditions the degradation pathway and photoproducts also changed. These results were compared to those obtained by LC-MS analysis from drug photodegradation experiments. MS analysis allowed the identification of up to five species and showed the simultaneous presence of more than one acid-base equilibrium.
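
    A bare-bones sketch of the MCR-ALS iteration is given below: alternating least-squares updates of concentration profiles C and pure spectra S under non-negativity, for a data matrix D whose rows are spectra recorded over time or pH and whose columns are wavelengths. The equilibrium and kinetic constraints used in the study are omitted, and the toy data are invented.

```python
# Bare-bones MCR-ALS sketch: alternate least-squares updates of C and S under
# non-negativity for D ~ C @ S. The hard constraints of the study are omitted.
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0]        # update spectra
        S = np.clip(S, 0.0, None)                       # non-negativity
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T  # update concentrations
        C = np.clip(C, 0.0, None)
    return C, S

# Toy data: two components with known profiles, plus noise.
t = np.linspace(0, 1, 50)
C_true = np.column_stack([np.exp(-3 * t), 1 - np.exp(-3 * t)])
S_true = np.abs(np.random.default_rng(1).random((2, 80)))
D = C_true @ S_true + 0.01 * np.random.default_rng(2).random((50, 80))
C_est, S_est = mcr_als(D, 2)
print(np.linalg.norm(D - C_est @ S_est))                # residual of the fit
```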

  2. Estimation of improved resolution soil moisture in vegetated areas using passive AMSR-E data

    Science.gov (United States)

    Moradizadeh, Mina; Saradjian, Mohammad R.

    2018-03-01

    Microwave remote sensing provides a unique capability for soil parameter retrievals. Therefore, various soil parameter estimation models have been developed using brightness temperature (BT) measured by passive microwave sensors. Due to the low resolution of satellite microwave radiometer data, the main goal of this study is to develop a downscaling approach to improve the spatial resolution of soil moisture estimates with the use of higher resolution visible/infrared sensor data. Accordingly, after the soil parameters have been obtained using the Simultaneous Land Parameters Retrieval Model algorithm, the downscaling method has been applied to the soil moisture estimations, which have been validated against in situ soil moisture data. Advanced Microwave Scanning Radiometer-EOS BT data in the Soil Moisture Experiment 2003 region in the south and north of Oklahoma have been used to this end. Results illustrated that the soil moisture variability is effectively captured at 5 km spatial scales without a significant degradation of the accuracy.

  3. Effects of the spatial resolution of urban drainage data on nonpoint source pollution prediction.

    Science.gov (United States)

    Dai, Ying; Chen, Lei; Hou, Xiaoshu; Shen, Zhenyao

    2018-03-14

    Detailed urban drainage data are important for urban nonpoint source (NPS) pollution prediction. However, the difficulties in collecting complete pipeline data usually interfere with urban NPS pollution studies, especially in large-scale study areas. In this study, NPS pollution models were constructed for a typical urban catchment using the SWMM, based on five drainage datasets with different resolution levels. The influence of the data resolution on the simulation results was examined. The calibration and validation results of the higher-resolution (HR) model indicated a satisfactory model performance with relatively detailed drainage data. However, the performances of the parameter-regionalized lower-resolution (LR) models were still affected by the drainage data scale. This scale effect was due not only to the pipe routing process but also to changes in the effective impervious area, which could be limited by a scale threshold. The runoff flow and NPS pollution responded differently to changes in scale, primarily because of the difference between buildup and washoff, and because of the more significant decrease in pollutant infiltration loss and the much greater increase in pollutant flooding loss while scaling up. Additionally, scale effects were also affected by the rainfall type. Sub-area routing between impervious and pervious areas could improve the LR model performances to an extent, and this approach is recommended to offset the influence of spatial resolution deterioration.

  4. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires

    1992-12-31

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.

  5. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.

    1992-01-01

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs

  6. High-Resolution Climate Data Visualization through GIS- and Web-based Data Portals

    Science.gov (United States)

    WANG, X.; Huang, G.

    2017-12-01

    Sound decisions on climate change adaptation rely on an in-depth assessment of potential climate change impacts at regional and local scales, which usually requires finer resolution climate projections at both spatial and temporal scales. However, effective downscaling of global climate projections is practically difficult due to the lack of computational resources and/or long-term reference data. Although a large volume of downscaled climate data has been made available to the public, how to understand and interpret the large-volume climate data and how to make use of the data to drive impact assessment and adaptation studies are still challenging for both impact researchers and decision makers. Such difficulties have become major barriers preventing informed climate change adaptation planning at regional scales. Therefore, this research will explore new GIS- and web-based technologies to help visualize the large-volume regional climate data with high spatiotemporal resolutions. A user-friendly public data portal, named the Climate Change Data Portal (CCDP, http://ccdp.network), will be established to allow intuitive and open access to high-resolution regional climate projections at local scales. The CCDP offers functions of visual representation through geospatial maps and data downloading for a variety of climate variables (e.g., temperature, precipitation, relative humidity, solar radiation, and wind) at multiple spatial resolutions (i.e., 25-50 km) and temporal resolutions (i.e., annual, seasonal, monthly, daily, and hourly). The vast amount of information the CCDP encompasses can provide a crucial basis for assessing impacts of climate change on local communities and ecosystems and for supporting better decision making under a changing climate.

  7. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    Science.gov (United States)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results, when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, but keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff total and sub-flows (at downstream and internal gauging stations). For the distributed models, additional
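
    Under the disaggregation idea described above, a lumped parameter can be spread over grid cells in proportion to a spatial catchment characteristic while preserving the lumped (catchment-mean) value, so that relative spatial differences come from the characteristic and only a scaling exponent is introduced as an extra calibration parameter. The sketch below illustrates this; the characteristic values and exponent are assumptions.

```python
# Sketch of the disaggregation step: spread a lumped parameter over grid cells
# in proportion to a spatial characteristic while preserving the catchment mean.
import numpy as np

def disaggregate(lumped_value, characteristic, exponent=1.0):
    """characteristic: per-cell attribute (e.g. a soil storage capacity index).
    exponent is the only extra calibration parameter introduced here."""
    weights = characteristic ** exponent
    scale = lumped_value / weights.mean()     # preserve the catchment mean
    return scale * weights

soil_storage_index = np.array([[0.8, 1.0, 1.3],
                               [0.9, 1.1, 1.4],
                               [0.7, 1.0, 1.2]])
grid_param = disaggregate(lumped_value=120.0, characteristic=soil_storage_index)
print(grid_param.mean())    # 120.0, consistent with the lumped model
```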

  8. Positive Selection on Loci Associated with Drug and Alcohol Dependence.

    Directory of Open Access Journals (Sweden)

    Brooke Sadler

    Full Text Available Much of the evolution of human behavior remains a mystery, including how certain disadvantageous behaviors are so prevalent. Nicotine addiction is one such phenotype. Several loci have been implicated in nicotine-related phenotypes, including the nicotinic receptor gene clusters (CHRNs) on chromosomes 8 and 15. Here we use 1000 Genomes sequence data from three populations (Africans, Asians and Europeans) to examine whether natural selection has occurred at these loci. We used Tajima's D and the integrated haplotype score (iHS) to test for evidence of natural selection. Our results provide evidence for strong selection in the nicotinic receptor gene cluster on chromosome 8, previously found to be significantly associated with both nicotine and cocaine dependence, as well as evidence of selection acting on the region containing the CHRNA5 nicotinic receptor gene on chromosome 15, which is genome-wide significant for nicotine dependence risk. To examine the possibility that this selection is related to memory and learning, we utilized genetic data from the Collaborative Studies on the Genetics of Alcoholism (COGA) to test variants within these regions with three tests of memory and learning: the Wechsler Adult Intelligence Scale (WAIS) Block Design, WAIS Digit Symbol and WAIS Information tests. Of the 17 SNPs genotyped in COGA in this region, we find one significantly associated with WAIS digit symbol test results. This test captures aspects of reaction time and memory, suggesting that a phenotype relating to memory and learning may have been the driving force behind selection at these loci. This study could begin to explain why these seemingly deleterious SNPs are present at their current frequencies.

  9. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    Science.gov (United States)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

    High-spatial-resolution Remote Sensing (RS) data provide detailed information which ensures high-definition visual image analysis of earth surface features. These data sets also support improved information extraction capabilities at a fine scale. In order to improve the spatial resolution of coarser-resolution RS data, the Super Resolution Reconstruction (SRR) technique, which focuses on multi-angular image sequences, has become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Various SR approaches such as Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC) were chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures including peak signal-to-noise ratio and structural similarity are used for the evaluation of image quality. It was observed that the SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used for the classification of the SRR data and of data resampled to 6 m spatial resolution using bicubic interpolation. A comparative analysis carried out between the classified bicubic-interpolated and SR-derived images of CHRIS/PROBA showed that the SR-derived classified data give a significant improvement of 10-12% in overall accuracy. The results demonstrated that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
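
    As an illustration of one of the listed approaches, the sketch below implements a simplified iterative back projection (IBP) for a single low-resolution image and an integer upscale factor; the multi-angular CHRIS/PROBA case uses several registered views and more careful imaging models, which are omitted here.

```python
# Simplified iterative back-projection (IBP) sketch for a single low-resolution
# image and an integer upscale factor; image sizes and parameters are invented.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ibp_superresolve(lr, factor=3, n_iter=30, blur_sigma=1.0):
    hr = zoom(lr, factor, order=3)                     # initial HR estimate
    for _ in range(n_iter):
        simulated_lr = zoom(gaussian_filter(hr, blur_sigma),
                            1.0 / factor, order=1)     # imaging model: blur + decimate
        error = lr - simulated_lr[:lr.shape[0], :lr.shape[1]]
        hr += zoom(error, factor, order=1)             # back-project the residual
    return hr

lr_image = np.random.rand(40, 40)                      # stand-in for an 18 m band
hr_image = ibp_superresolve(lr_image, factor=3)        # roughly a 6 m grid
print(hr_image.shape)
```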

  10. Towards breaking the spatial resolution barriers: An optical flow and super-resolution approach for sea ice motion estimation

    Science.gov (United States)

    Petrou, Zisis I.; Xian, Yang; Tian, YingLi

    2018-04-01

    Estimation of sea ice motion at fine scales is important for a number of regional and local level applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, as well as safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external example learning-based super-resolution method is applied on the original images to generate higher resolution versions. Then, an optical flow approach is applied on the higher resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracy. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach results in accurate motion vectors with unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.
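
    As a minimal stand-in for the optical flow stage only (the learning-based super-resolution step and the paper's sparse-correspondence interpolation are not reproduced), the sketch below estimates a dense per-pixel displacement field between two co-registered images with OpenCV's Farneback method and converts it to drift speed; the pixel size, time step and synthetic image pair are assumptions.

      import cv2
      import numpy as np

      # Synthetic stand-ins for two co-registered, single-band acquisitions of
      # the same region at times t0 and t1 (t1 is simply a shifted copy of t0).
      img0 = (np.random.rand(256, 256) * 255).astype(np.uint8)
      img1 = np.roll(img0, shift=(2, 3), axis=(0, 1))

      # Dense optical flow: one (dx, dy) displacement vector per pixel.
      # Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations,
      #            poly_n, poly_sigma, flags.
      flow = cv2.calcOpticalFlowFarneback(img0, img1, None,
                                          0.5, 4, 21, 5, 7, 1.5, 0)

      # Convert pixel displacements to speed, assuming the pixel size and the
      # time separation between the two acquisitions are known.
      PIXEL_SIZE_M = 1500.0        # e.g. a 1.5 km grid after super-resolution
      DT_S = 24 * 3600.0           # one day between images
      speed = np.hypot(flow[..., 0], flow[..., 1]) * PIXEL_SIZE_M / DT_S
      print("median drift speed [m/s]:", float(np.median(speed)))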

  11. A global reference database from very high resolution commercial satellite data and methodology for application to Landsat derived 30 m continuous field tree cover data

    Science.gov (United States)

    Pengra, Bruce; Long, Jordan; Dahal, Devendra; Stehman, Stephen V.; Loveland, Thomas R.

    2015-01-01

    The methodology for selection, creation, and application of a global remote sensing validation dataset using high resolution commercial satellite data is presented. High resolution data are obtained for a stratified random sample of 500 primary sampling units (5 km × 5 km sample blocks), where the stratification based on Köppen climate classes is used to distribute the sample globally among biomes. The high resolution data are classified to categorical land cover maps using an analyst mediated classification workflow. Our initial application of these data is to evaluate a global 30 m Landsat-derived, continuous field tree cover product. For this application, the categorical reference classification produced at 2 m resolution is converted to percent tree cover per 30 m pixel (secondary sampling unit) for comparison to Landsat-derived estimates of tree cover. We provide example results (based on a subsample of 25 sample blocks in South America) illustrating basic analyses of agreement that can be produced from these reference data. Commercial high resolution data availability and data quality are shown to provide a viable means of validating continuous field tree cover. When completed, the reference classifications for the full sample of 500 blocks will be released for public use.
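
    The aggregation step described above (a 2 m categorical reference classification converted to percent tree cover per 30 m Landsat pixel) amounts to block averaging of a binary tree mask. A minimal numpy sketch follows; it assumes the 2 m raster is aligned with the 30 m grid so that 15 x 15 fine cells fall within each coarse pixel, and the class code for trees is invented.

      import numpy as np

      def percent_tree_cover(classes_2m, tree_class=1, block=15):
          """Aggregate a 2 m categorical map to percent tree cover per 30 m pixel.

          classes_2m : 2-D integer array of land cover codes at 2 m resolution
          tree_class : code of the 'tree' class (assumption for this sketch)
          block      : fine cells per coarse cell side (30 m / 2 m = 15)
          """
          tree = (classes_2m == tree_class).astype(float)
          h, w = tree.shape
          # Trim to an exact multiple of the block size, then average each block.
          h, w = h - h % block, w - w % block
          blocks = tree[:h, :w].reshape(h // block, block, w // block, block)
          return 100.0 * blocks.mean(axis=(1, 3))

      # Hypothetical 2500 x 2500 cell (5 km x 5 km) sample block at 2 m resolution.
      ref = np.random.randint(0, 4, size=(2500, 2500))
      cover_30m = percent_tree_cover(ref)      # (166, 166) grid after trimming
      print(cover_30m.shape, float(cover_30m.max()))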

  12. Fine grained nuclear emulsion for higher resolution tracking detector

    Energy Technology Data Exchange (ETDEWEB)

    Naka, T., E-mail: naka@flab.phys.nagoya-u.ac.jp [Institute of Advanced Research, Nagoya University, Nagoya (Japan); Asada, T.; Katsuragawa, T.; Hakamata, K.; Yoshimoto, M.; Kuwabara, K.; Nakamura, M.; Sato, O.; Nakano, T. [Graduated School of Science, Nagoya University, Nagoya (Japan); Tawara, Y. [Division of Energy Science, EcoTopia Science Institute, Nagoya University, Nagoya (Japan); De Lellis, G. [INFN Sezione di Napoli, Napoli (Italy); Sirignano, C. [INFN Sezione di Padova, Padova (Italy); D' Ambrossio, N. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (L' Aquila) (Italy)

    2013-08-01

    Fine grained nuclear emulsion with silver halide crystals of several tens of nanometres can detect submicron tracks. This detector is expected to work as a direction-sensitive dark matter detector. Nuclear emulsion can now be produced at Nagoya University, and an extremely fine grained nuclear emulsion with 20 nm diameter crystals was produced. Using this emulsion and a new readout technique that combines an expansion technique, optical selection and X-ray microscopy, recoil tracks induced by dark matter can be detected automatically. The readout efficiency is larger than 80% at 120 nm, and the angular resolution for final confirmation with X-ray microscopy is 20°. In addition, we have started to construct an R&D underground facility at Gran Sasso.

  13. First high-statistics and high-resolution recoil-ion data from the WITCH retardation spectrometer

    Science.gov (United States)

    Finlay, P.; Breitenfeldt, M.; Porobić, T.; Wursten, E.; Ban, G.; Beck, M.; Couratin, C.; Fabian, X.; Fléchard, X.; Friedag, P.; Glück, F.; Herlert, A.; Knecht, A.; Kozlov, V. Y.; Liénard, E.; Soti, G.; Tandecki, M.; Traykov, E.; Van Gorp, S.; Weinheimer, Ch.; Zákoucký, D.; Severijns, N.

    2016-07-01

    The first high-statistics and high-resolution data set for the integrated recoil-ion energy spectrum following the β+ decay of 35Ar has been collected with the WITCH retardation spectrometer located at CERN-ISOLDE. Over 25 million recoil-ion events were recorded on a large-area multichannel plate (MCP) detector with a time-stamp precision of 2 ns and a position resolution of 0.1 mm due to the newly upgraded data acquisition based on the LPC Caen FASTER protocol. The number of recoil ions was measured for more than 15 different settings of the retardation potential, complemented by dedicated background and half-life measurements. Previously unidentified systematic effects, including an energy-dependent efficiency of the main MCP and a radiation-induced time-dependent background, have been identified and incorporated into the analysis. However, further understanding and treatment of the radiation-induced background requires additional dedicated measurements and remains the current limiting factor in extracting a beta-neutrino angular correlation coefficient for 35Ar decay using the WITCH spectrometer.

  14. Influence of Elevation Data Resolution on Spatial Prediction of Colluvial Soils in a Luvisol Region

    Science.gov (United States)

    Penížek, Vít; Zádorová, Tereza; Kodešová, Radka; Vaněk, Aleš

    2016-01-01

    The development of a soil cover is a dynamic process. Soil cover can be altered within a few decades, which requires updating of the legacy soil maps. Soil erosion is one of the most important processes quickly altering soil cover on agricultural land. Colluvial soils develop in concave parts of the landscape as a consequence of sedimentation of eroded material. Colluvial soils are recognised as important soil units because they are a vast sink of soil organic carbon. Terrain derivatives have become an important tool in digital soil mapping and are among the most popular auxiliary data used for quantitative spatial prediction. Prediction success rates are often directly dependent on raster resolution. In our study, we tested how raster resolution (1, 2, 3, 5, 10, 20 and 30 meters) influences spatial prediction of colluvial soils. Terrain derivatives (altitude, slope, plane curvature, topographic position index, LS factor and convergence index) were calculated for the given raster resolutions. Four models were applied (boosted tree, neural network, random forest and Classification/Regression Tree) to spatially predict the soil cover over a 77 ha study plot. Model training and validation were based on 111 soil profiles surveyed on a regular sampling grid. Moreover, the predicted real extent and shape of the colluvial soil area were examined. In general, no clear trend in prediction accuracy was found within the given raster resolution range. The higher maximum prediction accuracy for colluvial soil, compared to the prediction accuracy for the total soil cover of the study plot, can be explained by the choice of terrain derivatives that were best suited for differentiating Colluvial soils from other soil units. Regarding the character of the predicted Colluvial soil area, maps of 2 to 10 m resolution provided reasonable delineation of the colluvial soil as part of the soil cover over the study area. PMID:27846230

  15. Influence of Elevation Data Resolution on Spatial Prediction of Colluvial Soils in a Luvisol Region.

    Directory of Open Access Journals (Sweden)

    Vít Penížek

    Full Text Available The development of a soil cover is a dynamic process. Soil cover can be altered within a few decades, which requires updating of the legacy soil maps. Soil erosion is one of the most important processes quickly altering soil cover on agricultural land. Colluvial soils develop in concave parts of the landscape as a consequence of sedimentation of eroded material. Colluvial soils are recognised as important soil units because they are a vast sink of soil organic carbon. Terrain derivatives have become an important tool in digital soil mapping and are among the most popular auxiliary data used for quantitative spatial prediction. Prediction success rates are often directly dependent on raster resolution. In our study, we tested how raster resolution (1, 2, 3, 5, 10, 20 and 30 meters) influences spatial prediction of colluvial soils. Terrain derivatives (altitude, slope, plane curvature, topographic position index, LS factor and convergence index) were calculated for the given raster resolutions. Four models were applied (boosted tree, neural network, random forest and Classification/Regression Tree) to spatially predict the soil cover over a 77 ha study plot. Model training and validation were based on 111 soil profiles surveyed on a regular sampling grid. Moreover, the predicted real extent and shape of the colluvial soil area were examined. In general, no clear trend in prediction accuracy was found within the given raster resolution range. The higher maximum prediction accuracy for colluvial soil, compared to the prediction accuracy for the total soil cover of the study plot, can be explained by the choice of terrain derivatives that were best suited for differentiating Colluvial soils from other soil units. Regarding the character of the predicted Colluvial soil area, maps of 2 to 10 m resolution provided reasonable delineation of the colluvial soil as part of the soil cover over the study area.
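
    As a schematic of one of the four prediction models named in the two records above (a random forest applied to terrain derivatives at a single raster resolution), the scikit-learn sketch below uses synthetic stand-ins for the 111 soil profiles and the six derivatives listed in the abstract; the sampling design, validation scheme and real covariate values of the study are not reproduced.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      n_profiles = 111                      # soil profiles on a regular grid

      # Synthetic terrain derivatives at one raster resolution: altitude, slope,
      # plan curvature, topographic position index, LS factor, convergence index.
      X = rng.normal(size=(n_profiles, 6))
      # Synthetic labels: 1 = colluvial soil, 0 = other soil unit (here loosely
      # tied to negative topographic position, i.e. concave positions).
      y = (X[:, 3] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_profiles) < -0.5).astype(int)

      model = RandomForestClassifier(n_estimators=500, random_state=0)
      print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

      # Refit on all profiles and predict colluvial-soil probability over a grid
      # of terrain derivatives (another synthetic block standing in for the plot).
      model.fit(X, y)
      grid_features = rng.normal(size=(1000, 6))
      colluvial_probability = model.predict_proba(grid_features)[:, 1]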

  16. Persistent Data Layout and Infrastructure for Efficient Selective Retrieval of Event Data in ATLAS

    CERN Document Server

    INSPIRE-00084279; Malon, David

    2011-01-01

    The ATLAS detector at CERN has completed its first full year of recording collisions at 7 TeV, resulting in billions of events and petabytes of data. At these scales, physicists must have the capability to read only the data of interest to their analyses, with the importance of efficient selective access increasing as data taking continues. ATLAS has developed a sophisticated event-level metadata infrastructure and supporting I/O framework allowing event selections by explicit specification, by back navigation, and by selection queries to a TAG database via an integrated web interface. These systems and their performance have been reported on elsewhere. The ultimate success of such a system, however, depends significantly upon the efficiency of selective event retrieval. Supporting such retrieval can be challenging, as ATLAS stores its event data in column-wise orientation using ROOT trees for a number of reasons, including compression considerations, histogramming use cases, and more. For 2011 data, ATLAS wi...

  17. Selective maintenance of multi-state systems with structural dependence

    International Nuclear Information System (INIS)

    Dao, Cuong D.; Zuo, Ming J.

    2017-01-01

    This paper studies the selective maintenance problem for multi-state systems with structural dependence. Each component can be in one of multiple working levels, and several maintenance actions are possible for a component during a maintenance break. The components structurally form multiple hierarchical levels and dependence groups. A directed graph is used to represent the precedence relations of components in the system. A selective maintenance optimization model is developed to maximize the system reliability in the next mission under time and cost constraints. A backward search algorithm is used to determine the assembly sequence for a selective maintenance scenario. The maintenance model helps maintenance managers in determining the best combination of maintenance activities to maximize the probability of successfully completing the next mission. Examples showing the use of the proposed method are presented. - Highlights: • A selective maintenance model for multi-state systems is proposed considering both economic and structural dependence. • Structural dependence is modeled as precedence relationships when disassembling components for maintenance. • Resources for disassembly and maintenance are evaluated using a backward search algorithm. • Maintenance strategies with and without structural dependence are analyzed. • Ignoring structural dependence may lead to over-estimation of system reliability.

  18. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    Science.gov (United States)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by using a novel multi-scale wavelet decomposition scheme, which makes it possible to capture the smooth trends and track the abrupt changes of time-varying parameters simultaneously. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide a highly time-dependent spectral resolution capability.
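
    The core idea above - expanding each time-varying AR coefficient on basis functions of time so that the nonstationary fit becomes ordinary least squares on an enlarged design matrix - can be sketched as follows. The sketch uses generic Gaussian bumps rather than the paper's multi-scale wavelets, and plain least squares instead of the FOLS/mutual-information term selection, so it only illustrates the parametrisation; all signals and settings are synthetic.

      import numpy as np

      def tvar_fit(y, order=2, n_basis=8):
          """Fit y[t] = sum_i a_i(t) * y[t-i] + e[t] with each a_i(t) expanded
          on n_basis Gaussian bumps spread over the record; returns a_i(t)."""
          T = len(y)
          t = np.linspace(0, 1, T)
          centres = np.linspace(0, 1, n_basis)
          width = 1.5 / n_basis
          basis = np.exp(-0.5 * ((t[:, None] - centres[None, :]) / width) ** 2)

          # Design matrix: one column per (lag i, basis j) pair, rows t = order..T-1.
          cols = [y[order - i:T - i, None] * basis[order:] for i in range(1, order + 1)]
          X = np.concatenate(cols, axis=1)
          coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)

          # Recombine the basis weights into time-varying AR coefficients.
          return basis @ coef.reshape(order, n_basis).T      # shape (T, order)

      # Toy nonstationary signal: an AR(2) process whose first coefficient drifts.
      rng = np.random.default_rng(1)
      T, y = 500, np.zeros(500)
      for t in range(2, T):
          a1 = 1.5 - 0.8 * t / T
          y[t] = a1 * y[t - 1] - 0.7 * y[t - 2] + 0.1 * rng.normal()
      a_t = tvar_fit(y)
      print(a_t[50], a_t[450])     # estimated (a1, a2) near the start and the end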

  19. Efficient Use of Historical Data for Genomic Selection: A Case Study of Stem Rust Resistance in Wheat

    Directory of Open Access Journals (Sweden)

    J. Rutkoski

    2015-03-01

    Full Text Available Genomic selection (GS) is a methodology that can improve crop breeding efficiency. To implement GS, a training population (TP) with phenotypic and genotypic data is required to train a statistical model used to predict genotyped selection candidates (SCs). A key factor impacting prediction accuracy is the relationship between the TP and the SCs. This study used empirical data for quantitative adult plant resistance to stem rust of wheat (Triticum aestivum L.) to investigate the utility of a historical TP compared with a population-specific TP, the potential for TP optimization, and the utility of historical TP data when close relative data are available for training. We found that, depending on the population size, a population-specific TP was 1.5 to 4.4 times more accurate than a historical TP, and TP optimization based on the mean of the generalized coefficient of determination or prediction error variance enabled the selection of subsets that led to significantly higher accuracy than randomly selected subsets. Retaining historical data when data on close relatives were available led to an 11.9% increase in accuracy, at best, and a 12% decrease in accuracy, at worst, depending on the heritability. We conclude that historical data could be used successfully to initiate a GS program, especially if the dataset is very large and of high heritability. Training population optimization would be useful for the identification of TP subsets to phenotype additional traits. However, after model updating, discarding historical data may be warranted. More studies are needed to determine if these observations represent general trends.

  20. LENS MODELS OF HERSCHEL-SELECTED GALAXIES FROM HIGH-RESOLUTION NEAR-IR OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Calanog, J. A.; Cooray, A.; Ma, B.; Casey, C. M. [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Fu, Hai [Department of Physics and Astronomy, University of Iowa, Van Allen Hall, Iowa City, IA 52242 (United States); Wardlow, J. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Amber, S. [Department of Physical Sciences, The Open University, Milton Keynes MK7 6AA (United Kingdom); Baker, A. J. [Department of Physics and Astronomy, Rutgers, The State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Baes, M. [1 Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281, B-9000 Gent (Belgium); Bock, J. [California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States); Bourne, N.; Dye, S. [School of Physics and Astronomy, University of Nottingham, NG7 2RD (United Kingdom); Bussmann, R. S. [Department of Astronomy, Space Science Building, Cornell University, Ithaca, NY 14853-6801 (United States); Chapman, S. C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Clements, D. L. [Astrophysics Group, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom); Conley, A. [Center for Astrophysics and Space Astronomy 389-UCB, University of Colorado, Boulder, CO 80309 (United States); Dannerbauer, H. [Laboratoire AIM-Paris-Saclay, CEA/DSM/Irfu-CNRS-Université Paris Diderot, CE-Saclay, pt courrier 131, F-91191 Gif-sur-Yvette (France); De Zotti, G. [INAF-Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); Dunne, L.; Eales, S. [School of Physics and Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff CF24 3AA (United Kingdom); and others

    2014-12-20

    We present Keck-Adaptive Optics and Hubble Space Telescope high resolution near-infrared (IR) imaging for 500 μm bright candidate lensing systems identified by the Herschel Multi-tiered Extragalactic Survey and Herschel Astrophysical Terahertz Large Area Survey. Out of 87 candidates with near-IR imaging, 15 (∼17%) display clear near-IR lensing morphologies. We present near-IR lens models to reconstruct and recover basic rest-frame optical morphological properties of the background galaxies from 12 new systems. Sources with the largest near-IR magnification factors also tend to be the most compact, consistent with the size bias predicted from simulations and previous lensing models for submillimeter galaxies (SMGs). For four new sources that also have high-resolution submillimeter maps, we test for differential lensing between the stellar and dust components and find that the 880 μm magnification factor (μ{sub 880}) is ∼1.5 times higher than the near-IR magnification factor (μ{sub NIR}), on average. We also find that the stellar emission is ∼2 times more extended in size than dust. The rest-frame optical properties of our sample of Herschel-selected lensed SMGs are consistent with those of unlensed SMGs, which suggests that the two populations are similar.

  1. Science with High Spatial Resolution Far-Infrared Data

    Science.gov (United States)

    Terebey, Susan (Editor); Mazzarella, Joseph M. (Editor)

    1994-01-01

    The goal of this workshop was to discuss new science and techniques relevant to high spatial resolution processing of far-infrared data, with particular focus on high resolution processing of IRAS data. Users of the maximum correlation method, maximum entropy, and other resolution enhancement algorithms applicable to far-infrared data gathered at the Infrared Processing and Analysis Center (IPAC) for two days in June 1993 to compare techniques and discuss new results. During a special session on the third day, interested astronomers were introduced to IRAS HIRES processing, which is IPAC's implementation of the maximum correlation method to the IRAS data. Topics discussed during the workshop included: (1) image reconstruction; (2) random noise; (3) imagery; (4) interacting galaxies; (5) spiral galaxies; (6) galactic dust and elliptical galaxies; (7) star formation in Seyfert galaxies; (8) wavelet analysis; and (9) supernova remnants.

  2. Downscaling reanalysis data to high-resolution variables above a glacier surface (Cordillera Blanca, Peru)

    Science.gov (United States)

    Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-05-01

    Recently initiated observation networks in the Cordillera Blanca provide temporally high-resolution, yet short-term atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly NCEP/NCAR reanalysis data to the local target variables measured at the tropical glacier Artesonraju (Northern Cordillera Blanca). The approach is distinctive in the context of ESD for two reasons. First, the observational time series for model calibration are short (only about two years). Second, unlike most ESD studies in climate research, we focus on variables at a high temporal resolution (i.e., six-hourly values). Our target variables are two important drivers in the surface energy balance of tropical glaciers: air temperature and specific humidity. The selection of predictor fields from the reanalysis data is based on regression analyses and climatologic considerations. The ESD modelling procedure includes combined empirical orthogonal function and multiple regression analyses. Principal component screening is based on cross-validation using the Akaike Information Criterion as the model selection criterion. Double cross-validation is applied for model evaluation. Potential autocorrelation in the time series is considered by defining the block length in the resampling procedure. Apart from the selection of predictor fields, the modelling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice by using both single- and mixed-field predictors of the variables air temperature (1000 hPa), specific humidity (1000 hPa), and zonal wind speed (500 hPa). The chosen downscaling domain ranges from 80 to 50 degrees west and from 0 to 20 degrees south. Statistical transfer functions are derived individually for different months and times of day (month/hour-models). The forecast skill of the month/hour-models largely depends on
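
    As a schematic of the combined empirical orthogonal function and multiple regression step described above, the sketch below reduces a gridded predictor field to its leading principal components and regresses a local target variable on them. The predictor screening by cross-validation/AIC, the month/hour stratification and the block resampling of the study are omitted, and all arrays are synthetic placeholders.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Synthetic stand-ins: ~2 years of 6-hourly reanalysis predictor fields
      # (e.g. 1000 hPa air temperature on a 30 x 40 grid) and a local station series.
      n_times, ny, nx = 2920, 30, 40
      predictor_field = rng.normal(size=(n_times, ny, nx))
      local_target = rng.normal(size=n_times)       # e.g. 6-hourly air temperature

      # EOF analysis = PCA on the flattened (time x gridpoint) matrix.
      pca = PCA(n_components=5)
      pcs = pca.fit_transform(predictor_field.reshape(n_times, -1))

      # Multiple regression of the local variable on the leading principal components.
      reg = LinearRegression()
      print("cross-validated R^2:", cross_val_score(reg, pcs, local_target, cv=5).mean())
      reg.fit(pcs, local_target)

      # Downscale a new reanalysis time step to the local site.
      new_field = rng.normal(size=(1, ny, nx))
      print(reg.predict(pca.transform(new_field.reshape(1, -1))))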

  3. Urban Area Extent Extraction in Spaceborne HR and VHR Data Using Multi-Resolution Features

    Directory of Open Access Journals (Sweden)

    Gianni Cristian Iannelli

    2014-09-01

    Full Text Available Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an “urban area” is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, “urban area” extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR data prove the usefulness and flexibility of the framework.

  4. Intercohort density dependence drives brown trout habitat selection

    Science.gov (United States)

    Ayllón, Daniel; Nicola, Graciela G.; Parra, Irene; Elvira, Benigno; Almodóvar, Ana

    2013-01-01

    Habitat selection can be viewed as an emergent property of the quality and availability of habitat, but also of the number of individuals and the way they compete for its use. Consequently, habitat selection can change across years due to fluctuating resources or to changes in population numbers. However, habitat selection predictive models often do not account for ecological dynamics, especially density-dependent processes. In stage-structured populations, the strength of density-dependent interactions between individuals of different age classes can exert a profound influence on population trajectories and evolutionary processes. In this study, we aimed to assess the effects of fluctuating densities of both older and younger competing life stages on the habitat selection patterns (described as univariate and multivariate resource selection functions) of young-of-the-year, juvenile and adult brown trout Salmo trutta. We observed that all age classes were selective in habitat choice but changed their selection patterns across years consistently with variations in the densities of older but not of younger age classes. Trout of a given age increased selectivity for positions highly selected by older individuals when their density decreased, but this pattern did not hold when the density of younger age classes varied. This suggests that younger individuals are dominated by older ones but can expand their range of selected habitats when the density of competitors decreases, while older trout do not seem to consider the density of younger individuals when distributing themselves, even though these can negatively affect their final performance. Since these results may entail critical implications for conservation and management practices based on habitat selection models, further research should involve a wider range of river typologies and/or longer time frames to fully understand the patterns of and the mechanisms underlying the operation of density dependence on brown trout habitat

  5. Constant resolution of time-dependent Hartree--Fock phase ambiguity

    International Nuclear Information System (INIS)

    Lichtner, P.C.; Griffin, J.J.; Schultheis, H.; Schultheis, R.; Volkov, A.B.

    1978-01-01

    The customary time-dependent Hartree-Fock problem is shown to be ambiguous up to an arbitrary function of time additive to H_HF, and, consequently, up to an arbitrary time-dependent phase for the solution, Φ(t). The "constant ⟨H⟩" phase is proposed as the best resolution of this ambiguity. It leads to the following attractive features: (a) the time-dependent Hartree-Fock (TDHF) Hamiltonian, H_HF, becomes a quantity whose expectation value is equal to the average energy and, hence, constant in time; (b) eigenstates described exactly by determinants have time-dependent Hartree-Fock solutions identical with the exact time-dependent solutions; (c) among all possible TDHF solutions this choice minimizes the norm of the quantity (H - iħ∂/∂t) operating on the ket Φ, and guarantees optimal time evolution over an infinitesimal period; (d) this choice corresponds both to the stationary value of the absolute difference between ⟨H⟩ and ⟨iħ∂/∂t⟩ and simultaneously to its absolute minimal value with respect to the choice of the time-dependent phase. The source of the ambiguity is discussed. It lies in the time-dependent generalization of the freedom to transform unitarily among the single-particle states of a determinant at the (physically irrelevant for stationary states) cost of altering only a factor of unit magnitude

  6. Entity resolution for uncertain data

    NARCIS (Netherlands)

    Ayat, N.; Akbarinia, R.; Afsarmanesh, H.; Valduriez, P.

    2012-01-01

    Entity resolution (ER), also known as duplicate detection or record matching, is the problem of identifying the tuples that represent the same real world entity. In this paper, we address the problem of ER for uncertain data, which we call ERUD. We propose two different approaches for the ERUD

  7. Linear mixing model applied to coarse resolution satellite data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of unmixing techniques applied to coarse resolution data for global studies.
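
    The Constrained Least Squares step described above solves, pixel by pixel, for endmember fractions that best reproduce the observed reflectances subject to non-negativity and, commonly, a sum-to-one constraint. A minimal sketch follows; the three-band/three-endmember setup mirrors the AVHRR channels used in the record, but the endmember spectra and the observed pixel are invented numbers, and the soft sum-to-one trick is only one of several possible formulations.

      import numpy as np
      from scipy.optimize import nnls

      # Endmember reflectances in the three bands (rows: channels 1, 2 and the
      # reflective part of 3; columns: vegetation, soil, shade). Illustrative only.
      E = np.array([[0.05, 0.25, 0.02],
                    [0.40, 0.30, 0.02],
                    [0.03, 0.20, 0.01]])

      def unmix_pixel(reflectance, endmembers, weight=100.0):
          """Non-negative, approximately sum-to-one fractions for one pixel.

          The sum-to-one constraint is enforced softly by appending a heavily
          weighted row of ones to the least-squares system.
          """
          A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
          b = np.append(reflectance, weight * 1.0)
          fractions, _ = nnls(A, b)
          return fractions

      pixel = np.array([0.18, 0.33, 0.12])     # observed reflectances, bands 1, 2, 3
      f = unmix_pixel(pixel, E)
      print("vegetation, soil, shade fractions:", f, "sum:", float(f.sum()))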

  8. Automated data processing of high-resolution mass spectra

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Smedsgaard, Jørn

    of the massive amounts of data. We present an automated data processing method to quantitatively compare large numbers of spectra from the analysis of complex mixtures, exploiting the full quality of high-resolution mass spectra. By projecting all detected ions - within defined intervals on both the time... infusion of crude extracts into the source, taking advantage of the high sensitivity, high mass resolution and accuracy and the limited fragmentation. Unfortunately, there has not been a comparable development in the data processing techniques to fully exploit the gain in high resolution and accuracy... infusion analyses of crude extracts to find the relationship between species from several terverticillate Penicillium species, and also that the ions responsible for the segregation can be identified. Furthermore, the procedure can automate the detection of unique species and unique metabolites....

  9. Impact of respiratory motion correction and spatial resolution on lesion detection in PET: a simulation study based on real MR dynamic data

    Science.gov (United States)

    Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.

    2014-02-01

    The aim of this study is to investigate the impact of respiratory motion correction and spatial resolution on lesion detectability in PET as a function of lesion size and tracer uptake. Real respiratory signals describing different breathing types are combined with a motion model formed from real dynamic MR data to simulate multiple dynamic PET datasets acquired from a continuously moving subject. Lung and liver lesions were simulated with diameters ranging from 6 to 12 mm and lesion to background ratio ranging from 3:1 to 6:1. Projection data for 6 and 3 mm PET scanner resolution were generated using analytic simulations and reconstructed without and with motion correction. Motion correction was achieved using motion compensated image reconstruction. The detectability performance was quantified by a receiver operating characteristic (ROC) analysis obtained using a channelized Hotelling observer and the area under the ROC curve (AUC) was calculated as the figure of merit. The results indicate that respiratory motion limits the detectability of lung and liver lesions, depending on the variation of the breathing cycle length and amplitude. Patients with large quiescent periods had a greater AUC than patients with regular breathing cycles and patients with long-term variability in respiratory cycle or higher motion amplitude. In addition, small (less than 10 mm diameter) or low contrast (3:1) lesions showed the greatest improvement in AUC as a result of applying motion correction. In particular, after applying motion correction the AUC is improved by up to 42% with current PET resolution (i.e. 6 mm) and up to 51% for higher PET resolution (i.e. 3 mm). Finally, the benefit of increasing the scanner resolution is small unless motion correction is applied. This investigation indicates high impact of respiratory motion correction on lesion detectability in PET and highlights the importance of motion correction in order to benefit from the increased resolution of future

  10. Impact of respiratory motion correction and spatial resolution on lesion detection in PET: a simulation study based on real MR dynamic data

    International Nuclear Information System (INIS)

    Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P; Marsden, Paul K

    2014-01-01

    The aim of this study is to investigate the impact of respiratory motion correction and spatial resolution on lesion detectability in PET as a function of lesion size and tracer uptake. Real respiratory signals describing different breathing types are combined with a motion model formed from real dynamic MR data to simulate multiple dynamic PET datasets acquired from a continuously moving subject. Lung and liver lesions were simulated with diameters ranging from 6 to 12 mm and lesion to background ratio ranging from 3:1 to 6:1. Projection data for 6 and 3 mm PET scanner resolution were generated using analytic simulations and reconstructed without and with motion correction. Motion correction was achieved using motion compensated image reconstruction. The detectability performance was quantified by a receiver operating characteristic (ROC) analysis obtained using a channelized Hotelling observer and the area under the ROC curve (AUC) was calculated as the figure of merit. The results indicate that respiratory motion limits the detectability of lung and liver lesions, depending on the variation of the breathing cycle length and amplitude. Patients with large quiescent periods had a greater AUC than patients with regular breathing cycles and patients with long-term variability in respiratory cycle or higher motion amplitude. In addition, small (less than 10 mm diameter) or low contrast (3:1) lesions showed the greatest improvement in AUC as a result of applying motion correction. In particular, after applying motion correction the AUC is improved by up to 42% with current PET resolution (i.e. 6 mm) and up to 51% for higher PET resolution (i.e. 3 mm). Finally, the benefit of increasing the scanner resolution is small unless motion correction is applied. This investigation indicates high impact of respiratory motion correction on lesion detectability in PET and highlights the importance of motion correction in order to benefit from the increased resolution of future

  11. Analysing the Advantages of High Temporal Resolution Geostationary MSG SEVIRI Data Compared to Polar Operational Environmental Satellite Data for Land Surface Monitoring in Africa

    Science.gov (United States)

    Fensholt, R.; Anyamba, A.; Huber, S.; Proud, S. R.; Tucker, C. J.; Small, J.; Pak, E.; Rasmussen, M. O.; Sandholt, I.; Shisanya, C.

    2011-01-01

    Since 1972, satellite remote sensing of the environment has been dominated by polar-orbiting sensors providing useful data for monitoring the earth's natural resources. However, their observation and monitoring capacity is inhibited by daily to monthly looks at any given ground surface, which is often obscured by frequent and persistent cloud cover, creating large gaps in time series measurements. The launch of the Meteosat Second Generation (MSG) satellite into geostationary orbit has opened new opportunities for land surface monitoring. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) instrument on-board MSG has an imaging capability every 15 minutes, which is substantially greater than the temporal resolution that can be obtained from existing polar operational environmental satellite (POES) systems currently in use for environmental monitoring. Different areas of the African continent were affected by droughts and floods in 2008, caused by periods of abnormally low and high rainfall, respectively. Based on the effectiveness of monitoring these events from Earth Observation (EO) data, the current analyses show that the new generation of geostationary remote sensing data can provide higher temporal resolution cloud-free (less than 5 days) measurements of the environment as compared to existing POES systems. SEVIRI MSG 5-day continental scale composites will enable rapid assessment of environmental conditions and improved early warning of disasters for the African continent such as flooding or droughts. The high temporal resolution geostationary data will complement existing higher spatial resolution polar-orbiting satellite data for various dynamic environmental and natural resource applications of terrestrial ecosystems.

  12. Conditional Selection of Genomic Alterations Dictates Cancer Evolution and Oncogenic Dependencies.

    Science.gov (United States)

    Mina, Marco; Raynaud, Franck; Tavernari, Daniele; Battistello, Elena; Sungalee, Stephanie; Saghafinia, Sadegh; Laessle, Titouan; Sanchez-Vega, Francisco; Schultz, Nikolaus; Oricchio, Elisa; Ciriello, Giovanni

    2017-08-14

    Cancer evolves through the emergence and selection of molecular alterations. Cancer genome profiling has revealed that specific events are more or less likely to be co-selected, suggesting that the selection of one event depends on the others. However, the nature of these evolutionary dependencies and their impact remain unclear. Here, we designed SELECT, an algorithmic approach to systematically identify evolutionary dependencies from alteration patterns. By analyzing 6,456 genomes from multiple tumor types, we constructed a map of oncogenic dependencies associated with cellular pathways, transcriptional readouts, and therapeutic response. Finally, modeling of cancer evolution shows that alteration dependencies emerge only under conditional selection. These results provide a framework for the design of strategies to predict cancer progression and therapeutic response. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Estimating structure quality trends in the Protein Data Bank by equivalent resolution.

    Science.gov (United States)

    Bagaria, Anurag; Jaravine, Victor; Güntert, Peter

    2013-10-01

    The quality of protein structures obtained by different experimental and ab-initio calculation methods varies considerably. The methods have been evolving over time by improving both experimental designs and computational techniques, and since the primary aim of these developments is the procurement of reliable and high-quality data, better techniques resulted on average in an evolution toward higher quality structures in the Protein Data Bank (PDB). Each method leaves a specific quantitative and qualitative "trace" in the PDB entry. Certain information relevant to one method (e.g. dynamics for NMR) may be lacking for another method. Furthermore, some standard measures of quality for one method cannot be calculated for other experimental methods, e.g. crystal resolution or NMR bundle RMSD. Consequently, structures are classified in the PDB by the method used. Here we introduce a method to estimate a measure of equivalent X-ray resolution (e-resolution), expressed in units of Å, to assess the quality of any type of monomeric, single-chain protein structure, irrespective of the experimental structure determination method. We showed and compared the trends in the quality of structures in the Protein Data Bank over the last two decades for five different experimental techniques, excluding theoretical structure predictions. We observed that as new methods are introduced, they undergo a rapid method development evolution: within several years the e-resolution score becomes similar for structures obtained from the five methods and they improve from initially poor performance to acceptable quality, comparable with previously established methods, the performance of which is essentially stable. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. The optimization of high resolution topographic data for 1D hydrodynamic models

    International Nuclear Information System (INIS)

    Ales, Ronovsky; Michal, Podhoranyi

    2016-01-01

    The main focus of our research presented in this paper is to optimize and use high resolution topographical data (HRTD) for hydrological modelling. Optimization of HRTD is done by generating an adaptive mesh: the distance between a coarse mesh and the surface of the dataset is measured, and the mesh is adapted so as to keep the geometry as close to the initial resolution as possible. The technique described in this paper enables computation of very accurate 1-D hydrodynamic models. In the paper, we use HEC-RAS software as the solver. For comparison, we have chosen the number of generated cells/grid elements (in the whole discretization domain and in selected cross sections) with respect to preservation of the accuracy of the computational domain. Generation of the mesh for hydrodynamic modelling is strongly reliant on domain size and domain resolution. The topographical dataset used in this paper was created using the LiDAR method and captures a 5.9 km long section of a catchment of the river Olše. We studied crucial changes in topography for the generated mesh. Assessment was done by commonly used statistical and visualization methods.

  15. The optimization of high resolution topographic data for 1D hydrodynamic models

    Science.gov (United States)

    Ales, Ronovsky; Michal, Podhoranyi

    2016-06-01

    The main focus of our research presented in this paper is to optimize and use high resolution topographical data (HRTD) for hydrological modelling. Optimization of HRTD is done by generating an adaptive mesh: the distance between a coarse mesh and the surface of the dataset is measured, and the mesh is adapted so as to keep the geometry as close to the initial resolution as possible. The technique described in this paper enables computation of very accurate 1-D hydrodynamic models. In the paper, we use HEC-RAS software as the solver. For comparison, we have chosen the number of generated cells/grid elements (in the whole discretization domain and in selected cross sections) with respect to preservation of the accuracy of the computational domain. Generation of the mesh for hydrodynamic modelling is strongly reliant on domain size and domain resolution. The topographical dataset used in this paper was created using the LiDAR method and captures a 5.9 km long section of a catchment of the river Olše. We studied crucial changes in topography for the generated mesh. Assessment was done by commonly used statistical and visualization methods.

  16. The optimization of high resolution topographic data for 1D hydrodynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Ales, Ronovsky, E-mail: ales.ronovsky@vsb.cz; Michal, Podhoranyi [IT4Innovations National Supercomputing Center, VŠB-Technical University of Ostrava, Studentská 6231/1B, 708 33 Ostrava (Czech Republic)

    2016-06-08

    The main focus of our research presented in this paper is to optimize and use high resolution topographical data (HRTD) for hydrological modelling. Optimization of HRTD is done by generating an adaptive mesh: the distance between a coarse mesh and the surface of the dataset is measured, and the mesh is adapted so as to keep the geometry as close to the initial resolution as possible. The technique described in this paper enables computation of very accurate 1-D hydrodynamic models. In the paper, we use HEC-RAS software as the solver. For comparison, we have chosen the number of generated cells/grid elements (in the whole discretization domain and in selected cross sections) with respect to preservation of the accuracy of the computational domain. Generation of the mesh for hydrodynamic modelling is strongly reliant on domain size and domain resolution. The topographical dataset used in this paper was created using the LiDAR method and captures a 5.9 km long section of a catchment of the river Olše. We studied crucial changes in topography for the generated mesh. Assessment was done by commonly used statistical and visualization methods.

  17. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.

  18. A Central European precipitation climatology – Part II: Application of the high-resolution HYRAS data for COSMO-CLM evaluation

    Directory of Open Access Journals (Sweden)

    Susanne Brienen

    2016-05-01

    Full Text Available The horizontal resolution of regional climate model (RCM) simulations has been increasing constantly in recent years. For the evaluation of these simulations and the further development of the models, adequate observational data sets are required, in particular with respect to the spatial scales. The aim of this paper is to investigate the value of a new high-resolution precipitation climatology, the HYRAS-PRE v.2.0 data set, for the evaluation of RCM output. HYRAS-PRE is available for the time period 1951–2006 at daily resolution and covers ten river catchments in Germany and neighbouring countries at a spatial grid spacing of 5 km. A set of simulations with the regional climate model COSMO-CLM with three different grid spacings (~7, 14 and 28 km) is used for this model evaluation study. In addition, three other data sets with different horizontal resolution are considered in the comparisons: the E-OBS v.8.0 gridded observations (~25 km grid spacing), the ERA-Interim reanalysis (~79 km) and the analysis of the driving model GME (~40–60 km). For three selected years, different spatial and temporal characteristics of daily precipitation are investigated. In all the analyzed precipitation characteristics, it is found that the variability between the data sets is very large. The benefit of an evaluation with HYRAS-PRE compared to coarser-resolved observations becomes visible especially in the representation of the frequency-of-occurrence distribution of daily precipitation amounts and in the spatial variability of different precipitation indices. A second goal of this study was to estimate the error when comparing a high-resolution simulated precipitation field with coarser-resolved observations. Comparing the HYRAS-PRE average over an area of 5×5 grid points with the original HYRAS-PRE data results in a systematic underestimation of high values of all indices considered and an overestimation

  19. Classification of Volcanic Eruptions on Io and Earth Using Low-Resolution Remote Sensing Data

    Science.gov (United States)

    Davies, A. G.; Keszthelyi, L. P.

    2005-01-01

    Two bodies in the Solar System exhibit high-temperature active volcanism: Earth and Io. While there are important differences between the eruptions on Earth and Io, in low-spatial-resolution data (corresponding to the bulk of available and foreseeable data of Io), similar styles of effusive and explosive volcanism yield similar thermal flux densities. For example, a square metre of an active pahoehoe flow on Io looks very similar to a square metre of an active pahoehoe flow on Earth. If, from the observed thermal emission as a function of wavelength and the change in thermal emission with time, the eruption style of an ionian volcano can be constrained, estimates of volumetric fluxes can be made and compared with terrestrial volcanoes using techniques derived for analysing terrestrial remotely-sensed data. In this way we find that ionian volcanoes fundamentally differ from their terrestrial counterparts only in areal extent, with Io volcanoes covering larger areas, with higher volumetric flux. Io outburst eruptions have enormous implied volumetric fluxes, and may scale with terrestrial flood basalt eruptions. Even with the low-spatial resolution data available, it is sometimes possible to constrain and classify eruption style both on Io and Earth from the integrated thermal emission spectrum. Plotting 2 and 5 μm fluxes reveals the evolution of individual eruptions of different styles, as well as the relative intensity of eruptions, allowing comparisons to be made between individual eruptions on both planets. Analyses like this can be used for the interpretation of low-resolution data until the next mission to the jovian system. For a number of Io volcanoes (including Pele, Prometheus, Amirani, Zamama, Culann, Tohil and Tvashtar) we do have high/moderate resolution imagery to aid determination of eruption mode from analyses based only on low spatial-resolution data.

  20. Interactively variable isotropic resolution in computed tomography

    International Nuclear Information System (INIS)

    Lapp, Robert M; Kyriakou, Yiannis; Kachelriess, Marc; Wilharm, Sylvia; Kalender, Willi A

    2008-01-01

    An individual balancing between spatial resolution and image noise is necessary to fulfil the diagnostic requirements in medical CT imaging. In order to change influencing parameters, such as reconstruction kernel or effective slice thickness, additional raw-data-dependent image reconstructions have to be performed. Therefore, the noise versus resolution trade-off is time consuming and not interactively applicable. Furthermore, isotropic resolution, expressed by an equivalent point spread function (PSF) in every spatial direction, is important for the undistorted visualization and quantitative evaluation of small structures independent of the viewing plane. Theoretically, isotropic resolution can be obtained by matching the in-plane and through-plane resolution with the aforementioned parameters. Practically, however, the user is not assisted in doing so by current reconstruction systems and therefore isotropic resolution is not commonly achieved, in particular not at the desired resolution level. In this paper, an integrated approach is presented for equalizing the in-plane and through-plane spatial resolution by image filtering. The required filter kernels are calculated from previously measured PSFs in x/y- and z-direction. The concepts derived are combined with a variable resolution filtering technique. Both approaches are independent of CT raw data and operate only on reconstructed images which allows for their application in real time. Thereby, the aim of interactively variable, isotropic resolution is achieved. Results were evaluated quantitatively by measuring PSFs and image noise, and qualitatively by comparing the images to direct reconstructions regarded as the gold standard. Filtered images matched direct reconstructions with arbitrary reconstruction kernels with standard deviations in difference images of typically between 1 and 17 HU. Isotropic resolution was achieved within 5% of the selected resolution level. Processing times of 20-100 ms per frame
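
    A common way to equalize in-plane and through-plane resolution of the kind discussed above is to smooth the sharper direction with a Gaussian whose width brings the combined point spread function up to the broader one; for approximately Gaussian PSFs the required extra width is sqrt(FWHM_target^2 - FWHM_current^2). The sketch below applies that matching principle to a reconstructed volume and is only an illustration, not the paper's actual filter design; the PSF widths and voxel sizes are invented.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

      def match_resolution(volume, fwhm_xy_mm, fwhm_z_mm, voxel_mm):
          """Blur the sharper axes of a CT volume (z, y, x order) so that all
          three directions share the broader of the two measured PSF widths."""
          target = max(fwhm_xy_mm, fwhm_z_mm)
          out = volume.astype(float)
          for axis, fwhm in zip((0, 1, 2), (fwhm_z_mm, fwhm_xy_mm, fwhm_xy_mm)):
              extra = np.sqrt(max(target ** 2 - fwhm ** 2, 0.0))  # missing FWHM
              sigma_vox = extra * FWHM_TO_SIGMA / voxel_mm[axis]
              if sigma_vox > 0:
                  out = gaussian_filter1d(out, sigma_vox, axis=axis)
          return out

      # Hypothetical volume: 0.6 mm in-plane PSF, 1.2 mm slice PSF, 0.5 mm voxels.
      vol = np.random.rand(64, 128, 128)
      iso = match_resolution(vol, fwhm_xy_mm=0.6, fwhm_z_mm=1.2,
                             voxel_mm=(0.5, 0.5, 0.5))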

  1. Higher surface mass balance of the Greenland ice sheet revealed by high - resolution climate modeling

    NARCIS (Netherlands)

    Ettema, Janneke; van den Broeke, Michiel R.; van Meijgaard, Erik; van de Berg, Willem Jan; Bamber, Jonathan L.; Box, Jason E.; Bales, Roger C.

    2009-01-01

    High‐resolution (∼11 km) regional climate modeling shows total annual precipitation on the Greenland ice sheet for 1958–2007 to be up to 24% and surface mass balance up to 63% higher than previously thought. The largest differences occur in coastal southeast Greenland, where the much higher

  2. Scalable Algorithms for Large High-Resolution Terrain Data

    DEFF Research Database (Denmark)

    Mølhave, Thomas; Agarwal, Pankaj K.; Arge, Lars Allan

    2010-01-01

    In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, so...

  3. Grizzly bear habitat selection is scale dependent.

    Science.gov (United States)

    Ciarniello, Lana M; Boyce, Mark S; Seip, Dale R; Heard, Douglas C

    2007-07-01

    The purpose of our study is to show how ecologists' interpretation of habitat selection by grizzly bears (Ursus arctos) is altered by the scale of observation and also how management questions would be best addressed using predetermined scales of analysis. Using resource selection functions (RSF) we examined how variation in the spatial extent of availability affected our interpretation of habitat selection by grizzly bears inhabiting mountain and plateau landscapes. We estimated separate models for females and males using three spatial extents: within the study area, within the home range, and within predetermined movement buffers. We employed two methods for evaluating the effects of scale on our RSF designs. First, we chose a priori six candidate models, estimated at each scale, and ranked them using Akaike Information Criteria. Using this method, results changed among scales for males but not for females. For female bears, models that included the full suite of covariates predicted habitat use best at each scale. For male bears that resided in the mountains, models based on forest successional stages ranked highest at the study-wide and home range extents, whereas models containing covariates based on terrain features ranked highest at the buffer extent. For male bears on the plateau, each scale estimated a different highest-ranked model. Second, we examined differences among model coefficients across the three scales for one candidate model. We found that both the magnitude and direction of coefficients were dependent upon the scale examined; results varied between landscapes, scales, and sexes. Greenness, reflecting lush green vegetation, was a strong predictor of the presence of female bears in both landscapes and males that resided in the mountains. Male bears on the plateau were the only animals to select areas that exposed them to a high risk of mortality by humans. Our results show that grizzly bear habitat selection is scale dependent. Further, the
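
    Resource selection functions of the kind fitted above are commonly estimated as a used-versus-available logistic regression on habitat covariates, with candidate models ranked by AIC. The sketch below shows that generic recipe with synthetic covariates standing in for variables such as greenness or terrain features; it does not reproduce the study's candidate model set, scales of availability or telemetry data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)

      # Synthetic used (1) and available (0) locations with three covariates
      # (e.g. greenness, terrain ruggedness, distance to road - all invented).
      n_used, n_avail = 300, 3000
      X_used = rng.normal(loc=[0.6, -0.3, -0.5], size=(n_used, 3))
      X_avail = rng.normal(loc=[0.0, 0.0, 0.0], size=(n_avail, 3))
      X = np.vstack([X_used, X_avail])
      y = np.concatenate([np.ones(n_used), np.zeros(n_avail)])

      # Exponential RSF via logistic regression: w(x) ~ exp(b1*greenness + ...).
      model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
      print(model.params)         # selection coefficients
      print("AIC:", model.aic)    # for ranking against other candidate models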

  4. Drought assessment for cropland of Central America using coarse-resolution remote sensing data

    Science.gov (United States)

    Chen, C. F.; Nguyen, S. T.; Chen, C. R.; Chiang, S. H.; Chang, L. Y.; Khin, L. V.

    2015-12-01

    Drought is one of the most frequent and costliest natural disasters, imposing enormous effects on human societies and ecosystems. Agricultural drought refers to an interval of time, such as weeks or months, when the soil moisture supply of a region consistently falls below the appropriate level, leading to negative impacts on agricultural production. Millions of households in Central America depend upon major food crops, including maize, beans, and sorghum, for their daily subsistence. In recent years, impacts of climate change through global warming, in the form of higher temperatures and widespread rainfall deficits, have triggered severe drought during the primera cropping season (April-August) in the study region, causing profound impacts on agriculture, crop production losses, increased market food prices, as well as food security issues. This study focuses on investigating agricultural droughts for cropland of Central America using Moderate Resolution Imaging Spectroradiometer (MODIS) data. We processed the data for a normal year (2013) and an abnormal year (2014) using a simple vegetation health index (VHI) derived from the temperature condition index (TCI) and the vegetation condition index (VCI). The VHI results were validated using Advanced Microwave Scanning Radiometer 2 (AMSR2) precipitation data and the temperature vegetation dryness index (TVDI), developed from empirical analysis of TCI and VCI data. The correlation coefficients (r) obtained by comparing the VHI data with the AMSR2 precipitation and TVDI data were higher than 0.62 and -0.61, respectively. The severe drought was intensive during the dry season (January-April) and likely returned to normal conditions in May with the onset of the rainy season. A larger area of severe drought was observed for the 2014 primera season, especially during April-July. When investigating the cultivated areas affected by severe drought in the primera
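    For reference, VHI is commonly formed as a weighted blend of VCI and TCI; the sketch below uses the widely used equal weighting, which is an assumption, since the abstract does not state the weights used in the study.

```python
# Illustrative VCI/TCI/VHI computation on co-registered raster arrays.
# alpha = 0.5 is a common default, not necessarily the study's choice.
import numpy as np

def vegetation_health_index(ndvi, lst, ndvi_min, ndvi_max, lst_min, lst_max, alpha=0.5):
    """ndvi, lst: current composites; *_min/*_max: multi-year per-pixel extremes."""
    vci = 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)
    tci = 100.0 * (lst_max - lst) / (lst_max - lst_min)   # hotter = more stressed
    vhi = alpha * vci + (1.0 - alpha) * tci
    return vci, tci, np.clip(vhi, 0.0, 100.0)

# Pixels with VHI below roughly 40 are often flagged as drought-affected.
```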

  5. High spatial resolution upgrade of the electron cyclotron emission radiometer for the DIII-D tokamak.

    Science.gov (United States)

    Truong, D D; Austin, M E

    2014-11-01

    The 40-channel DIII-D electron cyclotron emission (ECE) radiometer provides measurements of Te(r,t) at the tokamak midplane from optically thick, second harmonic X-mode emission over a frequency range of 83-130 GHz. The frequency spacing of the radiometer's channels results in a spatial resolution of ∼1-3 cm, depending on local magnetic field and electron temperature. A new high resolution subsystem has been added to the DIII-D ECE radiometer to make sub-centimeter (0.6-0.8 cm) resolution Te measurements. The high resolution subsystem branches off from the regular channels' IF bands and consists of a microwave switch to toggle between IF bands, a switched filter bank for frequency selectivity, an adjustable local oscillator and mixer for further frequency down-conversion, and a set of eight microwave filters in the 2-4 GHz range. Higher spatial resolution is achieved through the use of a narrower (200 MHz) filter bandwidth and closer spacing between the filters' center frequencies (250 MHz). This configuration allows for full coverage of the 83-130 GHz frequency range in 2 GHz bands. Depending on the local magnetic field, this translates into a "zoomed-in" analysis of a ∼2-4 cm radial region. Expected uses of these channels include mapping the spatial dependence of Alfven eigenmodes, geodesic acoustic modes, and externally applied magnetic perturbations. Initial Te measurements, which demonstrate that the desired resolution is achieved, are presented.
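    As an aside, the mapping from ECE channel frequency to measurement radius that underlies these resolution figures can be sketched with the standard second-harmonic relation, assuming a simple 1/R vacuum toroidal field; the field strength and geometry values below are placeholders, not calibrated DIII-D parameters, and relativistic and Doppler broadening are ignored.

```python
# Hedged sketch: major radius probed by a second-harmonic X-mode ECE channel.
import numpy as np

E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg

def second_harmonic_radius(freq_ghz, b0_tesla=2.0, r0_m=1.67):
    """Assume B(R) = B0 * R0 / R; B0 and R0 are illustrative placeholders."""
    f_hz = np.asarray(freq_ghz) * 1e9
    # f = 2 * f_ce = e * B / (pi * m_e)  =>  B = pi * m_e * f / e
    b_local = np.pi * M_E * f_hz / E_CHARGE
    return b0_tesla * r0_m / b_local

# Radial spacing of two channels separated by the 250 MHz filter spacing:
radii = second_harmonic_radius([100.0, 100.25])
print(abs(radii[1] - radii[0]) * 100, "cm")   # sub-centimeter, as in the abstract
```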

  6. Spatially pooled depth-dependent reservoir storage, elevation, and water-quality data for selected reservoirs in Texas, January 1965-January 2010

    Science.gov (United States)

    Burley, Thomas E.; Asquith, William H.; Brooks, Donald L.

    2011-01-01

    The U.S. Geological Survey (USGS), in cooperation with Texas Tech University, constructed a dataset of selected reservoir storage (daily and instantaneous values), reservoir elevation (daily and instantaneous values), and water-quality data from 59 reservoirs throughout Texas. The period of record for the data is as large as January 1965-January 2010. Data were acquired from existing databases, spreadsheets, delimited text files, and hard-copy reports. The goal was to obtain as much data as possible; therefore, no data acquisition restrictions specifying a particular time window were used. Primary data sources include the USGS National Water Information System, the Texas Commission on Environmental Quality Surface Water-Quality Management Information System, and the Texas Water Development Board monthly Texas Water Condition Reports. Additional water-quality data for six reservoirs were obtained from USGS Texas Annual Water Data Reports. Data were combined from the multiple sources to create as complete a set of properties and constituents as the disparate databases allowed. By devising a unique per-reservoir short name to represent all sites on a reservoir regardless of their source, all sampling sites at a reservoir were spatially pooled by reservoir and temporally combined by date. Reservoir selection was based on various criteria including the availability of water-quality properties and constituents that might affect the trophic status of the reservoir and could also be important for understanding possible effects of climate change in the future. Other considerations in the selection of reservoirs included the general reservoir-specific period of record, the availability of concurrent reservoir storage or elevation data to match with water-quality data, and the availability of sample depth measurements. Additional separate selection criteria included historic information pertaining to blooms of golden algae. Physical properties and constituents were water

  7. Fast generation of multiple resolution instances of raster data sets

    DEFF Research Database (Denmark)

    Arge, Lars; Haverkort, Herman; Tsirogiannis, Constantinos

    2012-01-01

    In many GIS applications it is important to study the characteristics of a raster data set at multiple resolutions. Often this is done by generating several coarser resolution rasters from a fine resolution raster. In this paper we describe efficient algorithms for different variants of this problem. For one variant we describe an algorithm that runs in O(U log N) time in internal memory, where U is the size of the output. We show how this algorithm can be adapted to perform efficiently in external memory using O(sort(U)) data transfers from the disk. We also provide two algorithms that solve the problem in external memory, that is, when the input raster is larger than the main memory; the first external algorithm is very easy to implement and requires O(sort(N)) data block transfers from/to the external memory. We have also implemented two of the presented algorithms...
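    The basic operation being made I/O-efficient here, aggregating a fine grid into coarser grids, can be illustrated with a simple in-memory sketch (illustrative only; the paper's contribution concerns rasters that do not fit in main memory):

```python
# Generate coarser-resolution rasters by aggregating non-overlapping blocks.
import numpy as np

def coarsen(raster, factor, reducer=np.nanmean):
    """Aggregate factor x factor blocks of a fine grid with the given reducer."""
    rows = (raster.shape[0] // factor) * factor   # drop ragged edges for simplicity
    cols = (raster.shape[1] // factor) * factor
    blocks = raster[:rows, :cols].reshape(rows // factor, factor,
                                          cols // factor, factor)
    return reducer(blocks, axis=(1, 3))

fine = np.random.rand(1000, 1000)
pyramid = [coarsen(fine, 2**k) for k in range(1, 5)]   # 2x, 4x, 8x, 16x coarser
```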

  8. Higher Resolution and Faster MRI of 31Phosphorus in Bone

    Science.gov (United States)

    Frey, Merideth; Barrett, Sean; Sethna, Zachary; Insogna, Karl; Vanhouten, Joshua

    2013-03-01

    Probing the internal composition of bone on the sub-100 μm length scale is important to study normal features and to look for signs of disease. However, few useful non-destructive techniques are available to evaluate changes in the bone mineral chemical structure and functional micro-architecture on the interior of bones. MRI would be an excellent candidate, but bone is a particularly challenging tissue to study given the relatively low water density, wider linewidths of its solid components leading to low spatial resolution, and the long imaging time compared to conventional 1H MRI. Our lab has recently made advances in obtaining high spatial resolution (sub-400 μm)³ three-dimensional 31Phosphorus MRI of bone through use of the quadratic echo line-narrowing sequence (1). In this talk, we describe our current results using proton decoupling to push this technique even further towards the factor of 1000 increase in spatial resolution imposed by fundamental limits. We also discuss our work to speed up imaging through novel, faster reconstruction algorithms that can reconstruct the desired image from very sparse data sets. (1) M. Frey, et al. PNAS 109: 5190 (2012).

  9. Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts

    Science.gov (United States)

    Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.

    2009-01-01

    We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded at thirteen 1-Hz GPS receivers, which provides for a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.

  10. Higher-order Bessel like beams with z-dependent cone angles

    CSIR Research Space (South Africa)

    Ismail, Y

    2010-08-01

    Full Text Available. Fig. 5: Optical design to generate z-dependent Bessel-like beams. 4. CONSIDERING A MATHEMATICAL APPROACH TO EXPLAINING Z-DEPENDENT BLBs: The stationary phase method is implemented in order to confirm... on higher-order z-dependent BLBs [6]. 5. EXPERIMENTALLY GENERATED Z-DEPENDENT BESSEL-LIKE BEAMS: From the above it can be deduced that these beams are Bessel-like; hence they are named z-dependent Bessel-like beams. These beams are produced however...

  11. Valles Marineris, Mars: High-Resolution Digital Terrain Model on the basis of Mars-Express HRSC data

    Science.gov (United States)

    Dumke, A.; Spiegel, M.; van Gasselt, S.; Neukum, G.

    2009-04-01

    the DTM quality; image mosaicking also depends on the quality of exterior orientation data, and in order to generate high resolution DTMs and ortho-images, these data have to be corrected. For this purpose, new exterior and interior orientation data, based on tie-point matching and bundle adjustment, have been used. Tie points determined automatically by software provided by the Leibniz Universität Hannover [6] are used as input to the bundle adjustment provided by the Technische Universität München and Freie Universität Berlin. The bundle adjustment approach for photogrammetric point determination with a three-line camera is a least squares adjustment based on the well known collinearity equations. The approach estimates the parameters of the exterior orientation only at a few selected image lines. Because the position of the orbiter is estimated from Doppler shift measurements, there are systematic effects in the observed exterior orientation. To model these effects in the bundle adjustment, additional observation equations for bias (offset) and drift have to be introduced. To use the MOLA DTM as control information, the least squares adjustment has to be extended with an additional observation equation for each HRSC point. These observations describe a relation between the MOLA DTM and the HRSC points. This approach is described in more detail in [7,8]. Derivation of DTMs and ortho-image mosaics is performed using software developed at the German Aerospace Center (DLR), Berlin, within the Vicar environment developed at JPL. For the DTM derivation, the main processing tasks are first a prerectification of image data using the global MOLA-based DTM, then a least-squares area-based matching between the nadir and the other channels (stereo and photometry) in a pyramidal approach and finally, DTM raster generation. Iterative low-pass image filtering (Gauss and mean filtering) is applied in order to improve the image matching process by increasing the

  12. Scale effect challenges in urban hydrology highlighted with a Fully Distributed Model and High-resolution rainfall data

    Science.gov (United States)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2017-04-01

    Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing interest is devoted to the development of fully distributed, grid-based models, following the growth of computational capabilities and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their consequences for modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe the catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. Results from this work demonstrate the scale-effect challenges in urban hydrology modeling. The fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change when the observation pixel size changes. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model
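    The fractal characterization referred to above is typically done with a box-counting estimate; the sketch below is a generic illustration of that idea on a binary raster (e.g., a rasterized sewer network or impervious-area mask) and is not the authors' exact procedure.

```python
# Box-counting estimate of the fractal dimension of a binary raster.
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32, 64)):
    """mask: 2D boolean array; returns the slope of log N(s) vs log(1/s)."""
    counts = []
    for s in sizes:
        rows = (mask.shape[0] // s) * s
        cols = (mask.shape[1] // s) * s
        blocks = mask[:rows, :cols].reshape(rows // s, s, cols // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())   # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope   # a scale-invariant pattern gives a single, stable slope
```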

  13. Hyper-resolution urban flood modeling using high-resolution radar precipitation and LiDAR data

    Science.gov (United States)

    Noh, S. J.; Lee, S.; Lee, J.; Seo, D. J.

    2016-12-01

    Floods occur most frequently among all natural hazards, often causing widespread economic damage and loss of human lives. In particular, urban flooding is becoming increasingly costly and difficult to manage with a greater concentration of population and assets in urban centers. Despite the known benefits of accurately representing small-scale features and flow interaction among different flow domains, which have a significant impact on flood propagation, high-resolution modeling has not been fully utilized due to expensive computation and various uncertainties from model structure, input and parameters. In this study, we assess the potential of hyper-resolution hydrologic-hydraulic modeling using high-resolution radar precipitation and LiDAR data for improved urban flood prediction and hazard mapping. We describe a hyper-resolution 1D-2D coupled urban flood model for pipe and surface flows and evaluate the accuracy of the street-level inundation information produced. For detailed geometric representation of urban areas and for computational efficiency, we use 1 m-resolution topographical data, processed from LiDAR measurements, in conjunction with adaptive mesh refinement. For street-level simulation in large urban areas at grid sizes of 1 to 10 m, a hybrid parallel computing scheme using MPI and OpenMP is also implemented in a high-performance computing system. The modeling approach developed is applied to the Johnson Creek Catchment (40 km2), which makes up the Arlington Urban Hydroinformatics Testbed. In addition, discussion will be given on the availability of a hyper-resolution simulation archive for improved real-time flood mapping.

  14. High Resolution Aerosol Data from MODIS Satellite for Urban Air Quality Studies

    Science.gov (United States)

    Chudnovsky, A.; Lyapustin, A.; Wang, Y.; Tang, C.; Schwartz, J.; Koutrakis, P.

    2013-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) provides daily global coverage, but the 10 km resolution of its aerosol optical depth (AOD) product is not suitable for studying spatial variability of aerosols in urban areas. Recently, a new Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm was developed for MODIS which provides AOD at 1 km resolution. Using MAIAC data, the relationship between MAIAC AOD and PM2.5 as measured by the 27 EPA ground monitoring stations was investigated. These results were also compared to conventional MODIS 10 km AOD retrievals (MOD04) for the same days and locations. The coefficients of determination for MOD04 and for MAIAC are R² = 0.45 and 0.50, respectively, suggesting that AOD is a reasonably good proxy for PM2.5 ground concentrations. Finally, we studied the relationship between PM2.5 and AOD at the intra-urban scale (10 km) in Boston. The fine resolution results indicated spatial variability in particle concentration at a sub-10 kilometer scale. A local analysis for the Boston area showed that the AOD-PM2.5 relationship does not depend on relative humidity or on air temperatures below approximately 7 °C. The correlation improves for temperatures above 7-16 °C. We found no dependence on the boundary layer height except when it was in the range 250-500 m. Finally, we apply a mixed effects model approach to MAIAC aerosol optical depth (AOD) retrievals from MODIS to predict PM2.5 concentrations within the greater Boston area. With this approach we can control for the inherent day-to-day variability in the AOD-PM2.5 relationship, which depends on time-varying parameters such as particle optical properties, vertical and diurnal concentration profiles and ground surface reflectance. Our results show that the model-predicted PM2.5 mass concentrations are highly correlated with the actual observations (out-of-sample R² of 0.86). Therefore, adjustment
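    The day-specific calibration idea can be sketched with a random-slope mixed model; the column names, file name and exact model specification below are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a day-specific (random-slope) AOD -> PM2.5 calibration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_aod_pm25.csv")     # hypothetical station-day matchups
model = smf.mixedlm("pm25 ~ aod",            # fixed intercept and slope
                    data=df,
                    groups=df["date"],       # one group per day
                    re_formula="~aod")       # day-specific random intercept/slope
result = model.fit()
print(result.summary())
df["pm25_hat"] = result.fittedvalues          # day-adjusted predictions
```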

  15. Satellite microwave remote sensing of North Eurasian inundation dynamics: development of coarse-resolution products and comparison with high-resolution synthetic aperture radar data

    International Nuclear Information System (INIS)

    Schroeder, R; Rawlins, M A; McDonald, K C; Podest, E; Zimmermann, R; Kueppers, M

    2010-01-01

    Wetlands are not only primary producers of atmospheric greenhouse gases but also possess unique features that are favourable for application of satellite microwave remote sensing to monitoring their status and trend. In this study we apply combined passive and active microwave remote sensing data sets from the NASA sensors AMSR-E and QuikSCAT to map surface water dynamics over Northern Eurasia. We demonstrate our method on the evolution of large wetland complexes for two consecutive years from January 2006 to December 2007. We apply river discharge measurements from the Ob River along with land surface runoff simulations derived from the Pan-Arctic Water Balance Model during and after snowmelt in 2006 and 2007 to interpret the abundance of widespread flooding along the River Ob in early summer of 2007 observed in the remote sensing products. The coarse-resolution, 25 km, surface water product is compared to a high-resolution, 30 m, inundation map derived from ALOS PALSAR (Advanced Land Observation Satellite phased array L-band synthetic aperture radar) imagery acquired for 11 July 2006, and extending along a transect in the central Western Siberian Plain. We found that the surface water fraction derived from the combined AMSR-E/QuikSCAT data sets closely tracks the inundation mapped using higher-resolution ALOS PALSAR data.

  16. Evaluation of distance-dependent resolution compensation in brain SPECT

    International Nuclear Information System (INIS)

    Badger, D.P.; Barnden, L.R.

    2010-01-01

    Full text: Conventional SPECT reconstruction assumes that the volume of response for each collimator hole is a cylinder, but due to the finite depth of the holes, the volume of response is actually cone shaped. This leads to a loss of resolution as the distance from the collimator face is increased. If distance-dependent resolution compensation (DRC) is incorporated into an iterative reconstruction algorithm, then some of the lost resolution can be recovered (T Yokoi, H Shinohara and H Onishi. 2002, Ann Nuc Med, 16, 11-18). DRC has recently been included in some commercial reconstruction software, and the aim of this study was to assess whether the significantly increased reconstruction processing time can be justified for clinical or for research purposes. HMPAO brain scans from 104 healthy subjects were reconstructed using iterative OSEM, with and without DRC. A voxel based iterative sensitivity (VBIS) technique was used for gain correction in the scans. A Statistical Parametric Mapping (SPM) analysis found the statistical strength of the SPECT aging effect increased when the non-DRC image set was compared to the images with DRC, probably due to improvement in the imaging of partial volume effects when the interhemispheric fissure and sulci enlarge with age (L Barnden, S Behin Ain, R Kwiatek, R Casse and L Yelland. 2005, Nuc Med Comm, 26, 497-503). It was concluded that the use of DRC is justified for research purposes, but may not be for routine clinical use. (author)

  17. SAGA GIS based processing of spatial high resolution temperature data

    International Nuclear Information System (INIS)

    Gerlitz, Lars; Bechtel, Benjamin; Kawohl, Tobias; Boehner, Juergen; Zaksek, Klemen

    2013-01-01

    Many climate change impact studies require surface and near-surface temperature data with high spatial and temporal resolution. The resolution of state-of-the-art climate models and remote sensing data is often far too coarse to represent the meso- and microscale distinctions of temperatures. This is particularly the case for regions with a huge variability of topoclimates, such as mountainous or urban areas. Statistical downscaling techniques are promising methods to refine gridded temperature data with limited spatial resolution, particularly due to their low demand for computer capacity. This paper presents two downscaling approaches - one for climate model output and one for remote sensing data. Both are methodically based on the FOSS-GIS platform SAGA. (orig.)

  18. A case history of using high-resolution LiDAR data to support archaeological prediction models in a low-relief area

    Science.gov (United States)

    Pacskó, Vivien; Székely, Balázs; Stibrányi, Máté; Koma, Zsófia

    2016-04-01

    Transdanubian Range characterized by NNW-SSE directed valleys. One of the largest valleys is a conspicuously straight valley section of the River Sárvíz between Székesfehérvár and Szekszárd. Archaeological surveys have revealed various settlement remains since the Neolithic. LiDAR data acquisition has been carried out in the framework of an EUFAR project supported by the European Union. Although the weather conditions were not optimal during the flight, sophisticated processing (carried out with the OPALS software) removed most of the artifacts. The resulting 1 m resolution digital terrain model (DTM) and the known archaeological site locations were integrated in a GIS system for qualitative and quantitative analysis. The processing aimed at analyzing elevation patterns of archaeological sites: local microtopographic features have been outlined, and local low-relief elevation data have been extracted and analysed along the Sárvíz valley. Sites have been grouped according to the age of the artifacts identified by the quick-look archaeological walkthrough surveys. The topographic context of these elevation patterns was compared to the relative relief positions of the sites. Some age groups have confined elevation ranges that may indicate hydrological/climatic dependency of settlement site selection, whereas some long-lived sites can also be identified, typically further away from the local erosional base. Extremely low-relief areas are supposed to have had swampy or partly inundated environmental conditions in ancient times; these areas were unsuitable for human settlement for long time periods. Such areas are typically assigned low predictive probabilities, whereas small mounds and patches of topographic unevenness receive higher model probabilities. The final models can be used for focused field surveys that can improve our archaeological knowledge of the area. The data used were acquired in the framework of the EUFAR ARMSRACE

  19. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Full Text Available. Background: Success of metabolomics as the phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results: With the aim to remove unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find the optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention time region specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select best combinations of standard compounds for normalization. Conclusion: Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted in repeatability conditions. The method can also be used in analytical development of metabolomics methods by helping to select best combinations of standard compounds for a particular biological matrix and analytical platform.
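    To make the multiple-internal-standard idea concrete, the sketch below derives one per-sample scaling factor from several internal standards. It is emphatically not the NOMIS optimization (which weights standards by their variability and correlation to metabolites); the variable names and the geometric-mean weighting are assumptions for illustration only.

```python
# Simplified multi-internal-standard normalization (not the full NOMIS method).
import numpy as np

def multi_is_normalize(intensities, is_columns):
    """intensities: samples x peaks matrix (internal standards included).
    is_columns: column indices of the internal standard peaks."""
    X = np.asarray(intensities, dtype=float)
    is_block = X[:, is_columns]
    # Per-sample factor: geometric mean of each IS relative to its study median.
    rel = is_block / np.median(is_block, axis=0)
    factors = np.exp(np.mean(np.log(rel), axis=1))
    return X / factors[:, None]
```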

  20. Fast generation of multiple resolution instances of raster data sets

    NARCIS (Netherlands)

    Arge, L.; Haverkort, H.J.; Tsirogiannis, C.P.

    2012-01-01

    In many GIS applications it is important to study the characteristics of a raster data set at multiple resolutions. Often this is done by generating several coarser resolution rasters from a fine resolution raster. In this paper we describe efficient algorithms for different variants of this

  1. THE INFLUENCE OF SPATIAL RESOLUTION ON NONLINEAR FORCE-FREE MODELING

    Energy Technology Data Exchange (ETDEWEB)

    DeRosa, M. L.; Schrijver, C. J. [Lockheed Martin Solar and Astrophysics Laboratory, 3251 Hanover St. B/252, Palo Alto, CA 94304 (United States); Wheatland, M. S.; Gilchrist, S. A. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia); Leka, K. D.; Barnes, G. [NorthWest Research Associates, 3380 Mitchell Ln., Boulder, CO 80301 (United States); Amari, T.; Canou, A. [CNRS, Centre de Physique Théorique de l’École Polytechnique, F-91128, Palaiseau Cedex (France); Thalmann, J. K. [Institute of Physics/IGAM, University of Graz, Universitätsplatz 5, A-8010 Graz (Austria); Valori, G. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Wiegelmann, T. [Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077, Göttingen (Germany); Malanushenko, A. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Sun, X. [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Régnier, S. [Department of Mathematics and Information Sciences, Faculty of Engineering and Environment, Northumbria University, Newcastle-Upon-Tyne, NE1 8ST (United Kingdom)

    2015-10-01

    The nonlinear force-free field (NLFFF) model is often used to describe the solar coronal magnetic field; however, a series of earlier studies revealed difficulties in the numerical solution of the model in application to photospheric boundary data. We investigate the sensitivity of the modeling to the spatial resolution of the boundary data, by applying multiple codes that numerically solve the NLFFF model to a sequence of vector magnetogram data at different resolutions, prepared from a single Hinode/Solar Optical Telescope Spectro-Polarimeter scan of NOAA Active Region 10978 on 2007 December 13. We analyze the resulting energies and relative magnetic helicities, employ a Helmholtz decomposition to characterize divergence errors, and quantify changes made by the codes to the vector magnetogram boundary data in order to be compatible with the force-free model. This study shows that NLFFF modeling results depend quantitatively on the spatial resolution of the input boundary data, and that using more highly resolved boundary data yields more self-consistent results. The free energies of the resulting solutions generally trend higher with increasing resolution, while relative magnetic helicity values vary significantly between resolutions for all methods. All methods require changing the horizontal components, and for some methods also the vertical components, of the vector magnetogram boundary field in excess of nominal uncertainties in the data. The solutions produced by the various methods are significantly different at each resolution level. We continue to recommend verifying agreement between the modeled field lines and corresponding coronal loop images before any NLFFF model is used in a scientific setting.

  2. Multivariate curve resolution of incomplete fused multiset data from chromatographic and spectrophotometric analyses for drug photostability studies

    International Nuclear Information System (INIS)

    Luca, Michele De; Ragno, Gaetano; Ioele, Giuseppina; Tauler, Romà

    2014-01-01

    Highlights: • A new MCR-ALS algorithm is proposed for the analysis of incomplete fused multiset. • Resolution of the data allowed the description of amiloride kinetic photodegradation. • The new MCR-ALS algorithm can be easily applied to other drugs and chemicals. - Abstract: An advanced and powerful chemometric approach is proposed for the analysis of incomplete multiset data obtained by fusion of hyphenated liquid chromatographic DAD/MS data with UV spectrophotometric data from acid–base titration and kinetic degradation experiments. Column- and row-wise augmented data blocks were combined and simultaneously processed by means of a new version of the multivariate curve resolution-alternating least squares (MCR-ALS) technique, including the simultaneous analysis of incomplete multiset data from different instrumental techniques. The proposed procedure was applied to the detailed study of the kinetic photodegradation process of the amiloride (AML) drug. All chemical species involved in the degradation and equilibrium reactions were resolved and the pH dependent kinetic pathway described

  3. Multivariate curve resolution of incomplete fused multiset data from chromatographic and spectrophotometric analyses for drug photostability studies

    Energy Technology Data Exchange (ETDEWEB)

    Luca, Michele De, E-mail: michele.deluca@unical.it [Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Via P. Bucci, Rende, CS 87036 (Italy); Ragno, Gaetano; Ioele, Giuseppina [Department of Pharmacy, Health and Nutritional Sciences, University of Calabria, Via P. Bucci, Rende, CS 87036 (Italy); Tauler, Romà [Department of Environmental Chemistry, IDAEA-CSIC, C/Jordi Girona, 18-26, Barcelona 08034 (Spain)

    2014-07-21

    Highlights: • A new MCR-ALS algorithm is proposed for the analysis of incomplete fused multiset. • Resolution of the data allowed the description of amiloride kinetic photodegradation. • The new MCR-ALS algorithm can be easily applied to other drugs and chemicals. - Abstract: An advanced and powerful chemometric approach is proposed for the analysis of incomplete multiset data obtained by fusion of hyphenated liquid chromatographic DAD/MS data with UV spectrophotometric data from acid–base titration and kinetic degradation experiments. Column- and row-wise augmented data blocks were combined and simultaneously processed by means of a new version of the multivariate curve resolution-alternating least squares (MCR-ALS) technique, including the simultaneous analysis of incomplete multiset data from different instrumental techniques. The proposed procedure was applied to the detailed study of the kinetic photodegradation process of the amiloride (AML) drug. All chemical species involved in the degradation and equilibrium reactions were resolved and the pH dependent kinetic pathway described.

  4. High spatial resolution upgrade of the electron cyclotron emission radiometer for the DIII-D tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Truong, D. D., E-mail: dtruong@wisc.edu [Department of Engineering Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Austin, M. E. [Institute for Fusion Studies, University of Texas, Austin, Texas, 78712 (United States)

    2014-11-15

    The 40-channel DIII-D electron cyclotron emission (ECE) radiometer provides measurements of T{sub e}(r,t) at the tokamak midplane from optically thick, second harmonic X-mode emission over a frequency range of 83–130 GHz. The frequency spacing of the radiometer's channels results in a spatial resolution of ∼1–3 cm, depending on local magnetic field and electron temperature. A new high resolution subsystem has been added to the DIII-D ECE radiometer to make sub-centimeter (0.6–0.8 cm) resolution T{sub e} measurements. The high resolution subsystem branches off from the regular channels’ IF bands and consists of a microwave switch to toggle between IF bands, a switched filter bank for frequency selectivity, an adjustable local oscillator and mixer for further frequency down-conversion, and a set of eight microwave filters in the 2–4 GHz range. Higher spatial resolution is achieved through the use of a narrower (200 MHz) filter bandwidth and closer spacing between the filters’ center frequencies (250 MHz). This configuration allows for full coverage of the 83–130 GHz frequency range in 2 GHz bands. Depending on the local magnetic field, this translates into a “zoomed-in” analysis of a ∼2–4 cm radial region. Expected uses of these channels include mapping the spatial dependence of Alfven eigenmodes, geodesic acoustic modes, and externally applied magnetic perturbations. Initial T{sub e} measurements, which demonstrate that the desired resolution is achieved, are presented.

  5. Scene Classification Using High Spatial Resolution Multispectral Data

    National Research Council Canada - National Science Library

    Garner, Jamada

    2002-01-01

    ...), High-spatial-resolution (8-meter), 4-color MSI data from IKONOS provide a new tool for scene classification. The utility of these data is studied for the purpose of classifying the Elkhorn Slough and surrounding wetlands in central...

  6. Pressure drop effects on selectivity and resolution in packed-column supercritical fluid chromatography

    NARCIS (Netherlands)

    Lou, X.W.; Janssen, J.G.M.; Snijders, H.M.J.; Cramers, C.A.M.G.

    1996-01-01

    The influence of pressure drop on retention, selectivity, plate height and resolution was investigated systematically in packed supercritical fluid chromatography (SFC) using pure carbon dioxide as the mobile phase. Numerical methods developed previously which enabled the prediction of pressure

  7. Recovering the colour-dependent albedo of exoplanets with high-resolution spectroscopy: from ESPRESSO to the ELT.

    Science.gov (United States)

    Martins, J. H. C.; Figueira, P.; Santos, N. C.; Melo, C.; Garcia Muñoz, A.; Faria, J.; Pepe, F.; Lovis, C.

    2018-05-01

    The characterization of planetary atmospheres is a daunting task, pushing current observing facilities to their limits. The next generation of high-resolution spectrographs mounted on large telescopes - such as ESPRESSO@VLT and HIRES@ELT - will allow us to probe and characterize exoplanetary atmospheres in greater detail than possible to this point. We present a method that permits the recovery of the colour-dependent reflectivity of exoplanets from high-resolution spectroscopic observations. Determining the wavelength-dependent albedo will provide insight into the chemical properties and weather of the exoplanet atmospheres. For this work, we simulated ESPRESSO@VLT and HIRES@ELT high-resolution observations of known planetary systems with several albedo configurations. We demonstrate how the cross correlation technique applied to these simulated observations can be used to successfully recover the geometric albedo of exoplanets over a range of wavelengths. In all cases, we were able to recover the wavelength dependent albedo of the simulated exoplanets and distinguish between several atmospheric models representing different atmospheric configurations. In brief, we demonstrate that the cross correlation technique allows for the recovery of exoplanetary albedo functions from optical observations with the next generation of high-resolution spectrographs that will be mounted on large telescopes with reasonable exposure times. Its recovery will permit the characterization of exoplanetary atmospheres in terms of composition and dynamics and consolidates the cross correlation technique as a powerful tool for exoplanet characterization.

  8. New Horizontal Inequalities in German Higher Education? Social Selectivity of Studying Abroad between 1991 and 2012

    Science.gov (United States)

    Netz, Nicolai; Finger, Claudia

    2016-01-01

    On the basis of theories of cultural reproduction and rational choice, we examine whether access to study-abroad opportunities is socially selective and whether this pattern changed during educational expansion. We test our hypotheses for Germany by combining student survey data and administrative data on higher education entry rates. We find that…

  9. Resolution of the neutron diffractometer of the Mexican Nuclear Center

    International Nuclear Information System (INIS)

    Macias B, L.R.; Garcia C, R.M.; Ita T, A. De

    2003-01-01

    The neutron diffractometer has three collimators and a monochromator, on which its resolution depends, and there is a trade-off between the resolution of the diffractometer and its intensity: if one seeks to work at higher resolution, the intensity will diminish; likewise, if only a small volume of material is available, the diffracted intensity is reduced. The selection of the collimator values is therefore important in order to obtain a unique value for the resolution of the diffractometer. (Author)

  10. Quality Management in Higher Education

    OpenAIRE

    Svoboda, Petr

    2017-01-01

    The thesis deals with quality management theory as an important part of management science. The primary objective of this work is the identification, formulation and analysis of managerial issues in the quality of higher education which are either not known or whose resolution is not considered fully sufficient. The thesis contains a bibliography of more than 200 related scientific works and presents selected issues of quality management in higher education, such as quality perception or it...

  11. HyperHamlet – Intricacies of Data Selection

    Directory of Open Access Journals (Sweden)

    Quassdorf, Sixta

    2009-01-01

    Full Text Available HyperHamlet is a database of allusions to and quotations from Shakespeare's Hamlet, which is supported by the Swiss National Science Foundation as a joint venture between the Departments of English and German Philology, and the Image & Media Lab at the University of Basel. The compilation of a corpus, whose aim it is to document the "Shakespeare phenomenon", is intricate on more than one level: the desired transdisciplinary approach between linguistics, literary and cultural studies entails data selection from a vast variety of sources; the pragmatic nature of intertextual traces, i.e. their dependence on and subordination to new contexts, further adds to formal heterogeneity. This is not only a challenge for annotation, but also for data selection. As the recognition of intertextual traces is more often than not based on intuition, this paper analyses the criteria which underlie intuition so that it can be operationalised for scholarly corpus compilation. An analogue to the pragmatic model of ostensive-inferential communication with its three constitutive parts of speaker's meaning, sentence meaning and hearer's meaning has been used for analytical heuristics. Authorial intent – in a concrete as well as in an abstract historical sense – origin and specific encyclopaedic knowledge have been found to be the basic assumptions underlying data selection, while quantitative factors provide supporting evidence.

  12. TERRA REF: Advancing phenomics with high resolution, open access sensor and genomics data

    Science.gov (United States)

    LeBauer, D.; Kooper, R.; Burnette, M.; Willis, C.

    2017-12-01

    Automated plant measurement has the potential to improve understanding of genetic and environmental controls on plant traits (phenotypes). The application of sensors and software in the automation of high throughput phenotyping reflects a fundamental shift from labor intensive hand measurements to drone, tractor, and robot mounted sensing platforms. These tools are expected to speed the rate of crop improvement by enabling plant breeders to more accurately select plants with improved yields, resource use efficiency, and stress tolerance. However, there are many challenges facing high throughput phenomics: sensors and platforms are expensive, currently there are few standard methods of data collection and storage, and the analysis of large data sets requires high performance computers and automated, reproducible computing pipelines. To overcome these obstacles and advance the science of high throughput phenomics, the TERRA Phenotyping Reference Platform (TERRA-REF) team is developing an open-access database of high resolution sensor data. TERRA REF is an integrated field and greenhouse phenotyping system that includes: a reference field scanner with fifteen sensors that can generate terabytes of data each day at mm resolution; UAV, tractor, and fixed field sensing platforms; and an automated controlled-environment scanner. These platforms will enable investigation of diverse sensing modalities, and the investigation of traits under controlled and field environments. It is the goal of TERRA REF to lower the barrier to entry for academic and industry researchers by providing high-resolution data, open source software, and online computing resources. Our project is unique in that all data will be made fully public in November 2018, and is already available to early adopters through the beta-user program. We will describe the datasets and how to use them as well as the databases and computing pipeline and how these can be reused and remixed in other phenomics pipelines.

  13. A variable resolution right TIN approach for gridded oceanographic data

    Science.gov (United States)

    Marks, David; Elmore, Paul; Blain, Cheryl Ann; Bourgeois, Brian; Petry, Frederick; Ferrini, Vicki

    2017-12-01

    Many oceanographic applications require multi-resolution representation of gridded data, such as bathymetric data. Although triangular irregular networks (TINs) allow for variable resolution, they do not provide a gridded structure. Right TINs (RTINs) are compatible with a gridded structure. We explored the use of two approaches for RTINs, termed top-down and bottom-up implementations. We illustrate why the latter is most appropriate for gridded data and describe for this technique how the data can be thinned. While both the top-down and bottom-up approaches accurately preserve the surface morphology of any given region, the top-down method of vertex placement can fail to match the actual vertex locations of the underlying grid in many instances, resulting in obscured topology/bathymetry. Finally, we describe the use of the bottom-up approach and data thinning in two applications. The first is to provide thinned, variable-resolution bathymetry data for tests of storm surge and inundation modeling, in particular Hurricane Katrina. Secondly, we consider the use of the approach for an application to an oceanographic data grid of 3-D ocean temperature.

  14. Selecting materialized views in a data warehouse

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Liu, Daxin

    2003-01-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize in order to answer a given set of queries. In this paper, we address this problem and design an algorithm to select a set of views to materialize so as to answer the most queries under the constraint of a given space. The algorithm presented in this paper aims at finding a minimal set of views with which we can directly respond to as many user query requests as possible. We use experiments to demonstrate our approach. The results show that our algorithm works well. We implemented our algorithms, and a performance study shows that the proposed algorithm gives lower complexity, higher speed and feasible expandability.
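    A common way to frame this is as a space-constrained greedy selection; the sketch below illustrates a generic benefit-per-space heuristic and is not the specific algorithm proposed in the paper.

```python
# Greedy view selection under a storage budget: repeatedly pick the candidate
# view with the best (newly answered queries) / (storage cost) ratio.
def select_views(candidates, space_budget):
    """candidates: list of (name, size, set_of_query_ids_answered)."""
    chosen, answered, used = [], set(), 0
    remaining = list(candidates)
    while True:
        feasible = [c for c in remaining
                    if used + c[1] <= space_budget and c[2] - answered]
        if not feasible:
            break
        best = max(feasible, key=lambda c: len(c[2] - answered) / c[1])
        chosen.append(best[0])
        answered |= best[2]
        used += best[1]
        remaining.remove(best)
    return chosen, answered

views = [("v1", 10, {1, 2, 3}), ("v2", 4, {3, 4}), ("v3", 8, {5})]
print(select_views(views, space_budget=15))
```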

  15. Data Driven Approach for High Resolution Population Distribution and Dynamics Models

    Energy Technology Data Exchange (ETDEWEB)

    Bhaduri, Budhendra L [ORNL; Bright, Eddie A [ORNL; Rose, Amy N [ORNL; Liu, Cheng [ORNL; Urban, Marie L [ORNL; Stewart, Robert N [ORNL

    2014-01-01

    High resolution population distribution data are vital for successfully addressing critical issues ranging from energy and socio-environmental research to public health to human security. Commonly available population data from Census is constrained both in space and time and does not capture population dynamics as functions of space and time. This imposes a significant limitation on the fidelity of event-based simulation models with sensitive space-time resolution. This paper describes ongoing development of high-resolution population distribution and dynamics models, at Oak Ridge National Laboratory, through spatial data integration and modeling with behavioral or activity-based mobility datasets for representing temporal dynamics of population. The model is resolved at 1 km resolution globally and describes the U.S. population for nighttime and daytime at 90m. Integration of such population data provides the opportunity to develop simulations and applications in critical infrastructure management from local to global scales.

  16. Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data

    Science.gov (United States)

    Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.

    2017-12-01

    We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution roughly by a factor of 3 and the Signal to Noise Ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in resolution by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, the EMIs (ElectroMagnetic Interferences) are detected and the spectrum is interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture processing (SAR). We apply the proposed algorithm to simulated as well as to real radar data. We will demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 2007, 317, 1715-1718 [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
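    The bandwidth-extrapolation idea can be sketched as follows: fit a forward linear-prediction (autoregressive) model to the measured in-band spectral samples and extrapolate beyond the band before inverse transforming. The model order, extrapolation amount and least-squares fitting below are illustrative choices, not the specific settings of the cited Cuomo technique.

```python
# Bandwidth extrapolation (BWE) by autoregressive linear prediction.
import numpy as np

def ar_coefficients(x, order):
    """Least-squares forward linear prediction coefficients."""
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    A = np.array(rows)
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def bandwidth_extrapolate(spectrum, order=12, extra=None):
    """Extend a complex in-band spectrum on both sides by AR prediction."""
    x = np.asarray(spectrum, dtype=complex)
    extra = extra or len(x)                    # default: triple the bandwidth overall
    a = ar_coefficients(x, order)
    fwd = list(x)
    for _ in range(extra // 2):                # predict upward in frequency
        fwd.append(np.dot(a, fwd[-1:-order - 1:-1]))
    ab = ar_coefficients(x[::-1], order)
    bwd = list(x[::-1])
    for _ in range(extra - extra // 2):        # predict downward in frequency
        bwd.append(np.dot(ab, bwd[-1:-order - 1:-1]))
    low = np.array(bwd[len(x):])[::-1]         # ascending frequency order
    return np.concatenate([low, x, np.array(fwd[len(x):])])

# Illustrative use: a sharper range profile from the extrapolated spectrum.
# profile = np.fft.ifft(bandwidth_extrapolate(measured_spectrum))
```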

  17. Spatial scales of pollution from variable resolution satellite imaging

    International Nuclear Information System (INIS)

    Chudnovsky, Alexandra A.; Kostinski, Alex; Lyapustin, Alexei; Koutrakis, Petros

    2013-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) provides daily global coverage, but the 10 km resolution of its aerosol optical depth (AOD) product is not adequate for studying spatial variability of aerosols in urban areas. Recently, a new Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm was developed for MODIS which provides AOD at 1 km resolution. Using MAIAC data, the relationship between MAIAC AOD and PM2.5 as measured by the EPA ground monitoring stations was investigated at varying spatial scales. Our analysis suggested that the correlation between PM2.5 and AOD decreased significantly as AOD resolution was degraded. This is so despite the intrinsic mismatch between PM2.5 ground level measurements and AOD vertically integrated measurements. Furthermore, the fine resolution results indicated spatial variability in particle concentration at a sub-10 km scale. Finally, this spatial variability of AOD within the urban domain was shown to depend on PM2.5 levels and wind speed. - Highlights: ► The correlation between PM2.5 and AOD decreases as AOD resolution is degraded. ► High resolution MAIAC AOD 1 km retrieval can be used to investigate within-city PM2.5 variability. ► Low pollution days exhibit higher spatial variability of AOD and PM2.5 than moderate pollution days. ► AOD spatial variability within an urban area is higher during lower wind speed conditions. - The correlation between PM2.5 and AOD decreases as AOD resolution is degraded. The new high-resolution MAIAC AOD retrieval has the potential to capture PM2.5 variability at the intra-urban scale.

  18. Differentiation and Social Selectivity in German Higher Education

    Science.gov (United States)

    Schindler, Steffen; Reimer, David

    2011-01-01

    In this paper we investigate social selectivity in access to higher education in Germany and, unlike most previous studies, explicitly devote attention to semi-tertiary institutions such as the so-called universities of cooperative education. Drawing on rational choice models of educational decisions we seek to understand which factors influence…

  19. Fossil-fuel dependence and vulnerability of electricity generation: Case of selected European countries

    International Nuclear Information System (INIS)

    Bhattacharyya, Subhes C.

    2009-01-01

    This paper analyses the diversity of fuel mix for electricity generation in selected European countries and investigates how the fuel bill has changed as a share of GDP between 1995 and 2005. The drivers of fuel-dependence-related vulnerability are determined using Laspeyres index decomposition. A 'what-if' analysis is carried out to analyse the changes in the vulnerability index due to changes in the drivers and a scenario analysis is finally used to investigate the future vulnerability in the medium term. The paper finds that the British and the Dutch electricity systems are less diversified compared to three other countries analysed. The gas dependence of the Dutch and Italian systems made them vulnerable but the vulnerability increased in all countries in recent years. Gas price and the level of dependence on gas for power generation mainly influenced the gas vulnerability. The United Kingdom saw a substantial decline in its coal vulnerability due to a fall in coal price and coal dependence in electricity generation. The scenario analysis indicates that UK is likely to face greater gas vulnerability in the future due to increased gas dependence in electricity generation and higher import dependence.
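    As a rough illustration of the Laspeyres-style decomposition mentioned above, the change in a fuel bill between two years can be split into a price effect and a dependence (quantity) effect using base-year weights; the fuels, prices and quantities below are made-up numbers, and the paper's vulnerability index involves more drivers than this two-factor sketch.

```python
# Two-factor Laspeyres decomposition of a fuel-bill change (illustrative only).
def laspeyres_decomposition(p0, q0, p1, q1):
    """p: fuel prices, q: fuel quantities used for generation (dicts by fuel)."""
    price_effect = sum(q0[f] * (p1[f] - p0[f]) for f in p0)
    quantity_effect = sum(p0[f] * (q1[f] - q0[f]) for f in p0)
    total_change = sum(p1[f] * q1[f] for f in p0) - sum(p0[f] * q0[f] for f in p0)
    interaction = total_change - price_effect - quantity_effect
    return price_effect, quantity_effect, interaction

p0, q0 = {"gas": 3.0, "coal": 1.5}, {"gas": 100.0, "coal": 200.0}
p1, q1 = {"gas": 4.5, "coal": 1.4}, {"gas": 140.0, "coal": 160.0}
print(laspeyres_decomposition(p0, q0, p1, q1))
```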

  20. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    Science.gov (United States)

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One of the research areas is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased as the real observation conditions could differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in AR performance as the baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of the baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.
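    For context, the figures of merit usually examined when screening a triple-frequency combination phi_c = i*phi1 + j*phi2 + k*phi5 (integer coefficients) are its combined wavelength and its first-order ionospheric amplification; these are textbook GNSS relations, while the paper's actual selection criteria (noise propagation, success rates on real baselines) go beyond this sketch.

```python
# Figures of merit for an integer triple-frequency carrier-phase combination.
C = 299_792_458.0                                 # speed of light, m/s
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6      # GPS L1/L2/L5 frequencies, Hz

def combination(i, j, k):
    fc = i * F1 + j * F2 + k * F5                 # combined frequency
    lam = C / fc                                  # combined wavelength, m
    # First-order ionospheric delay scale factor relative to L1
    iono = (i + j * F1 / F2 + k * F1 / F5) * F1 / fc
    return lam, iono

# Example: the extra-wide-lane (0, 1, -1) combination
lam, iono = combination(0, 1, -1)
print(f"wavelength = {lam:.2f} m, ionosphere factor = {iono:.2f}")
```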

  1. Unraveling the Thousand Word Picture: An Introduction to Super-Resolution Data Analysis.

    Science.gov (United States)

    Lee, Antony; Tsekouras, Konstantinos; Calderon, Christopher; Bustamante, Carlos; Pressé, Steve

    2017-06-14

    Super-resolution microscopy provides direct insight into fundamental biological processes occurring at length scales smaller than light's diffraction limit. The analysis of data at such scales has brought statistical and machine learning methods into the mainstream. Here we provide a survey of data analysis methods starting from an overview of basic statistical techniques underlying the analysis of super-resolution and, more broadly, imaging data. We subsequently break down the analysis of super-resolution data into four problems: the localization problem, the counting problem, the linking problem, and what we've termed the interpretation problem.

  2. The role of demographics in students' selection of higher education institutions

    Directory of Open Access Journals (Sweden)

    M. Wiese

    2010-12-01

    Full Text Available Purpose: To investigate the choice factors students consider when selecting a higher education institution, with a focus on the differences between gender and language groups. Problem investigated: The educational landscape has seen several changes, such as stronger competition between institutions for both student enrolments and government funding. These market challenges have led to an interest in students' institution selection processes as it has implications for the way higher education institutions (HEIs manage their marketing and recruitment strategies. The research objective of this study was to identify the most important choice factors of prospective South African students. It also aimed to determine if any gender and language differences exist with regard to students' institution selection processes. Methodology: A convenience sample of 1 241 respondents was drawn, representing six South African universities. A self-administrated questionnaire was used to collect the data. Questions from the ASQ (Admitted Student Questionnaire and CIRP (The Cooperative Institutional Research Programme were used and adapted to the South African context after pilot testing. Hypotheses were analysed using the multivariate analysis of variance (MANOVA test with Wilks' lambda as the test statistic. Findings/Implications: Irrespective of gender or language, the most important choice factor for respondents was the quality of teaching at HEIs. The findings showed that males and females differ according to the selection of certain choice factors which suggest that HEIs can consider recruitment strategies for each gender group. Significant differences between the language groups were found for 17 of the 23 choice factors, signalling that different language groups make decisions based on different choice factors. African language-speaking students have, amongst other, indicated that the multiculturalism of the institution is a very important choice factor for

  3. Poster – 02: Positron Emission Tomography (PET) Imaging Reconstruction using higher order Scattered Photon Coincidences

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hongwei; Pistorius, Stephen [Department of Physics and Astronomy, University of Manitoba, CancerCare, Manitoba (Canada)

    2016-08-15

    PET images are affected by the presence of scattered photons. Incorrect scatter correction may cause artifacts, particularly in 3D PET systems. Current scatter reconstruction methods do not distinguish between single and higher-order scattered photons. A dual-scattered reconstruction method (GDS-MLEM), which is independent of the number of Compton scattering interactions and less sensitive to the need for high-energy-resolution detectors, is proposed. To avoid overcorrecting for scattered coincidences, the attenuation coefficient was calculated by integrating the differential Klein-Nishina cross-section over a restricted energy range, accounting only for scattered photons that were not detected. The optimum image can be selected by choosing an energy threshold which is the upper energy limit for the calculation of the cross-section and the lower limit for scattered photons in the reconstruction. Data were simulated using the GATE platform. 500,000 multiple-scattered photon coincidences with perfect energy resolution were reconstructed using various methods. The GDS-MLEM algorithm had the highest confidence (98%) in locating the annihilation position and was capable of reconstructing the two largest hot regions. 100,000 photon coincidences, with a scatter fraction of 40%, were used to test the energy resolution dependence of the different algorithms. With a 350–650 keV energy window and the restricted attenuation correction model, the GDS-MLEM algorithm was able to improve contrast recovery and reduce noise by 7.56%–13.24% and 12.4%–24.03%, respectively. This approach is less sensitive to the energy resolution and shows promise if detector energy resolutions of 12% can be achieved.
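
    For reference, the restricted attenuation idea builds on the standard Klein-Nishina relations for Compton scattering. The sketch below states those relations and one hedged reading of the restriction (integrating only over scattering angles whose outgoing photon energy E' falls below the chosen threshold, i.e. photons that would escape detection); the exact integration limits used by the authors are set by their energy threshold and are not reproduced here.

    ```latex
    % Klein-Nishina differential cross-section and Compton energy relation
    \frac{d\sigma}{d\Omega}
      = \frac{r_e^{2}}{2}\left(\frac{E'}{E}\right)^{2}
        \left(\frac{E'}{E} + \frac{E}{E'} - \sin^{2}\theta\right),
    \qquad
    E' = \frac{E}{1 + \dfrac{E}{m_e c^{2}}\,\bigl(1 - \cos\theta\bigr)} .

    % One possible form of the restricted cross-section entering the attenuation coefficient:
    \sigma_{\mathrm{restr}}(E_{\mathrm{th}})
      = \int_{\{\theta :\, E'(\theta) < E_{\mathrm{th}}\}} \frac{d\sigma}{d\Omega}\, d\Omega .
    ```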

  4. Low resolution spectroscopy of selected Algol systems

    Science.gov (United States)

    Devarapalli, Shanti Priya; Jagirdar, Rukmini; Parthasarathy, M.; Sahu, D. K.; Mohan, Vijay; Bhatt, B. C.; Thomas, Vineet S.

    2018-04-01

    The analysis of spectroscopic data for 30 Algol-type binaries is presented. All these systems are short-period Algols with primaries of spectral types B and A. Dominant spectral lines were identified in the collected spectra and their equivalent widths were calculated. All the spectra were examined for the presence of mass transfer, a disk or circumstellar matter, and chromospheric emission. We also present the first spectroscopic and period studies for a few of the Algols and conclude that high-resolution spectra taken within and outside the primary minimum are needed for a better understanding of these Algol-type close binaries.

  5. Surface Water Mapping from Suomi NPP-VIIRS Imagery at 30 m Resolution via Blending with Landsat Data

    Directory of Open Access Journals (Sweden)

    Chang Huang

    2016-07-01

    Monitoring the dynamics of surface water using remotely sensed data generally requires both high spatial and high temporal resolutions. One effective and popular approach for achieving this is image fusion. This study adopts a widely accepted fusion model, the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), for blending the newly available coarse-resolution Suomi NPP-VIIRS data with Landsat data in order to derive water maps at 30 m resolution. A pan-sharpening technique was applied to preprocess the NPP-VIIRS data to a higher resolution before blending. The modified Normalized Difference Water Index (mNDWI) was employed for mapping surface water area. Two fusion alternatives, blend-then-index (BI) or index-then-blend (IB), were comparatively analysed against a Landsat-derived water map. A case study of mapping Poyang Lake in China, where the water distribution pattern is complex and the water body changes frequently and drastically, was conducted. It was revealed that the IB method derives more accurate results with less computation time than the BI method. The BI method generally underestimates water distribution, especially when the water area expands radically. The study has demonstrated the feasibility of blending NPP-VIIRS with Landsat for achieving surface water mapping at both high spatial and high temporal resolutions. It suggests that IB is superior to BI for water mapping in terms of efficiency and accuracy. The findings of this study also provide an important reference for other blending work, such as image blending for vegetation cover monitoring.
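
    The index-then-blend (IB) alternative computes the water index on each image before blending. A minimal sketch of the per-pixel mNDWI computation and thresholding is shown below, assuming green and shortwave-infrared reflectance bands and a zero threshold; these choices are illustrative, and the ESTARFM blending step itself is not reproduced.

    ```python
    # Minimal sketch of the mNDWI step: compute the index per image, then threshold to a water mask.
    import numpy as np

    def mndwi(green, swir):
        """Modified Normalized Difference Water Index (Xu, 2006)."""
        green = green.astype(np.float64)
        swir = swir.astype(np.float64)
        return (green - swir) / (green + swir + 1e-12)   # small epsilon avoids 0/0

    def water_mask(index, threshold=0.0):
        """Pixels with mNDWI above the threshold are flagged as water."""
        return index > threshold

    # Random reflectance arrays stand in for a green band and a SWIR band.
    green = np.random.rand(100, 100)
    swir = np.random.rand(100, 100)
    mask = water_mask(mndwi(green, swir))
    print("water fraction:", mask.mean())
    ```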

  6. Beyond space and time: advanced selection for seismological data

    Science.gov (United States)

    Trabant, C. M.; Van Fossen, M.; Ahern, T. K.; Casey, R. E.; Weertman, B.; Sharer, G.; Benson, R. B.

    2017-12-01

    Separating the available raw data from that useful for any given study is often a tedious step in a research project, particularly for first-order data quality problems such as broken sensors, incorrect response information, and non-continuous time series. With the ever increasing amounts of data available to researchers, this chore becomes more and more time consuming. To assist users in this pre-processing of data, the IRIS Data Management Center (DMC) has created a system called Research Ready Data Sets (RRDS). The RRDS system allows researchers to apply filters that constrain their data request using criteria related to signal quality, response correctness, and high resolution data availability. In addition to the traditional selection methods of stations at a geographic location for given time spans, RRDS will provide enhanced criteria for data selection based on many of the measurements available in the DMC's MUSTANG quality control system. This means that data may be selected based on background noise (tolerance relative to high and low noise Earth models), signal-to-noise ratio for earthquake arrivals, signal RMS, instrument response corrected signal correlation with Earth tides, time tear (gaps/overlaps) counts, timing quality (when reported in the raw data by the datalogger) and more. The new RRDS system is available as a web service designed to operate as a request filter. A request is submitted containing the traditional station and time constraints as well as data quality constraints. The request is then filtered and a report is returned that indicates 1) the request that would subsequently be submitted to a data access service, 2) a record of the quality criteria specified and 3) a record of the data rejected based on those criteria, including the relevant values. This service can be used to either filter a request prior to requesting the actual data or to explore which data match a set of enhanced criteria without downloading the data. We are

  7. Raster microdiffraction with synchrotron radiation of hydrated biopolymers with nanometre step-resolution: case study of starch granules

    International Nuclear Information System (INIS)

    Riekel, C.; Burghammer, M.; Davies, R. J.; Di Cola, E.; König, C.; Lemke, H.T.; Putaux, J.-L.; Schöder, S.

    2010-01-01

    Radiation damage propagation was examined in starch granules by synchrotron radiation micro- and nano-diffraction techniques from cryo- to room temperatures. Careful dose limitation allowed raster-diffraction experiments with 500 nm step resolution to be performed. X-ray radiation damage propagation is explored for hydrated starch granules in order to reduce the step resolution in raster-microdiffraction experiments to the nanometre range. Radiation damage was induced by synchrotron radiation microbeams of 5, 1 and 0.3 µm size with ∼0.1 nm wavelength in B-type potato, Canna edulis and Phajus grandifolius starch granules. A total loss of crystallinity of granules immersed in water was found at a dose of ∼1.3 photons nm⁻³. The temperature dependence of radiation damage suggests that primary radiation damage prevails up to about 120 K while secondary radiation damage becomes effective at higher temperatures. Primary radiation damage remains confined to the beam track at 100 K. Propagation of radiation damage beyond the beam track at room temperature is assumed to be due to reactive species generated principally by water radiolysis induced by photoelectrons. By careful dose selection during data collection, raster scans with 500 nm step resolution could be performed for granules immersed in water.

  8. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study...... illustrates how the projected climate change signal of important variables as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make...... the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...

  9. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide or disguise, and it is highly polymorphic.
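
    The chi-square comparison described above tests observed model eye-colour counts against counts expected from population frequencies. A minimal sketch is shown below; the counts and population frequencies are placeholders, not the study's data.

    ```python
    # Hedged illustration of a chi-square goodness-of-fit test of model eye colours
    # against expected counts derived from (assumed) population frequencies.
    from scipy.stats import chisquare

    observed = [240, 380, 180]                 # blue, brown, intermediate among 800 models (hypothetical)
    population_freq = [0.45, 0.35, 0.20]       # assumed population frequencies (hypothetical)
    expected = [f * sum(observed) for f in population_freq]

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
    # A small p-value indicates the model eye-colour frequencies deviate from population
    # frequencies, the pattern interpreted as negative frequency-dependent selection.
    ```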

  10. Physical fundamentals of high-resolution computerized tomography

    International Nuclear Information System (INIS)

    Kalender, W.A.; Suess, C.

    1985-01-01

    A model is presented that allows an assessment of the influence of various factors on local or spatial resolution. It is important to treat data collection, image reconstruction and image display separately. Different aspects, depending on the device type, apply during data collection. When assessing and further developing device types, attention should be given to the weakest links in the chain, which determine local resolution. However, it should never be forgotten that local resolution is only one parameter describing image quality. Low-contrast resolution and freedom from artifacts are at least as important parameters for the assessment of the CT system as a whole. (orig.) [de]

  11. Concentration dependence of the light yield and energy resolution of NaI:Tl and CsI:Tl crystals excited by gamma, soft X-rays and alpha particles

    CERN Document Server

    Trefilova, L N; Kovaleva, L V; Zaslavsky, B G; Zosim, D I; Bondarenko, S K

    2002-01-01

    Based on the analysis of the dependence of light yield on activator concentration for NaI:Tl and CsI:Tl excited by gamma-rays, soft X-rays and alpha particles, an explanation is proposed for the enhancement of energy resolution with increasing Tl content. Based on the concept of electron track structure, we propose an alternative explanation of the intrinsic resolution value. This concept does not take into account the non-proportional response to electrons of different energies and is instead based on the statistical fluctuation of the number of scintillation photons formed outside and inside the regions of higher ionization density.

  12. Scale-Dependent Habitat Selection and Size-Based Dominance in Adult Male American Alligators.

    Directory of Open Access Journals (Sweden)

    Bradley A Strickland

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their

  13. Scale-dependent habitat selection and size-based dominance in adult male American alligators

    Science.gov (United States)

    Strickland, Bradley A.; Vilella, Francisco; Belant, Jerrold L.

    2016-01-01

    Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance

  14. KML-Based Access and Visualization of High Resolution LiDAR Topography

    Science.gov (United States)

    Crosby, C. J.; Blair, J. L.; Nandigam, V.; Memon, A.; Baru, C.; Arrowsmith, J. R.

    2008-12-01

    Over the past decade, there has been dramatic growth in the acquisition of LiDAR (Light Detection And Ranging) high-resolution topographic data for earth science studies. Capable of providing digital elevation models (DEMs) more than an order of magnitude higher resolution than those currently available, LiDAR data allow earth scientists to study the processes that contribute to landscape evolution at resolutions not previously possible yet essential for their appropriate representation. These datasets also have significant implications for earth science education and outreach because they provide an accurate representation of landforms and geologic hazards. Unfortunately, the massive volume of data produced by LiDAR mapping technology can be a barrier to their use. To make these data available to a larger user community, we have been exploring the use of Keyhole Markup Language (KML) and Google Earth to provide access to LiDAR data products and visualizations. LiDAR digital elevation models are typically delivered in a tiled format that lends itself well to a KML-based distribution system. For LiDAR datasets hosted in the GEON OpenTopography Portal (www.opentopography.org) we have developed KML files that show the extent of available LiDAR DEMs and provide direct access to the data products. Users interact with these KML files to explore the extent of the available data and are able to select DEMs that correspond to their area of interest. Selection of a tile loads a download that the user can then save locally for analysis in their software of choice. The GEON topography system also has tools available that allow users to generate custom DEMs from LiDAR point cloud data. This system is powerful because it enables users to access massive volumes of raw LiDAR data and to produce DEM products that are optimized to their science applications. We have developed a web service that converts the custom DEM models produced by the system to a hillshade that is delivered to

  15. How can we model selectively neutral density dependence in evolutionary games.

    Science.gov (United States)

    Argasinski, Krzysztof; Kozłowski, Jan

    2008-03-01

    The problem of density dependence appears in all approaches to the modelling of population dynamics. It is pertinent to classic models (i.e., Lotka-Volterra's), and also population genetics and game theoretical models related to the replicator dynamics. There is no density dependence in the classic formulation of replicator dynamics, which means that population size may grow to infinity. Therefore the question arises: How is unlimited population growth suppressed in frequency-dependent models? Two categories of solutions can be found in the literature. In the first, replicator dynamics is independent of background fitness. In the second type of solution, a multiplicative suppression coefficient is used, as in a logistic equation. Both approaches have disadvantages. The first one is incompatible with the methods of life history theory and basic probabilistic intuitions. The logistic type of suppression of per capita growth rate stops trajectories of selection when population size reaches the maximal value (carrying capacity); hence this method does not satisfy selective neutrality. To overcome these difficulties, we must explicitly consider turn-over of individuals dependent on mortality rate. This new approach leads to two interesting predictions. First, the equilibrium value of population size is lower than carrying capacity and depends on the mortality rate. Second, although the phase portrait of selection trajectories is the same as in density-independent replicator dynamics, pace of selection slows down when population size approaches equilibrium, and then remains constant and dependent on the rate of turn-over of individuals.
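
    For orientation, the two standard formulations the abstract contrasts can be written as follows. This is a hedged sketch of the textbook background only (the density-independent replicator equation and the logistic-type multiplicative suppression), not the authors' mortality-based model.

    ```latex
    % Density-independent replicator dynamics for trait frequencies x_i:
    \dot{x}_i = x_i \bigl( f_i(\mathbf{x}) - \bar{f}(\mathbf{x}) \bigr),
    \qquad
    \bar{f}(\mathbf{x}) = \sum_j x_j f_j(\mathbf{x}).

    % Logistic-type multiplicative suppression applied to per-capita growth of subpopulations n_i,
    % with total population size n = \sum_i n_i and x_i = n_i / n:
    \dot{n}_i = n_i \, f_i(\mathbf{x}) \left( 1 - \frac{n}{K} \right)
    \;\;\Longrightarrow\;\;
    \dot{x}_i = x_i \bigl( f_i(\mathbf{x}) - \bar{f}(\mathbf{x}) \bigr) \left( 1 - \frac{n}{K} \right).
    ```

    In the second form, the selection trajectories stall as n approaches the carrying capacity K, which is the violation of selective neutrality the authors aim to remove by modelling mortality-driven turnover of individuals explicitly.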

  16. Evaluate Hydrologic Response on Spatiotemporal Characteristics of Rainfall Using High Resolution Radar Rainfall Data and WRF-Hydro Model

    Science.gov (United States)

    Gao, S.; Fang, N. Z.

    2017-12-01

    A previously developed Dynamic Moving Storm (DMS) generator is a multivariate rainfall model simulating the complex nature of precipitation field: spatial variability, temporal variability, and storm movement. Previous effort by the authors has investigated the sensitivity of DMS parameters on corresponding hydrologic responses by using synthetic storms. In this study, the DMS generator has been upgraded to generate more realistic precipitation field. The dependence of hydrologic responses on rainfall features was investigated by dissecting the precipitation field into rain cells and modifying their spatio-temporal specification individually. To retrieve DMS parameters from radar rainfall data, rain cell segmentation and tracking algorithms were respectively developed and applied on high resolution radar rainfall data (1) to spatially determine the rain cells within individual radar image and (2) to temporally analyze their dynamic behavior. Statistics of DMS parameters were established by processing a long record of rainfall data (10 years) to keep the modification on real storms within the limit of regional climatology. Empirical distributions of the DMS parameters were calculated to reveal any preferential pattern and seasonality. Subsequently, the WRF-Hydro model forced by the remodeled and modified precipitation was used for hydrologic simulation. The study area was the Upper Trinity River Basin (UTRB) watershed, Texas; and two kinds of high resolution radar data i.e. the Next-Generation Radar (NEXRAD) level III Digital Hybrid Reflectivity (DHR) product and Multi-Radar Multi-Sensor (MRMS) precipitation rate product, were utilized to establish parameter statistics and to recreate/remodel historical events respectively. The results demonstrated that rainfall duration is a significant linkage between DMS parameters and their hydrologic impacts—any combination of spatiotemporal characteristics that keep rain cells longer over the catchment will produce higher

  17. Zooming into local active galactic nuclei: the power of combining SDSS-IV MaNGA with higher resolution integral field unit observations

    Science.gov (United States)

    Wylezalek, Dominika; Schnorr Müller, Allan; Zakamska, Nadia L.; Storchi-Bergmann, Thaisa; Greene, Jenny E.; Müller-Sánchez, Francisco; Kelly, Michael; Liu, Guilin; Law, David R.; Barrera-Ballesteros, Jorge K.; Riffel, Rogemar A.; Thomas, Daniel

    2017-05-01

    Ionized gas outflows driven by active galactic nuclei (AGN) are ubiquitous in high-luminosity AGN with outflow speeds apparently correlated with the total bolometric luminosity of the AGN. This empirical relation and theoretical work suggest that in the range Lbol ∼ 10^43-10^45 erg s^-1 there must exist a threshold luminosity above which the AGN becomes powerful enough to launch winds that will be able to escape the galaxy potential. In this paper, we present pilot observations of two AGN in this transitional range that were taken with the Gemini North Multi-Object Spectrograph integral field unit (IFU). Both sources have also previously been observed within the Sloan Digital Sky Survey-IV (SDSS) Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey. While the MaNGA IFU maps probe the gas fields on galaxy-wide scales and show that some regions are dominated by AGN ionization, the new Gemini IFU data zoom into the centre with four times better spatial resolution. In the object with the lower Lbol we find evidence of a young or stalled biconical AGN-driven outflow where none was obvious at the MaNGA resolution. In the object with the higher Lbol we trace the large-scale biconical outflow into the nuclear region and connect the outflow from small to large scales. These observations suggest that AGN luminosity and galaxy potential are crucial in shaping wind launching and propagation in low-luminosity AGN. The transition from small and young outflows to galaxy-wide feedback can only be understood by combining large-scale IFU data that trace the galaxy velocity field with higher resolution, small-scale IFU maps.

  18. High resolution x-ray fluorescence spectroscopy - a new technique for site- and spin-selectivity

    International Nuclear Information System (INIS)

    Wang, Xin

    1996-12-01

    X-ray spectroscopy has long been used to elucidate electronic and structural information of molecules. One of the weaknesses of x-ray absorption is its sensitivity to all of the atoms of a particular element in a sample. Throughout this thesis, a new technique for enhancing the site- and spin-selectivity of x-ray absorption has been developed. With high-resolution fluorescence detection, the chemical sensitivity of K emission spectra can be used to identify oxidation and spin states; it can also be used to facilitate site-selective X-ray Absorption Near Edge Structure (XANES) and site-selective Extended X-ray Absorption Fine Structure (EXAFS). The spin polarization in K fluorescence can be used to generate spin-selective XANES or spin-polarized EXAFS, which provides a new measure of the spin density, or of the nature of magnetic neighboring atoms. Finally, dramatic line-sharpening effects arising from the combination of absorption and emission processes allow observation of structure that is normally unobservable. All these unique characteristics can enormously simplify a complex x-ray spectrum. Applications of this novel technique have generated information from various transition-metal model compounds to metalloproteins. The absorption and emission spectra obtained by high-resolution fluorescence detection are interdependent. The ligand-field multiplet model has been used for the analysis of Kα and Kβ emission spectra. A first demonstration on different chemical states of Fe compounds has shown the applicability of site selectivity and spin polarization. Different interatomic distances of the same element in different chemical forms have been detected using site-selective EXAFS.

  19. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data

    International Nuclear Information System (INIS)

    Wang, Tingting; Cao, Lei; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2015-01-01

    Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial point in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) that uses an adaptive-patch-based projection onto convex sets (POCS) approach, in contrast with the global POCS SR algorithm, to recover fine details with fewer artifacts in images. The main contribution of this patch-based approach is that interfering local structure from other phases can be rejected by employing an adaptive selection strategy for similar patches. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method. (paper)

  20. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Directory of Open Access Journals (Sweden)

    Lina Carlini

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term 'wobble', results in warped 3D SR images, and we provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.

  1. Temporal resolution for the perception of features and conjunctions.

    Science.gov (United States)

    Bodelón, Clara; Fallah, Mazyar; Reynolds, John H

    2007-01-24

    The visual system decomposes stimuli into their constituent features, represented by neurons with different feature selectivities. How the signals carried by these feature-selective neurons are integrated into coherent object representations is unknown. To constrain the set of possible integrative mechanisms, we quantified the temporal resolution of perception for color, orientation, and conjunctions of these two features. We find that temporal resolution is measurably higher for each feature than for their conjunction, indicating that time is required to integrate features into a perceptual whole. This finding places temporal limits on the mechanisms that could mediate this form of perceptual integration.

  2. Comparing DIF methods for data with dual dependency

    Directory of Open Access Journals (Sweden)

    Ying Jin

    2016-09-01

    Background: The current study compared four differential item functioning (DIF) methods to examine their performance in accounting for dual dependency (i.e., person and item clustering effects) simultaneously by means of a simulation study, an issue that has not been sufficiently studied in the current DIF literature. The four methods compared are logistic regression accounting for neither person nor item clustering effects, hierarchical logistic regression accounting for the person clustering effect, the testlet model accounting for the item clustering effect, and the multilevel testlet model accounting for both person and item clustering effects. The secondary goal of the current study was to evaluate the trade-off between simple models and complex models for the accuracy of DIF detection. An empirical example analyzing the 2011 TIMSS Mathematics data was also included to demonstrate the differential performance of the four DIF methods. A number of DIF analyses have been done on the TIMSS data, and these analyses have rarely accounted for the dual dependency of the data. Results: Results indicated the complex models did not outperform simple models under certain conditions, especially when DIF parameters were considered in addition to significance tests. Conclusions: Results of the current study could provide supporting evidence for applied researchers in selecting the appropriate DIF methods under various conditions.
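
    As a point of reference, the simplest of the four methods (ordinary logistic regression with no clustering adjustment) tests for DIF by regressing the item response on a matching score, group membership and their interaction. A minimal sketch on synthetic data is given below; the variable names, effect sizes and use of statsmodels are illustrative assumptions, not the study's setup.

    ```python
    # Sketch of ordinary logistic-regression DIF, ignoring person and item clustering.
    # `item` is a 0/1 response, `total` the matching (total) score, `group` the focal/reference flag.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400
    total = rng.integers(0, 31, size=n)          # total test score (ability proxy)
    group = rng.integers(0, 2, size=n)           # 0 = reference, 1 = focal
    # Simulate a mildly DIF-ed item: the focal group has a lower intercept (uniform DIF).
    logit_p = -3.0 + 0.25 * total - 0.6 * group
    p = 1.0 / (1.0 + np.exp(-logit_p))
    item = rng.binomial(1, p)

    df = pd.DataFrame({"item": item, "total": total, "group": group})
    # Uniform DIF shows up as a significant `group` effect, non-uniform DIF as a significant
    # `total:group` interaction, once ability (total score) is conditioned on.
    model = smf.logit("item ~ total + group + total:group", data=df).fit(disp=0)
    print(model.params)
    ```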

  3. High-resolution Local Gravity Model of the South Pole of the Moon from GRAIL Extended Mission Data

    Science.gov (United States)

    Goossens, Sander Johannes; Sabaka, Terence J.; Nicholas, Joseph B.; Lemoine, Frank G.; Rowlands, David D.; Mazarico, Erwan; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.

    2014-01-01

    We estimated a high-resolution local gravity field model over the south pole of the Moon using data from the Gravity Recovery and Interior Laboratory's extended mission. Our solution consists of adjustments with respect to a global model expressed in spherical harmonics. The adjustments are expressed as gridded gravity anomalies with a resolution of 1/6° by 1/6° (equivalent to that of a degree and order 1080 model in spherical harmonics), covering a cap over the south pole with a radius of 40°. The gravity anomalies have been estimated from a short-arc analysis using only Ka-band range-rate (KBRR) data over the area of interest. We apply a neighbor-smoothing constraint to our solution. Our local model removes striping present in the global model; it reduces the misfit to the KBRR data and improves correlations with topography to higher degrees than current global models.

  4. On the use of logarithmic scales for analysis of diffraction data.

    Science.gov (United States)

    Urzhumtsev, Alexandre; Afonine, Pavel V; Adams, Paul D

    2009-12-01

    Predictions of the possible model parameterization and of the values of model characteristics such as R factors are important for macromolecular refinement and validation protocols. One of the key parameters defining these and other values is the resolution of the experimentally measured diffraction data. The higher the resolution, the larger the number of diffraction data N_ref, the larger its ratio to the number N_at of non-H atoms, the more parameters per atom can be used for modelling and the more precise and detailed a model can be obtained. The ratio N_ref/N_at was calculated for models deposited in the Protein Data Bank as a function of the resolution at which the structures were reported. The most frequent values for this distribution depend essentially linearly on resolution when the latter is expressed on a uniform logarithmic scale. This defines simple analytic formulae for the typical Matthews coefficient and for the typically allowed number of parameters per atom for crystals diffracting to a given resolution. This simple dependence makes it possible in many cases to estimate the expected resolution of the experimental data for a crystal with a given Matthews coefficient. When expressed using the same logarithmic scale, the most frequent values for R and R_free factors and for their difference are also essentially linear across a large resolution range. The minimal R-factor values are practically constant at resolutions better than 3 Å, below which they begin to grow sharply. This simple dependence on the resolution allows the prediction of expected R-factor values for unknown structures and may be used to guide model refinement and validation.
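
    To make the N_ref/N_at ratio concrete, the sketch below uses the standard geometric estimate of the number of unique reflections inside the resolution sphere (reciprocal-lattice points, corrected for Friedel pairs and point-group symmetry) for an assumed cell volume, symmetry and atom count. The numbers are illustrative placeholders, not values or formulae from the paper.

    ```python
    # Back-of-the-envelope sketch of the data-to-atom ratio as a function of resolution.
    import math

    def n_unique_reflections(cell_volume_A3, d_min_A, point_group_order=4):
        """Approximate number of unique reflections to resolution d_min (angstroms)."""
        points_in_sphere = (4.0 / 3.0) * math.pi * cell_volume_A3 / d_min_A ** 3
        return points_in_sphere / (2.0 * point_group_order)   # Friedel pairs + symmetry

    cell_volume = 2.0e5     # Å^3, hypothetical orthorhombic crystal (point group order 4)
    n_atoms = 1500          # non-H atoms in the asymmetric unit, hypothetical

    for d in (3.0, 2.0, 1.5, 1.0):
        n_ref = n_unique_reflections(cell_volume, d)
        print(f"d_min = {d:.1f} Å : N_ref ~ {n_ref:,.0f}, N_ref/N_at ~ {n_ref / n_atoms:.1f}")
    ```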

  5. The fusion of satellite and UAV data: simulation of high spatial resolution band

    Science.gov (United States)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery from satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a simulation of the panchromatic bands from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
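
    A hedged sketch of the panchromatic-band simulation step described above is given below: a simulated pan band built as a linear combination of the UAV RGB channels. The weights are assumptions for illustration (the paper derives its own combination), and the subsequent Gram-Schmidt pansharpening step, typically run in remote-sensing software, is not shown.

    ```python
    # Simulated panchromatic band as a weighted linear combination of UAV RGB channels.
    import numpy as np

    def simulate_pan(red, green, blue, weights=(0.35, 0.45, 0.20)):
        """Weighted sum of RGB reflectances approximating a broadband panchromatic channel."""
        w_r, w_g, w_b = weights
        pan = (w_r * red.astype(np.float64)
               + w_g * green.astype(np.float64)
               + w_b * blue.astype(np.float64))
        return pan / (w_r + w_g + w_b)    # keep the result on the input reflectance scale

    rgb = np.random.rand(3, 512, 512)     # stand-in for a high-resolution UAV orthophoto
    pan = simulate_pan(*rgb)
    print(pan.shape, float(pan.min()), float(pan.max()))
    ```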

  6. Integrating Landsat Data and High-Resolution Imagery for Applied Conservation Assessment of Forest Cover in Latin American Heterogenous Landscapes

    Science.gov (United States)

    Thomas, N.; Rueda, X.; Lambin, E.; Mendenhall, C. D.

    2012-12-01

    Large intact forested regions of the world are known to be critical to maintaining Earth's climate, ecosystem health, and human livelihoods. Remote sensing has been successfully implemented as a tool to monitor forest cover and landscape dynamics over broad regions. Much of this work has been done using coarse resolution sensors such as AVHRR and MODIS in combination with moderate resolution sensors, particularly Landsat. Finer scale analysis of heterogeneous and fragmented landscapes is commonly performed with medium resolution data and has had varying success depending on many factors including the level of fragmentation, variability of land cover types, patch size, and image availability. Fine scale tree cover in mixed agricultural areas can have a major impact on biodiversity and ecosystem sustainability but may often be inadequately captured with the global to regional (coarse resolution and moderate resolution) satellite sensors and processing techniques widely used to detect land use and land cover changes. This study investigates whether advanced remote sensing methods are able to assess and monitor percent tree canopy cover in spatially complex human-dominated agricultural landscapes that prove challenging for traditional mapping techniques. Our study areas are in high altitude, mixed agricultural coffee-growing regions in Costa Rica and the Colombian Andes. We applied Random Forests regression tree analysis to Landsat data along with additional spectral, environmental, and spatial variables to predict percent tree canopy cover at 30m resolution. Image object-based texture, shape, and neighborhood metrics were generated at the Landsat scale using eCognition and included in the variable suite. Training and validation data was generated using high resolution imagery from digital aerial photography at 1m to 2.5 m resolution. Our results are promising with Pearson's correlation coefficients between observed and predicted percent tree canopy cover of .86 (Costa
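
    A minimal sketch of the regression-tree step described above is given below: a Random Forests regressor relating Landsat-scale predictors to percent tree canopy cover derived from high-resolution imagery. The feature set, synthetic data and hyperparameters are illustrative assumptions, not the study's configuration.

    ```python
    # Random Forests regression of percent canopy cover on Landsat bands plus ancillary predictors.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from scipy.stats import pearsonr

    rng = np.random.default_rng(42)
    n = 2000
    X = rng.random((n, 8))            # e.g. 6 Landsat bands + elevation + an object-based texture metric
    canopy = 100 * np.clip(0.6 * X[:, 3] + 0.3 * X[:, 6] + 0.1 * rng.random(n), 0, 1)

    X_train, X_test, y_train, y_test = train_test_split(X, canopy, test_size=0.3, random_state=0)
    rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=3, random_state=0)
    rf.fit(X_train, y_train)

    r, _ = pearsonr(y_test, rf.predict(X_test))
    print(f"Pearson r between observed and predicted canopy cover: {r:.2f}")
    ```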

  7. Anthropogenic heat flux: advisable spatial resolutions when input data are scarce

    Science.gov (United States)

    Gabey, A. M.; Grimmond, C. S. B.; Capel-Timms, I.

    2018-02-01

    Anthropogenic heat flux (QF) may be significant in cities, especially under low solar irradiance and at night. It is of interest to many practitioners including meteorologists, city planners and climatologists. QF estimates at fine temporal and spatial resolution can be derived from models that use varying amounts of empirical data. This study compares simple and detailed models in a European megacity (London) at 500 m spatial resolution. The simple model (LQF) uses spatially resolved population data and national energy statistics. The detailed model (GQF) additionally uses local energy, road network and workday population data. The Fractions Skill Score (FSS) and bias are used to rate the skill with which the simple model reproduces the spatial patterns and magnitudes of QF, and its sub-components, from the detailed model. LQF skill was consistently good across 90% of the city, away from the centre and major roads. The remaining 10% contained elevated emissions and "hot spots" representing 30-40% of the total city-wide energy. This structure was lost because it requires workday population, spatially resolved building energy consumption and/or road network data. Daily total building and traffic energy consumption estimates from national data were within ± 40% of local values. Progressively coarser spatial resolutions to 5 km improved skill for total QF, but important features (hot spots, transport network) were lost at all resolutions when residential population controlled spatial variations. The results demonstrate that simple QF models should be applied with conservative spatial resolution in cities that, like London, exhibit time-varying energy use patterns.
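
    The Fractions Skill Score used above compares neighbourhood fractions of threshold exceedance between two fields. A minimal sketch of the standard Roberts and Lean (2008) formulation is shown below; the threshold, window size and random stand-in fields are assumptions for illustration, not the paper's configuration.

    ```python
    # Fractions Skill Score (FSS) between a "simple" and a "detailed" gridded field.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fss(model, obs, threshold, window):
        """FSS of `model` against `obs` for a given exceedance threshold and neighbourhood size."""
        f_model = uniform_filter((model >= threshold).astype(float), size=window, mode="constant")
        f_obs = uniform_filter((obs >= threshold).astype(float), size=window, mode="constant")
        mse = np.mean((f_model - f_obs) ** 2)
        mse_ref = np.mean(f_model ** 2) + np.mean(f_obs ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    rng = np.random.default_rng(1)
    detailed = rng.gamma(2.0, 5.0, size=(200, 200))            # stand-in for the detailed QF field
    simple = detailed + rng.normal(0, 2.0, size=(200, 200))    # stand-in for the simple QF field
    print(f"FSS = {fss(simple, detailed, threshold=10.0, window=5):.2f}")
    ```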

  8. Higher Resolution for Water Resources Studies

    Science.gov (United States)

    Dumenil-Gates, L.

    2009-12-01

    The Earth system science community is providing an increasing range of science results for the benefit of achieving the Millennium Development Goals. In addressing questions such as reducing poverty and hunger, achieving sustainable global development, or by defining adaptation strategies for climate change, one of the key issues will be the quantitative description and understanding of the global water cycle, which will allow useful projections of available future water resources for several decades ahead. The quantities of global water cycle elements that we observe today - and deal with in hydrologic and atmospheric modeling - are already very different from the natural flows as human influence on the water cycle by storage, consumption and edifice has been going on for millennia, and climate change is expected to add more uncertainty. In this case Tony Blair’s comment that perhaps the most worrying problem is climate change does not cover the full story. We shall also have to quantify how the human demand for water resources and alterations of the various elements of the water cycle may proceed in the future: will there be enough of the precious water resource to sustain current and future demands by the various sectors involved? The topics that stakeholders and decision makers concerned with managing water resources are interested in cover a variety of human uses such as agriculture, energy production, ecological flow requirements to sustain biodiversity and ecosystem services, or human cultural aspects, recreation and human well-being - all typically most relevant at the regional or local scales, this being quite different from the relatively large-scale that the IPCC assessment addresses. Halfway through the Millennium process, the knowledge base of the global water cycle is still limited. The sustainability of regional water resources is best assessed through a research program that combines high-resolution climate and hydrologic models for expected

  9. Practical Programming with Higher-Order Encodings and Dependent Types

    DEFF Research Database (Denmark)

    Poswolsky, Adam; Schürmann, Carsten

    2008-01-01

    , tedious, and error-prone. In this paper, we describe the underlying calculus of Delphin. Delphin is a fully implemented functional-programming language supporting reasoning over higher-order encodings and dependent types, while maintaining the benefits of HOAS. More specifically, just as representations...... for instantiation from those that will remain uninstantiated, utilizing a variation of Miller and Tiu’s ∇-quantifier [1]....

  10. Novel ultrahigh resolution data acquisition and image reconstruction for multi-detector row CT

    International Nuclear Information System (INIS)

    Flohr, T. G.; Stierstorfer, K.; Suess, C.; Schmidt, B.; Primak, A. N.; McCollough, C. H.

    2007-01-01

    We present and evaluate a special ultrahigh resolution mode providing considerably enhanced spatial resolution both in the scan plane and in the z-axis direction for a routine medical multi-detector row computed tomography (CT) system. Data acquisition is performed by using a flying focal spot both in the scan plane and in the z-axis direction in combination with tantalum grids that are inserted in front of the multi-row detector to reduce the aperture of the detector elements both in-plane and in the z-axis direction. The dose utilization of the system for standard applications is not affected, since the grids are moved into place only when needed and are removed for standard scanning. By means of this technique, image slices with a nominal section width of 0.4 mm (measured full width at half maximum=0.45 mm) can be reconstructed in spiral mode on a CT system with a detector configuration of 32x0.6 mm. The measured 2% value of the in-plane modulation transfer function (MTF) is 20.4 lp/cm, the measured 2% value of the longitudinal (z axis) MTF is 21.5 lp/cm. In a resolution phantom with metal line pair test patterns, spatial resolution of 20 lp/cm can be demonstrated both in the scan plane and along the z axis. This corresponds to an object size of 0.25 mm that can be resolved. The new mode is intended for ultrahigh resolution bone imaging, in particular for wrists, joints, and inner ear studies, where a higher level of image noise due to the reduced aperture is an acceptable trade-off for the clinical benefit brought about by the improved spatial resolution

  11. Sensitivity of Global Methane Bayesian Inversion to Surface Observation Data Sets and Chemical-Transport Model Resolution

    Science.gov (United States)

    Lew, E. J.; Butenhoff, C. L.; Karmakar, S.; Rice, A. L.; Khalil, A. K.

    2017-12-01

    Methane is the second most important greenhouse gas after carbon dioxide. In efforts to control emissions, a careful examination of the methane budget and source strengths is required. To determine methane surface fluxes, Bayesian methods are often used to provide top-down constraints. Inverse modeling derives unknown fluxes using observed methane concentrations, a chemical transport model (CTM) and prior information. The Bayesian inversion reduces prior flux uncertainties by exploiting the information content in the data. While the Bayesian formalism produces internal error estimates of source fluxes, systematic or external errors that arise from user choices in the inversion scheme are often much larger. Here we examine the sensitivity and uncertainty of our inversion under different observation data sets and CTM grid resolutions. We compare posterior surface fluxes using the data product GLOBALVIEW-CH4 against the event-level molar mixing ratio data available from NOAA. GLOBALVIEW-CH4 is a collection of CH4 concentration estimates from 221 sites, collected by 12 laboratories, that have been interpolated and extracted to provide weekly records from 1984-2008. In contrast, the event-level NOAA data record methane mixing-ratio field measurements from 102 sites, with sampling-frequency irregularities and gaps in time. Furthermore, the sampling platform types used by the data sets may influence the posterior flux estimates, namely fixed surface, tower, ship and aircraft sites. To explore the sensitivity of the posterior surface fluxes to the observation network geometry, inversions composed of all sites, only aircraft, only ship, only tower, and only fixed surface sites are performed and compared. Also, we investigate the sensitivity of the error reduction associated with the resolution of the GEOS-Chem simulation (4°×5° vs 2°×2.5°) used to calculate the response matrix. Using a higher resolution grid decreased the model-data error at most sites, thereby
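
    For context, the Gaussian Bayesian synthesis inversion that such studies typically build on combines prior fluxes x_a (error covariance B) with observations y (error covariance R) through the CTM response matrix H. The generic posterior-mean and posterior-covariance expressions are sketched below as a hedged reference; this is the standard textbook form, not necessarily the authors' exact formulation.

    ```latex
    \hat{\mathbf{x}} = \mathbf{x}_a
      + \left(\mathbf{H}^{\mathsf{T}}\mathbf{R}^{-1}\mathbf{H} + \mathbf{B}^{-1}\right)^{-1}
        \mathbf{H}^{\mathsf{T}}\mathbf{R}^{-1}\left(\mathbf{y} - \mathbf{H}\mathbf{x}_a\right),
    \qquad
    \hat{\mathbf{S}} = \left(\mathbf{H}^{\mathsf{T}}\mathbf{R}^{-1}\mathbf{H} + \mathbf{B}^{-1}\right)^{-1}.
    ```

    The "error reduction" for a source region is then commonly reported as 1 - sigma_posterior/sigma_prior, computed from the corresponding diagonal elements of S-hat and B.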

  12. Fuel type characterization based on coarse resolution MODIS satellite data

    Directory of Open Access Journals (Sweden)

    Lanorte A

    2007-01-01

    Fuel type is one of the most important factors that should be taken into consideration when computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. In the present study, forest fuel mapping is considered from a remote sensing perspective. The purpose is to delineate forest types by exploring the use of coarse-resolution MODIS satellite remote sensing imagery. In order to ascertain how well MODIS data can provide an exhaustive classification of fuel properties, a sample area characterized by mixed vegetation covers and complex topography was analysed. The study area is located in the south of Italy. Fieldwork fuel type recognition, performed before, after and during the acquisition of the remote sensing MODIS data, was used as the ground-truth dataset to assess the obtained results. The method comprised the following three steps: (I) adaptation of the Prometheus fuel types to obtain a standardization system useful for the remotely sensed classification of fuel types and properties in the considered Mediterranean ecosystems; (II) model construction for the spectral characterization and mapping of fuel types based on two different approaches, the maximum likelihood (ML) classification algorithm and spectral Mixture Analysis (MTMF); (III) accuracy assessment for the performance evaluation, based on the comparison of the MODIS-based results with the ground truth. Results from our analyses showed that the use of remotely sensed MODIS data provided a valuable characterization and mapping of fuel types, as the achieved classification accuracy was higher than 73% for the ML classifier and higher than 83% for MTMF.

  13. MULTIVARIATE CURVE RESOLUTION OF NMR SPECTROSCOPY METABONOMIC DATA

    Science.gov (United States)

    Sandia National Laboratories is working with the EPA to evaluate and develop mathematical tools for analysis of the collected NMR spectroscopy data. Initially, we have focused on the use of Multivariate Curve Resolution (MCR) also known as molecular factor analysis (MFA), a tech...

  14. Variational data assimilation system with nesting model for high resolution ocean circulation

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Yoichi; Igarashi, Hiromichi; Hiyoshi, Yoshimasa; Sasaki, Yuji; Wakamatsu, Tsuyoshi; Awaji, Toshiyuki [Center for Earth Information Science and Technology, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-Ku, Yokohama 236-0001 (Japan); In, Teiji [Japan Marine Science Foundation, 4-24, Minato-cho, Mutsu, Aomori, 035-0064 (Japan); Nakada, Satoshi [Graduate School of Maritime Science, Kobe University, 5-1-1, Fukae-minamimachi, Higashinada-Ku, Kobe, 658-0022 (Japan); Nishina, Kei, E-mail: ishikaway@jamstec.go.jp [Graduate School of Science, Kyoto University, Kitashirakawaoiwake-cho, Sakyo-Ku, Kyoto, 606-8502 (Japan)

    2015-10-15

    To obtain the high-resolution analysis fields for ocean circulation, a new incremental approach is developed using a four-dimensional variational data assimilation system with nesting models. The results show that there are substantial biases when using a classical method combined with data assimilation and downscaling, caused by different dynamics resulting from the different resolutions of the models used within the nesting models. However, a remarkable reduction in biases of the low-resolution model relative to the high-resolution model was observed using our new approach in narrow strait regions, such as the Tsushima and Tsugaru straits, where the difference in the dynamics represented by the high- and low-resolution models is substantial. In addition, error reductions are demonstrated in the downstream region of these narrow channels associated with the propagation of information through the model dynamics. (paper)

  15. Analysis of bovine milk caseins on organic monolithic columns: an integrated capillary liquid chromatography-high resolution mass spectrometry approach for the study of time-dependent casein degradation.

    Science.gov (United States)

    Pierri, Giuseppe; Kotoni, Dorina; Simone, Patrizia; Villani, Claudio; Pepe, Giacomo; Campiglia, Pietro; Dugo, Paola; Gasparrini, Francesco

    2013-10-25

    Casein proteins constitute approximately 80% of the proteins present in bovine milk and account for many of its nutritional and technological properties. The analysis of the casein fraction in commercially available pasteurized milk and the study of its time-dependent degradation are of considerable interest in the agro-food industry. Here we present new analytical methods for the study of caseins in fresh and expired bovine milk, based on the use of lab-made capillary organic monolithic columns. An integrated capillary high-performance liquid chromatography and high-resolution mass spectrometry (Cap-LC-HRMS) approach was developed, exploiting the excellent resolution, permeability and biocompatibility of organic monoliths, which is easily adaptable to the analysis of intact proteins. The resolution obtained on the lab-made Protein-Cap-RP-Lauryl-γ-Monolithic column (270 mm × 0.250 mm, length × internal diameter, L × I.D.) in the analysis of commercial standard caseins (αS-CN, β-CN and κ-CN) by Cap-HPLC-UV was compared to that observed using two packed capillary C4 columns, the ACE C4 (3 μm, 150 mm × 0.300 mm, L × I.D.) and the Jupiter C4 (5 μm, 150 mm × 0.300 mm, L × I.D.). Owing to its higher resolution, the monolithic capillary column was chosen for the subsequent degradation studies of casein fractions extracted from bovine milk 1-4 weeks after the expiry date. The comparison of the UV chromatographic profiles of skim, semi-skim and whole milk showed greater stability of whole milk towards time-dependent degradation of caseins, which was further supported by high-resolution analysis on a 50-cm long monolithic column using a 120-min gradient. In parallel, the exact monoisotopic and average molecular masses of intact αS-CN and β-CN protein standards were obtained through high-resolution mass spectrometry and used for casein identification in the Cap-LC-HRMS analysis. Finally, the proteolytic degradation of β-CN in skim milk

  16. Entity resolution in the web of data

    CERN Document Server

    Christophides, Vassilis; Stefanidis, Kostas

    2015-01-01

    In recent years, several knowledge bases have been built to enable large-scale knowledge sharing, but also an entity-centric Web search, mixing both structured data and text querying. These knowledge bases offer machine-readable descriptions of real-world entities, e.g., persons, places, published on the Web as Linked Data. However, due to the different information extraction tools and curation policies employed by knowledge bases, multiple, complementary and sometimes conflicting descriptions of the same real-world entities may be provided. Entity resolution aims to identify different descrip

  17. Geo-statistical model of Rainfall erosivity by using high temporal resolution precipitation data in Europe

    Science.gov (United States)

    Panagos, Panos; Ballabio, Cristiano; Borrelli, Pasquale; Meusburger, Katrin; Alewell, Christine

    2015-04-01

    Rainfall erosivity (R-factor) is among the 6 input factors in estimating soil erosion risk by using the empirical Revised Universal Soil Loss Equation (RUSLE). The R-factor is a driving force for soil erosion modelling and can potentially be used in flood risk assessments, landslide susceptibility, post-fire damage assessment, application of agricultural management practices and climate change modelling. Rainfall erosivity is extremely difficult to model at large scale (national, European) due to the lack of high temporal resolution precipitation data covering long time series. In most cases, the R-factor is estimated with empirical equations which take into account precipitation volume. The Rainfall Erosivity Database on the European Scale (REDES) is the output of an extensive collection of high resolution precipitation data in the 28 Member States of the European Union plus Switzerland, carried out during 2013-2014 in collaboration with national meteorological/environmental services. Due to the different temporal resolutions of the data (5, 10, 15, 30, 60 minutes), conversion equations have been applied in order to homogenise the database at a 30-minute interval. The 1,541 stations included in REDES have been interpolated using the Gaussian Process Regression (GPR) model, using as covariates the climatic data (monthly precipitation, monthly temperature, wettest/driest month) from the WorldClim Database, a Digital Elevation Model and latitude/longitude. GPR was selected among other candidate models (GAM, Regression Kriging) due to its best performance both in cross-validation (R2 = 0.63) and in fitting the dataset (R2 = 0.72). The highest uncertainty was found in North-western Scotland, northern Sweden and Finland due to the limited number of stations in REDES. Also, in highlands such as the Alpine arch and the Pyrenees, the diversity of environmental features resulted in relatively high uncertainty. The rainfall erosivity map of Europe available at 500m resolution plus the standard error
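
    A minimal sketch of the interpolation step is given below: Gaussian Process Regression of station R-factor values with climatic and topographic covariates, using scikit-learn as a stand-in for the authors' implementation. The file names, column names and kernel choice are assumptions for illustration only.

```python
# Sketch: GPR interpolation of station R-factor values with covariates.
# Inputs and kernel settings are hypothetical, not the study's exact setup.
import numpy as np
import pandas as pd
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import StandardScaler

stations = pd.read_csv("redes_stations.csv")        # hypothetical REDES extract
covariates = ["monthly_precip", "monthly_temp", "elevation", "lon", "lat"]

scaler = StandardScaler().fit(stations[covariates].values)
X = scaler.transform(stations[covariates].values)
y = stations["r_factor"].values

kernel = 1.0 * RBF(length_scale=np.ones(len(covariates))) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict on a grid of the same covariates (e.g. 500 m cells) and keep the
# predictive standard deviation as the uncertainty (standard error) surface.
grid = pd.read_csv("covariate_grid.csv")             # hypothetical grid table
r_pred, r_std = gpr.predict(scaler.transform(grid[covariates].values), return_std=True)
```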

  18. Condition-dependence, pleiotropy and the handicap principle of sexual selection in melanin-based colouration.

    Science.gov (United States)

    Roulin, Alexandre

    2016-05-01

    The signalling function of melanin-based colouration is debated. Sexual selection theory states that ornaments should be costly to produce, maintain, wear or display to signal quality honestly to potential mates or competitors. An increasing number of studies supports the hypothesis that the degree of melanism covaries with aspects of body condition (e.g. body mass or immunity), which has contributed to change the initial perception that melanin-based colour ornaments entail no costs. Indeed, the expression of many (but not all) melanin-based colour traits is weakly sensitive to the environment but strongly heritable suggesting that these colour traits are relatively cheap to produce and maintain, thus raising the question of how such colour traits could signal quality honestly. Here I review the production, maintenance and wearing/displaying costs that can generate a correlation between melanin-based colouration and body condition, and consider other evolutionary mechanisms that can also lead to covariation between colour and body condition. Because genes controlling melanic traits can affect numerous phenotypic traits, pleiotropy could also explain a linkage between body condition and colouration. Pleiotropy may result in differently coloured individuals signalling different aspects of quality that are maintained by frequency-dependent selection or local adaptation. Colouration may therefore not signal absolute quality to potential mates or competitors (e.g. dark males may not achieve a higher fitness than pale males); otherwise genetic variation would be rapidly depleted by directional selection. As a consequence, selection on heritable melanin-based colouration may not always be directional, but mate choice may be conditional to environmental conditions (i.e. context-dependent sexual selection). Despite the interest of evolutionary biologists in the adaptive value of melanin-based colouration, its actual role in sexual selection is still poorly understood.

  19. A novel fast gas chromatography method for higher time resolution measurements of speciated monoterpenes in air

    Science.gov (United States)

    Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.

    2014-05-01

    Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in ambient air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C9-C15 BVOC composition of single plant emissions may be characterised within a 14.5 min analysis time. Moreover, in-situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an 11.7 min chromatographic separation time (increasing to 19.7 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). These analysis times potentially allow for a twofold to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in-situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC (OBVOC) linalool in ambient air. During this field deployment within a suburban forest

  20. Effect of radar rainfall time resolution on the predictive capability of a distributed hydrologic model

    Science.gov (United States)

    Atencia, A.; Llasat, M. C.; Garrote, L.; Mediero, L.

    2010-10-01

    The performance of distributed hydrological models depends on the resolution, both spatial and temporal, of the rainfall surface data introduced. The estimation of quantitative precipitation from meteorological radar or satellite can improve hydrological model results, thanks to an indirect estimation at higher spatial and temporal resolution. In this work, composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km2 spatial resolution, provided by the Catalan Meteorological Service, is used to feed the RIBS distributed hydrological model. A Window Probability Matching Method (gage-adjustment method) is applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by both the convective and stratiform Z/R relations used over Catalonia. Once the rainfall field has been adequately obtained, an advection correction, based on cross-correlation between two consecutive images, was introduced to derive several time resolutions from 1 min to 30 min. Each resolution is treated as an independent event, resulting in a probable range of input rainfall data. This ensemble of rainfall data is used, together with other sources of uncertainty, such as the initial basin state or the accuracy of discharge measurements, to calibrate the RIBS model using a probabilistic methodology. A sensitivity analysis of time resolutions was implemented by comparing the various results with real values from stream-flow measurement stations.
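
    The advection-correction step mentioned above can be sketched as follows: a single displacement is estimated between two consecutive radar fields by cross-correlation, and the earlier field is shifted by a fraction of that displacement to obtain an intermediate time step. The array names are illustrative, and the scheme used in the study is more elaborate than this minimal version.

```python
# Sketch: estimate an advection vector between two radar scans by
# cross-correlation and build an intermediate field. Illustration only.
import numpy as np
from scipy.signal import correlate
from scipy.ndimage import shift as nd_shift

def advection_vector(field_t0, field_t1):
    """Displacement (dy, dx), in pixels, from the correlation peak."""
    a = field_t0 - field_t0.mean()
    b = field_t1 - field_t1.mean()
    corr = correlate(b, a, mode="same", method="fft")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    # The sign convention depends on the correlation definition and may need
    # flipping for a particular radar grid orientation.
    return peak[0] - centre[0], peak[1] - centre[1]

def intermediate_field(field_t0, field_t1, frac):
    """Advect field_t0 by a fraction (0 < frac < 1) of the displacement."""
    dy, dx = advection_vector(field_t0, field_t1)
    return nd_shift(field_t0, (frac * dy, frac * dx), order=1, mode="nearest")

# e.g. a field halfway between two 6-min radar scans:
# rain_3min = intermediate_field(rain_t0, rain_t1, frac=0.5)
```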

  1. Applications of Data Mining in Higher Education

    OpenAIRE

    Monika Goyal; Rajan Vohra

    2012-01-01

    Data analysis plays an important role in decision support irrespective of the type of organisation, from manufacturing units to education systems. There are many domains in which data mining techniques play an important role. This paper proposes the use of data mining techniques to improve the efficiency of higher education institutions. If data mining techniques such as clustering, decision trees and association are applied to higher education processes, it would help to improve students performa...

  2. Image processing and resolution restoration of fast-neutron hodoscope data

    International Nuclear Information System (INIS)

    Rhodes, E.A.; DeVolpi, A.

    1982-01-01

    The fast-neutron hodoscope is a cineradiographic device that monitors fuel motion within thick opaque test capsules during radiation transients at the Transient Reactor Test Facility reactor which simulate accident conditions in power reactors. By means of a collimator and detector array, emissive neutron radiographic digital data is collected in time-resolved and static (scan) modes. Data is digitally reconstructed into radiographic images and used directly for quantitative analysis. Spatial resolution is adequate in most cases, but is marginal for a few experiments. Collimator repositioning, scan increment size reduction, and deconvolution algorithms are being applied to improve the resolution of existing collimators

  3. Selective saturation method for EPR dosimetry with tooth enamel

    International Nuclear Information System (INIS)

    Ignatiev, E.A.; Romanyukha, A.A.; Koshta, A.A.; Wieser, A.

    1996-01-01

    The method of selective saturation is based on the difference in the microwave (mw) power dependence of the background and radiation-induced EPR components of the tooth enamel spectrum. The subtraction of the EPR spectrum recorded at low mw power from that recorded at higher mw power provides a considerable reduction of the background component in the spectrum. The resolution of the EPR spectrum could be improved 10-fold; however, the signal-to-noise ratio was simultaneously reduced by a factor of two. A detailed comparative study of reference samples with known absorbed doses was performed to demonstrate the advantage of the method. The application of the selective saturation method for EPR dosimetry with tooth enamel reduced the lower limit of EPR dosimetry to about 100 mGy. (author)
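
    The subtraction at the core of the selective saturation method can be illustrated with a short sketch; the scaling factor and the two-column file format below are assumptions for illustration, not values from the paper.

```python
# Sketch: scale the low-power EPR spectrum and subtract it from the high-power
# spectrum to suppress the background component. Illustration only.
import numpy as np

field, high_power = np.loadtxt("epr_high_power.txt", unpack=True)  # hypothetical files
_,     low_power  = np.loadtxt("epr_low_power.txt",  unpack=True)

# k accounts for the different saturation behaviour of the background signal;
# in practice it would be tuned so that the background component cancels.
k = 1.8
radiation_component = high_power - k * low_power

# A simple dose-related quantity: peak-to-peak amplitude of the residual signal.
peak_to_peak = radiation_component.max() - radiation_component.min()
```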

  4. Image Quality in High-resolution and High-cadence Solar Imaging

    Science.gov (United States)

    Denker, C.; Dineva, E.; Balthasar, H.; Verma, M.; Kuckein, C.; Diercke, A.; González Manrique, S. J.

    2018-03-01

    Broad-band imaging and even imaging with a moderate bandpass (about 1 nm) provides a photon-rich environment, where frame selection (lucky imaging) becomes a helpful tool in image restoration, allowing us to perform a cost-benefit analysis on how to design observing sequences for imaging with high spatial resolution in combination with real-time correction provided by an adaptive optics (AO) system. This study presents high-cadence (160 Hz) G-band and blue continuum image sequences obtained with the High-resolution Fast Imager (HiFI) at the 1.5-meter GREGOR solar telescope, where the speckle-masking technique is used to restore images with nearly diffraction-limited resolution. The HiFI employs two synchronized large-format and high-cadence sCMOS detectors. The median filter gradient similarity (MFGS) image-quality metric is applied, among others, to AO-corrected image sequences of a pore and a small sunspot observed on 2017 June 4 and 5. A small region of interest, which was selected for fast-imaging performance, covered these contrast-rich features and their neighborhood, which were part of Active Region NOAA 12661. Modifications of the MFGS algorithm uncover the field- and structure-dependency of this image-quality metric. However, MFGS still remains a good choice for determining image quality without a priori knowledge, which is an important characteristic when classifying the huge number of high-resolution images contained in data archives. In addition, this investigation demonstrates that a fast cadence and millisecond exposure times are still insufficient to reach the coherence time of daytime seeing. Nonetheless, the analysis shows that data acquisition rates exceeding 50 Hz are required to capture a substantial fraction of the best seeing moments, significantly boosting the performance of post-facto image restoration.
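
    As an illustration of the frame-selection (lucky imaging) idea discussed above, the sketch below ranks frames of a high-cadence burst by a simple RMS-contrast metric and keeps the best fraction. Both the metric and the 10% selection fraction are stand-ins chosen for illustration; they are not the MFGS metric or the selection criteria used in the study.

```python
# Sketch: frame selection on a high-cadence burst using a simple quality metric.
import numpy as np

def rms_contrast(frame):
    """A basic image-quality proxy: RMS intensity contrast of the frame."""
    return frame.std() / frame.mean()

def select_best_frames(burst, keep_fraction=0.10):
    """burst: array of shape (n_frames, ny, nx); returns the best frames."""
    scores = np.array([rms_contrast(f) for f in burst])
    n_keep = max(1, int(keep_fraction * len(burst)))
    best = np.argsort(scores)[::-1][:n_keep]
    return burst[best]

# e.g. at 160 Hz, a 1 s burst of 160 frames yields 16 selected frames that
# would then be passed to the speckle-masking reconstruction.
```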

  5. LiDAR Remote Sensing of Forest Structure and GPS Telemetry Data Provide Insights on Winter Habitat Selection of European Roe Deer

    Directory of Open Access Journals (Sweden)

    Michael Ewald

    2014-06-01

    The combination of GPS telemetry and resource selection functions is widely used to analyze animal habitat selection. Rapid large-scale assessment of vegetation structure allows bridging the requirements of habitat selection studies on grain size and extent, particularly in forest habitats. For roe deer, the cold period in winter forces individuals to optimize their trade-off between searching for food and shelter. We analyzed the winter habitat selection of roe deer (Capreolus capreolus) in a montane forest landscape, combining estimates of vegetation cover in three different height strata, derived from high resolution airborne laser scanning (LiDAR, light detection and ranging), and activity data from GPS telemetry. Specifically, we tested the influence of temperature, snow height, and wind speed on site selection, differentiating between active and resting animals using mixed-effects conditional logistic regression models in a case-control design. Site selection was best explained by temperature deviations from hourly means, snow height, and activity status of the animals. Roe deer tended to use forests of high canopy cover more frequently with decreasing temperature, and when snow height exceeded 0.6 m. Active animals preferred lower canopy cover, but higher understory cover. Our approach demonstrates the potential of LiDAR measures for studying fine-scale habitat selection in complex three-dimensional habitats, such as forests.

  6. Super-resolution fluorescence imaging of nanoimprinted polymer patterns by selective fluorophore adsorption combined with redox switching

    KAUST Repository

    Yabiku, Y.; Kubo, S.; Nakagawa, M.; Vacha, M.; Habuchi, Satoshi

    2013-01-01

    We applied a super-resolution fluorescence imaging based on selective adsorption and redox switching of the fluorescent dye molecules for studying polymer nanostructures. We demonstrate that nano-scale structures of polymer thin films can

  7. Clickstream data yields high-resolution maps of science

    Energy Technology Data Exchange (ETDEWEB)

    Bollen, Johan [Los Alamos National Laboratory; Van De Sompel, Herbert [Los Alamos National Laboratory; Hagberg, Aric [Los Alamos National Laboratory; Bettencourt, Luis [Los Alamos National Laboratory; Chute, Ryan [Los Alamos National Laboratory; Rodriguez, Marko A [Los Alamos National Laboratory; Balakireva, Lyudmila [Los Alamos National Laboratory

    2009-01-01

    Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science.

  8. Best Technology Practices of Conflict Resolution Specialists: A Case Study of Online Dispute Resolution at United States Universities

    Science.gov (United States)

    Law, Kimberli Marie

    2013-01-01

    The purpose of this study was to remedy the paucity of knowledge about higher education's conflict resolution practice of online dispute resolution by providing an in-depth description of mediator and instructor online practices. Telephone interviews were used as the primary data collection method. Eleven interview questions were relied upon to…

  9. The coupling of high-speed high resolution experimental data and LES through data assimilation techniques

    Science.gov (United States)

    Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.

    2017-11-01

    Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed for assimilating high-speed high-resolution measurements obtained for an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen as it provides both high spatial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for two data assimilation techniques. The third objective is to investigate how the frequency at which the data is assimilated affects the overall predictions.
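
    Of the two assimilation techniques mentioned, nudging (Newtonian relaxation) is the simpler to sketch: at each step the simulated field is relaxed toward the measurements wherever observations exist. The relaxation timescale and observation mask below are assumptions for illustration, not the weightings determined in the study.

```python
# Sketch: a nudging (Newtonian relaxation) update toward observations.
import numpy as np

def nudge(u_sim, u_obs, obs_mask, dt, tau=1e-3):
    """Relax u_sim toward u_obs on the masked points with timescale tau [s]."""
    increment = (dt / tau) * (u_obs - u_sim)
    return u_sim + obs_mask * increment

# Inside the LES time loop one would call, per velocity component, e.g.:
# u = nudge(u, u_measured_on_grid, mask_shear_layer, dt)
```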

  10. High resolution and high speed positron emission tomography data acquisition

    International Nuclear Information System (INIS)

    Burgiss, S.G.; Byars, L.G.; Jones, W.F.; Casey, M.E.

    1986-01-01

    High resolution positron emission tomography (PET) requires many detectors. Thus, data collection systems for PET must have high data rates, wide data paths, and large memories to histogram the events. This design uses the VMEbus to cost effectively provide these features. It provides for several modes of operation including real time sorting, list mode data storage, and replay of stored list mode data

  11. Feasibility analysis of high resolution tissue image registration using 3-D synthetic data

    Directory of Open Access Journals (Sweden)

    Yachna Sharma

    2011-01-01

    Background: Registration of high-resolution tissue images is a critical step in the 3D analysis of protein expression. Because the distance between images (~4-5 μm, the thickness of a tissue section) is nearly the size of the objects of interest (~10-20 μm, a cancer cell nucleus), a given object is often not present in both of two adjacent images. Without consistent correspondence of objects between images, registration becomes a difficult task. This work assesses the feasibility of current registration techniques for such images. Methods: We generated high resolution synthetic 3-D image data sets emulating the constraints in real data. We applied multiple registration methods to the synthetic image data sets and assessed the registration performance of three techniques (mutual information (MI), the kernel density estimate (KDE) method [1], and principal component analysis (PCA)) at various slice thicknesses (with increments of 1 μm) in order to quantify the limitations of each method. Results: Our analysis shows that PCA, when combined with the KDE method based on nuclei centers, aligns images corresponding to 5 μm thick sections with acceptable accuracy. We also note that registration error increases rapidly with increasing distance between images, and that the choice of feature points which are conserved between slices improves performance. Conclusions: We used simulation to help select appropriate features and methods for image registration by estimating best-case-scenario errors for given data constraints in histological images. The results of this study suggest that much of the difficulty of stained tissue registration can be reduced to the problem of accurately identifying feature points, such as the centers of nuclei.
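
    The PCA component of such a registration pipeline can be illustrated with a minimal sketch that aligns two sets of nucleus centres by their centroids and principal axes; the KDE-based refinement used in the study is not reproduced here, and the point arrays are hypothetical.

```python
# Sketch: rigid pre-alignment of nucleus centres by centroid + principal axes.
import numpy as np

def pca_align(points):
    """points: (n, 2) array of nucleus centres; returns centred, rotated points."""
    centred = points - points.mean(axis=0)
    cov = np.cov(centred.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # major axis first
    # Note: eigenvector signs are ambiguous; a full method would resolve them.
    return centred @ eigvecs[:, order]

# slice_a, slice_b: (n, 2) arrays of detected nucleus centres (hypothetical)
# aligned_a, aligned_b = pca_align(slice_a), pca_align(slice_b)
# Residual misalignment can then be scored, e.g. by nearest-neighbour distances.
```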

  12. Data-driven gating in PET: Influence of respiratory signal noise on motion resolution.

    Science.gov (United States)

    Büther, Florian; Ernst, Iris; Frohwein, Lynn Johann; Pouw, Joost; Schäfers, Klaus Peter; Stegger, Lars

    2018-05-21

    Data-driven gating (DDG) approaches for positron emission tomography (PET) are interesting alternatives to conventional hardware-based gating methods. In DDG, the measured PET data themselves are utilized to calculate a respiratory signal that is subsequently used for gating purposes. The success of gating is then highly dependent on the statistical quality of the PET data. In this study, we investigate how this quality determines signal noise and thus motion resolution in clinical PET scans using a center-of-mass-based (COM) DDG approach, specifically with regard to motion management of target structures in future radiotherapy planning applications. PET list mode datasets acquired in one bed position of 19 different radiotherapy patients undergoing pretreatment [18F]FDG PET/CT or [18F]FDG PET/MRI were included in this retrospective study. All scans were performed over a region with organs (myocardium, kidneys) or tumor lesions of high tracer uptake and under free breathing. Aside from the original list mode data, datasets with progressively decreasing PET statistics were generated. From these, COM DDG signals were derived for subsequent amplitude-based gating of the original list mode file. The apparent respiratory shift d from end-expiration to end-inspiration was determined from the gated images and expressed as a function of the signal-to-noise ratio (SNR) of the derived gating signals. This relation was tested against an additional 25 [18F]FDG PET/MRI list mode datasets where high-precision MR navigator-like respiratory signals were available as reference signal for respiratory gating of PET data, and data from a dedicated thorax phantom scan. All original 19 high-quality list mode datasets demonstrated the same behavior in terms of motion resolution when reducing the amount of list mode events for DDG signal generation. Ratios and directions of respiratory shifts between end-respiratory gates and the respective nongated image were constant over all
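
    A minimal sketch of the centre-of-mass (COM) signal generation is shown below: list-mode events are binned in time and the axial coordinate is averaged per bin to yield a respiratory surrogate. The assumed event layout (a time stamp and an axial position per event) and the bin width are illustrative, not the scanner's actual list-mode format.

```python
# Sketch: centre-of-mass respiratory surrogate from binned list-mode events.
import numpy as np

def com_signal(event_times, event_z, bin_width=0.5):
    """Return time-bin centres and the axial centre of mass per bin."""
    t_edges = np.arange(event_times.min(), event_times.max() + bin_width, bin_width)
    idx = np.digitize(event_times, t_edges) - 1
    n_bins = len(t_edges) - 1
    com = np.full(n_bins, np.nan)
    for b in range(n_bins):
        z = event_z[idx == b]
        if z.size:
            com[b] = z.mean()
    t_centres = 0.5 * (t_edges[:-1] + t_edges[1:])
    return t_centres, com

# Amplitude-based gating would then, e.g., keep events falling in time bins whose
# COM value lies within the end-expiration quantile range of the signal.
```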

  13. Hydrological Applications of a High-Resolution Radar Precipitation Data Base for Sweden

    Science.gov (United States)

    Olsson, Jonas; Berg, Peter; Norin, Lars; Simonsson, Lennart

    2017-04-01

    rainfall-runoff process in large, slow river basins, which traditionally has been the main focus in the national forecasting, an hourly time step (or preferably even shorter) is required to simulate the flow in fast-responding basins. At the daily scale, the PTHBV product is used for model initialization prior to the forecasts but with its daily resolution it is not applicable at the hourly scale. For this purpose, a real-time version of HIPRAD has been developed which is currently running operationally. HIPRAD is also being used for historical simulations with an hourly time step, which is important for e.g. water quality assessment. Finally, we will use HIPRAD to gain an improved knowledge of the short-duration precipitation climate in Sweden. Currently there are many open issues with respect to e.g. geographical differences, spatial correlations and areal extremes. Here we will show and discuss selected results from the ongoing development and validation of HIPRAD as well as its various applications for hydrological forecasting and risk assessment. Further, web resources containing radar-based observation and forecasting for hydrological applications will be demonstrated. Finally, some future research directions will be outlined. Fast responding hydrological catchments require fine spatial and temporal resolution of the precipitation input data to provide realistic results.

  14. On the use of logarithmic scales for analysis of diffraction data

    Energy Technology Data Exchange (ETDEWEB)

    Urzhumtsev, Alexandre, E-mail: sacha@igbmc.fr [IGBMC, CNRS-INSERM-UdS, 1 Rue Laurent Fries, BP 10142, 67404 Illkirch (France); Physics Department, University of Nancy, BP 239, Faculté des Sciences et des Technologies, 54506 Vandoeuvre-lès-Nancy (France); Afonine, Pavel V. [Lawrence Berkeley National Laboratory, One Cyclotron Road, BLDG 64R0121, Berkeley, CA 94720 (United States); Adams, Paul D. [Lawrence Berkeley National Laboratory, One Cyclotron Road, BLDG 64R0121, Berkeley, CA 94720 (United States); Department of Bioengineering, University of California Berkeley, Berkeley, CA 94720 (United States); IGBMC, CNRS-INSERM-UdS, 1 Rue Laurent Fries, BP 10142, 67404 Illkirch (France)

    2009-12-01

    Conventional and free R factors and their difference, as well as the ratio of the number of measured reflections to the number of atoms in the crystal, were studied as functions of the resolution at which the structures were reported. When the resolution was taken uniformly on a logarithmic scale, the most frequent values of these functions were quasi-linear over a large resolution range. Predictions of the possible model parameterization and of the values of model characteristics such as R factors are important for macromolecular refinement and validation protocols. One of the key parameters defining these and other values is the resolution of the experimentally measured diffraction data. The higher the resolution, the larger the number of diffraction data N_ref, the larger its ratio to the number N_at of non-H atoms, the more parameters per atom can be used for modelling and the more precise and detailed a model can be obtained. The ratio N_ref/N_at was calculated for models deposited in the Protein Data Bank as a function of the resolution at which the structures were reported. The most frequent values for this distribution depend essentially linearly on resolution when the latter is expressed on a uniform logarithmic scale. This defines simple analytic formulae for the typical Matthews coefficient and for the typically allowed number of parameters per atom for crystals diffracting to a given resolution. This simple dependence makes it possible in many cases to estimate the expected resolution of the experimental data for a crystal with a given Matthews coefficient. When expressed using the same logarithmic scale, the most frequent values for R and R_free factors and for their difference are also essentially linear across a large resolution range. The minimal R-factor values are practically constant at resolutions better than 3 Å, below which they begin to grow sharply. This simple dependence on the resolution allows the prediction of
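
    The quasi-linear behaviour on a logarithmic resolution scale amounts to a simple fit of the form y = a·log(d) + b, where d is the resolution. The sketch below illustrates such a fit with placeholder numbers, which are not values from the paper.

```python
# Sketch: fit y = a*log(d) + b for a quantity such as the most frequent
# N_ref/N_at ratio as a function of resolution d (Angstrom). Placeholder data.
import numpy as np

d = np.array([1.0, 1.5, 2.0, 2.5, 3.0])       # resolution (Å), placeholder
ratio = np.array([20.0, 9.0, 5.0, 3.5, 2.5])  # N_ref / N_at, placeholder

a, b = np.polyfit(np.log(d), ratio, deg=1)

def expected_ratio(resolution):
    """Predicted N_ref/N_at for a crystal diffracting to `resolution` Å."""
    return a * np.log(resolution) + b
```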

  15. A scalable multi-resolution spatio-temporal model for brain activation and connectivity in fMRI data

    KAUST Repository

    Castruccio, Stefano

    2018-01-23

    Functional Magnetic Resonance Imaging (fMRI) is a primary modality for studying brain activity. Modeling spatial dependence of imaging data at different spatial scales is one of the main challenges of contemporary neuroimaging, and it could allow for accurate testing for significance in neural activity. The high dimensionality of this type of data (on the order of hundreds of thousands of voxels) poses serious modeling challenges and considerable computational constraints. For the sake of feasibility, standard models typically reduce dimensionality by modeling covariance among regions of interest (ROIs)—coarser or larger spatial units—rather than among voxels. However, ignoring spatial dependence at different scales could drastically reduce our ability to detect activation patterns in the brain and hence produce misleading results. We introduce a multi-resolution spatio-temporal model and a computationally efficient methodology to estimate cognitive control related activation and whole-brain connectivity. The proposed model allows for testing voxel-specific activation while accounting for non-stationary local spatial dependence within anatomically defined ROIs, as well as regional dependence (between-ROIs). The model is used in a motor-task fMRI study to investigate brain activation and connectivity patterns aimed at identifying associations between these patterns and regaining motor functionality following a stroke.

  16. Object-Based Paddy Rice Mapping Using HJ-1A/B Data and Temporal Features Extracted from Time Series MODIS NDVI Data.

    Science.gov (United States)

    Singha, Mrinal; Wu, Bingfang; Zhang, Miao

    2016-12-22

    Accurate and timely mapping of paddy rice is vital for food security and environmental sustainability. This study evaluates the utility of temporal features extracted from coarse resolution data for object-based paddy rice classification of fine resolution data. The coarse resolution vegetation index data is first fused with the fine resolution data to generate the time series fine resolution data. Temporal features are extracted from the fused data and added with the multi-spectral data to improve the classification accuracy. Temporal features provided the crop growth information, while multi-spectral data provided the pattern variation of paddy rice. The achieved overall classification accuracy and kappa coefficient were 84.37% and 0.68, respectively. The results indicate that the use of temporal features improved the overall classification accuracy of a single-date multi-spectral image by 18.75% from 65.62% to 84.37%. The minimum sensitivity (MS) of the paddy rice classification has also been improved. The comparison showed that the mapped paddy area was analogous to the agricultural statistics at the district level. This work also highlighted the importance of feature selection to achieve higher classification accuracies. These results demonstrate the potential of the combined use of temporal and spectral features for accurate paddy rice classification.
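
    A minimal sketch of combining per-object multi-spectral features with temporal features derived from the fused NDVI time series is given below. The feature names, file layout and the random-forest classifier are assumptions chosen for illustration and may differ from the object-based workflow used in the study.

```python
# Sketch: classify segmented objects as paddy/non-paddy from spectral + temporal features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

objects = pd.read_csv("segment_features.csv")          # hypothetical per-object table
spectral = ["b_green", "b_red", "b_nir"]                # per-object HJ-1A/B band means
temporal = ["ndvi_peak", "ndvi_peak_doy", "ndvi_rate"]  # fused time-series features

X = objects[spectral + temporal]
y = objects["is_paddy"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```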

  17. Propagation of Data Dependency through Distributed Cooperating Processes

    Science.gov (United States)

    1988-09-01

    The External Data Dependency Analyzer (EDDA) derives external data dependencies by performing two levels of analysis. (Only table-of-contents fragments of the report are recoverable here, listing sections on the EDDA, EDDA patch files for a Dining Philosophers example, and limitations.)

  18. Can Low-Resolution Airborne Laser Scanning Data Be Used to Model Stream Rating Curves?

    Directory of Open Access Journals (Sweden)

    Steve W. Lyon

    2015-03-01

    This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.

  19. Can low-resolution airborne laser scanning data be used to model stream rating curves?

    Science.gov (United States)

    Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar

    2015-01-01

    This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
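
    As a simplified stand-in for the physics-based approach of these two records, the sketch below derives a stage-discharge (rating) curve from an ALS-derived cross-section with Manning's equation, Q = (1/n)·A·R^(2/3)·S^(1/2). The roughness, slope and cross-section geometry are assumptions for illustration; the study's actual flow-resistance formulation is more complete.

```python
# Sketch: rating curve from a surveyed cross-section using Manning's equation.
import numpy as np

def rating_curve(x, z, stages, n=0.05, slope=0.002):
    """x, z: cross-section station and bed elevation [m]; stages: water levels [m]."""
    discharges = []
    for h in stages:
        depth = np.clip(h - z, 0.0, None)
        # wetted area by the trapezoid rule
        area = np.sum(0.5 * (depth[:-1] + depth[1:]) * np.diff(x))
        # wetted perimeter along submerged bed segments
        dx, dz = np.diff(x), np.diff(np.minimum(z, h))
        wet = (depth[:-1] > 0) | (depth[1:] > 0)
        perimeter = np.sum(np.sqrt(dx**2 + dz**2)[wet])
        r_h = area / perimeter if perimeter > 0 else 0.0
        discharges.append(area * r_h ** (2.0 / 3.0) * np.sqrt(slope) / n)
    return np.array(discharges)

# Example with a placeholder triangular channel:
x = np.linspace(0.0, 10.0, 21)
z = np.abs(x - 5.0) * 0.4
print(rating_curve(x, z, stages=np.array([0.5, 1.0, 1.5])))
```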

  20. An Ultra-high Resolution Synthetic Precipitation Data for Ungauged Sites

    Science.gov (United States)

    Kim, Hong-Joong; Choi, Kyung-Min; Oh, Jai-Ho

    2018-05-01

    Despite the enormous damage caused by record heavy rainfall, the amount of precipitation in areas without observation points cannot be known precisely. One way to overcome these difficulties is to estimate meteorological data at ungauged sites. In this study, we have used observation data over Seoul city to calculate high-resolution (250-meter resolution) synthetic precipitation over a 10-year (2005-2014) period. Furthermore, three cases are analyzed by evaluating the rainfall intensity and performing statistical analysis over the 10-year period. For the case in which typhoon "Meari" passed along the west coast during 28-30 June 2011, the Pearson correlation coefficient was 0.93 for seven validation points, which implies that the temporal correlation between the observed precipitation and synthetic precipitation was very good. Over this period, the time series of synthetic precipitation almost completely matches the observed rainfall. On June 28-29, 2011, the continuous strong precipitation of 10 to 30 mm h-1 was estimated correctly. In addition, it is shown that the synthetic precipitation closely follows the observed precipitation for all three cases. Statistical analysis of 10 years of data reveals a very high correlation coefficient between synthetic precipitation and observed rainfall (0.86). Thus, synthetic precipitation data show good agreement with the observations. Therefore, the 250-m resolution synthetic precipitation amount calculated in this study is useful as basic data in weather applications, such as urban flood detection.
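
    The underlying task, estimating precipitation at ungauged sites from surrounding observations, can be illustrated with a simple inverse-distance-weighting baseline. This is not the 250-m synthetic precipitation method of the study; the coordinates and values below are placeholders.

```python
# Sketch: inverse-distance-weighted precipitation estimate at an ungauged site.
import numpy as np

def idw(target_xy, gauge_xy, gauge_p, power=2.0):
    """target_xy: (2,), gauge_xy: (n, 2), gauge_p: (n,) precipitation [mm]."""
    d = np.linalg.norm(gauge_xy - target_xy, axis=1)
    if np.any(d == 0):
        return gauge_p[np.argmin(d)]          # target coincides with a gauge
    w = 1.0 / d**power
    return np.sum(w * gauge_p) / np.sum(w)

gauges = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])   # km, placeholder
rain = np.array([12.0, 8.0, 15.0])                          # mm, placeholder
print(idw(np.array([1.5, 1.5]), gauges, rain))
```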

  1. High resolution weather data for urban hydrological modelling and impact assessment, ICT requirements and future challenges

    Science.gov (United States)

    ten Veldhuis, Marie-claire; van Riemsdijk, Birna

    2013-04-01

    Hydrological analysis of urban catchments requires high resolution rainfall and catchment information because of the small size of these catchments, the high spatial variability of the urban fabric, fast runoff processes and related short response times. Rainfall information available from traditional radar and rain gauge networks does not meet the relevant scales of urban hydrology. A new type of weather radar, based on X-band frequency and equipped with Doppler and dual polarimetry capabilities, promises to provide more accurate rainfall estimates at the spatial and temporal scales that are required for urban hydrological analysis. Recently, the RAINGAIN project was started to analyse the applicability of this new type of radar in the context of urban hydrological modelling. In this project, meteorologists and hydrologists work closely together in several stages of urban hydrological analysis: from the acquisition procedure of novel and high-end radar products to data acquisition and processing, rainfall data retrieval, hydrological event analysis and forecasting. The project comprises four pilot locations with various characteristics of weather radar equipment, ground stations, urban hydrological systems, modelling approaches and requirements. Access to data processing and modelling software is handled in different ways in the pilots, depending on ownership and user context. Sharing of data and software among pilots and with the outside world is an ongoing topic of discussion. The availability of high resolution weather data augments requirements with respect to the resolution of hydrological models and input data. This has led to the development of fully distributed hydrological models, the implementation of which remains limited by the unavailability of hydrological input data. On the other hand, if models are to be used in flood forecasting, hydrological models need to be computationally efficient to enable fast responses to extreme event conditions. This

  2. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the impact of global changes on species. In human-dominated ecosystems, the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) simultaneously estimates land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with those of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasting patterns according to species and spatial resolutions. The magnitude and direction of these differences depended on the correlation between the errors of the two equations and were largest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can lead to substantial misestimation of the modelled current and future probability of presence. Beyond this selection bias, the SSDM we propose represents

  3. Impact of the choice of the precipitation reference data set on climate model selection and the resulting climate change signal

    Science.gov (United States)

    Gampe, D.; Ludwig, R.

    2017-12-01

    Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool to project future climate and serve as input to many impact models to assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. A bias correction of these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases are highly dependent on the selected regional reference data set. Additionally, in practice due to computational constraints it is usually not feasible to consider the entire ensembles of climate simulations with all members as input for impact models which provide information to support decision-making. Although more and more studies focus on model selection based on the preservation of the climate model spread, a selection based on validity, i.e. the representation of historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989 - 2008 over the alpine catchment of the Adige River located in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions. The resulting RCM spread is compared against the spread of the reference data set to determine the related uncertainties and

  4. Hyper-resolution monitoring of urban flooding with social media and crowdsourcing data

    Science.gov (United States)

    Wang, Ruo-Qian; Mao, Huina; Wang, Yuan; Rae, Chris; Shaw, Wesley

    2018-02-01

    Hyper-resolution datasets for urban flooding are rare. This problem prevents detailed flooding risk analysis, urban flooding control, and the validation of hyper-resolution numerical models. We employed social media and crowdsourcing data to address this issue. Natural Language Processing and Computer Vision techniques are applied to the data collected from Twitter and MyCoast (a crowdsourcing app). We found these big data based flood monitoring approaches can complement the existing means of flood data collection. The extracted information is validated against precipitation data and road closure reports to examine the data quality. The two data collection approaches are compared and the two data mining methods are discussed. A series of suggestions is given to improve the data collection strategy.

  5. Super-resolution imaging based on the temperature-dependent electron-phonon collision frequency effect of metal thin films

    Science.gov (United States)

    Ding, Chenliang; Wei, Jingsong; Xiao, Mufei

    2018-05-01

    We herein propose a far-field super-resolution imaging method that uses metal thin films and is based on the temperature-dependent electron-phonon collision frequency effect. In the proposed method, neither fluorescence labeling nor any special properties are required for the samples. The 100 nm lands and 200 nm grooves on Blu-ray disk substrates were clearly resolved and imaged with a laser scanning microscope at a wavelength of 405 nm. The spot size was approximately 0.80 μm, and an imaging resolution of 1/8 of the laser spot size was obtained experimentally. This work can be applied to the far-field super-resolution imaging of samples with neither fluorescence labeling nor any special properties.

  6. NOMINAL VALUES FOR SELECTED SOLAR AND PLANETARY QUANTITIES: IAU 2015 RESOLUTION B3

    Energy Technology Data Exchange (ETDEWEB)

    Prša, Andrej [Villanova University, Department of Astrophysics and Planetary Science, 800 Lancaster Ave., Villanova, PA 19085 (United States); Harmanec, Petr [Astronomical Institute of the Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, CZ-180 00 Praha 8 (Czech Republic); Torres, Guillermo [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Mamajek, Eric [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627-0171 (United States); Asplund, Martin [Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611 (Australia); Capitaine, Nicole [SYRTE, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Universités, UPMC, LNE, 61 avenue de lObservatoire, F-75014 Paris (France); Christensen-Dalsgaard, Jørgen [Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C (Denmark); Depagne, Éric [South African Astronomical Observatory, P.O. Box 9 Observatory, Cape Town (South Africa); Haberreiter, Margit [Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center, Dorfstrasse 33, Davos (Switzerland); Hekker, Saskia [Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Hilton, James [US Naval Observatory, 3450 Massachusetts Ave. NW, Washington, DC 20392-5420 (United States); Kopp, Greg [Laboratory for Atmospheric and Space Physics, 1234 Innovation Drive, Boulder, CO 80303-7814 (United States); and others

    2016-08-01

    In this brief communication we provide the rationale for and the outcome of the International Astronomical Union (IAU) resolution vote at the XXIXth General Assembly in Honolulu, Hawaii, in 2015, on recommended nominal conversion constants for selected solar and planetary properties. The problem addressed by the resolution is a lack of established conversion constants between solar and planetary values and SI units: a missing standard has caused a proliferation of solar values (e.g., solar radius, solar irradiance, solar luminosity, solar effective temperature, and solar mass parameter) in the literature, with cited solar values typically based on best estimates at the time of paper writing. As precision of observations increases, a set of consistent values becomes increasingly important. To address this, an IAU Working Group on Nominal Units for Stellar and Planetary Astronomy formed in 2011, uniting experts from the solar, stellar, planetary, exoplanetary, and fundamental astronomy, as well as from general standards fields to converge on optimal values for nominal conversion constants. The effort resulted in the IAU 2015 Resolution B3, passed at the IAU General Assembly by a large majority. The resolution recommends the use of nominal solar and planetary values, which are by definition exact and are expressed in SI units. These nominal values should be understood as conversion factors only, not as the true solar/planetary properties or current best estimates. Authors and journal editors are urged to join in using the standard values set forth by this resolution in future work and publications to help minimize further confusion.

  7. NOMINAL VALUES FOR SELECTED SOLAR AND PLANETARY QUANTITIES: IAU 2015 RESOLUTION B3

    International Nuclear Information System (INIS)

    Prša, Andrej; Harmanec, Petr; Torres, Guillermo; Mamajek, Eric; Asplund, Martin; Capitaine, Nicole; Christensen-Dalsgaard, Jørgen; Depagne, Éric; Haberreiter, Margit; Hekker, Saskia; Hilton, James; Kopp, Greg

    2016-01-01

    In this brief communication we provide the rationale for and the outcome of the International Astronomical Union (IAU) resolution vote at the XXIXth General Assembly in Honolulu, Hawaii, in 2015, on recommended nominal conversion constants for selected solar and planetary properties. The problem addressed by the resolution is a lack of established conversion constants between solar and planetary values and SI units: a missing standard has caused a proliferation of solar values (e.g., solar radius, solar irradiance, solar luminosity, solar effective temperature, and solar mass parameter) in the literature, with cited solar values typically based on best estimates at the time of paper writing. As precision of observations increases, a set of consistent values becomes increasingly important. To address this, an IAU Working Group on Nominal Units for Stellar and Planetary Astronomy formed in 2011, uniting experts from the solar, stellar, planetary, exoplanetary, and fundamental astronomy, as well as from general standards fields to converge on optimal values for nominal conversion constants. The effort resulted in the IAU 2015 Resolution B3, passed at the IAU General Assembly by a large majority. The resolution recommends the use of nominal solar and planetary values, which are by definition exact and are expressed in SI units. These nominal values should be understood as conversion factors only, not as the true solar/planetary properties or current best estimates. Authors and journal editors are urged to join in using the standard values set forth by this resolution in future work and publications to help minimize further confusion.

  8. Comparison of super-resolution benefits for downsampled images and real low-resolution data

    NARCIS (Netherlands)

    Peng, Y.; Spreeuwers, Lieuwe Jan; Gökberk, B.; Veldhuis, Raymond N.J.

    2013-01-01

    Recently, more and more researchers are exploring the benefits of super-resolution methods for low-resolution face recognition. However, the results presented are often obtained on downsampled high-resolution face images. Because downsampled images are different from real images taken at low resolution,

  9. Sensitivity of GRETINA position resolution to hole mobility

    Energy Technology Data Exchange (ETDEWEB)

    Prasher, V.S. [Department of Physics, University of Massachusetts Lowell, Lowell, MA 01854 (United States); Cromaz, M. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Merchan, E.; Chowdhury, P. [Department of Physics, University of Massachusetts Lowell, Lowell, MA 01854 (United States); Crawford, H.L. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lister, C.J. [Department of Physics, University of Massachusetts Lowell, Lowell, MA 01854 (United States); Campbell, C.M.; Lee, I.Y.; Macchiavelli, A.O. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Radford, D.C. [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wiens, A. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2017-02-21

    The sensitivity of the position resolution of the gamma-ray tracking array GRETINA to the hole charge-carrier mobility parameter is investigated. The χ² results from a fit of averaged signal (“superpulse”) data exhibit a shallow minimum for hole mobilities 15% lower than the currently adopted values. Calibration data on position resolution is analyzed, together with simulations that isolate the hole mobility dependence of signal decomposition from other effects such as electronics cross-talk. The results effectively exclude hole mobility as a dominant parameter for improving the position resolution for reconstruction of gamma-ray interaction points in GRETINA.

  10. A novel Fast Gas Chromatography based technique for higher time resolution measurements of speciated monoterpenes in air

    Science.gov (United States)

    Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.

    2013-12-01

    Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C10-C15 BVOC composition of single plant emissions may be characterised within a ~ 14 min analysis time. Moreover, in situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an ~ 11 min chromatographic separation time (increasing to ~ 19 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). This corresponds to a two- to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC linalool in ambient air. During this field deployment within a suburban forest ~ 30 km west of central Tokyo, Japan, the

  11. Coastal and Inland Water Applications of High Resolution Optical Satellite Data from Landsat-8 and Sentinel-2

    Science.gov (United States)

    Vanhellemont, Q.

    2016-02-01

    Since the launch of Landsat-8 (L8) in 2013, a joint NASA/USGS programme, new applications of high resolution imagery for coastal and inland waters have become apparent. The optical imaging instrument on L8, the Operational Land Imager (OLI), is much improved compared to its predecessors on L5 and L7, especially with regards to SNR and digitization, and is therefore well suited for retrieving water reflectances and derived parameters such as turbidity and suspended sediment concentration. In June 2015, the European Space Agency (ESA) successfully launched a similar instrument, the MultiSpectral Imager (MSI), on board Sentinel-2A (S2A). Imagery from both L8 and S2A is free of charge and publicly available (S2A starting at the end of 2015). Atmospheric correction schemes and processing software are under development in the EC-FP7 HIGHROC project. The spatial resolution of these instruments (10-60 m) is a great improvement over typical moderate resolution ocean colour sensors such as MODIS and MERIS (0.25 - 1 km). At higher resolution, many more lakes, rivers, ports and estuaries are spatially resolved, and can thus now be studied using satellite data, unlocking potential for mandatory monitoring e.g. under European Directives such as the Marine Strategy Framework Directive and the Water Framework Directive. We present new applications of these high resolution data, such as monitoring of offshore constructions, wind farms, sediment transport, dredging and dumping, shipping and fishing activities. The spatial variability at sub moderate resolution (0.25 - 1 km) scales can be assessed, as well as the impact of sub grid scale variability (including ships and platforms used for validation) on the moderate pixel retrieval. While the daily revisit time of the moderate resolution sensors is vastly superior to that of the high resolution satellites (16 and 10 days at the equator for L8 and S2A, respectively), the longer revisit times can be partially mitigated by combining data

  12. Evaluation of downscaled, gridded climate data for the conterminous United States

    Science.gov (United States)

    Robert J. Behnke,; Stephen J. Vavrus,; Andrew Allstadt,; Thomas P. Albright,; Thogmartin, Wayne E.; Volker C. Radeloff,

    2016-01-01

    Weather and climate affect many ecological processes, making spatially continuous yet fine-resolution weather data desirable for ecological research and predictions. Numerous downscaled weather data sets exist, but little attempt has been made to evaluate them systematically. Here we address this shortcoming by focusing on four major questions: (1) How accurate are downscaled, gridded climate data sets in terms of temperature and precipitation estimates?, (2) Are there significant regional differences in accuracy among data sets?, (3) How accurate are their mean values compared with extremes?, and (4) Does their accuracy depend on spatial resolution? We compared eight widely used downscaled data sets that provide gridded daily weather data for recent decades across the United States. We found considerable differences among data sets and between downscaled and weather station data. Temperature is represented more accurately than precipitation, and climate averages are more accurate than weather extremes. The data set exhibiting the best agreement with station data varies among ecoregions. Surprisingly, the accuracy of the data sets does not depend on spatial resolution. Although some inherent differences among data sets and weather station data are to be expected, our findings highlight how much different interpolation methods affect downscaled weather data, even for local comparisons with nearby weather stations located inside a grid cell. More broadly, our results highlight the need for careful consideration among different available data sets in terms of which variables they describe best, where they perform best, and their resolution, when selecting a downscaled weather data set for a given ecological application.

  13. Personality Traits and Psychopathology in Nicotine and Opiate Dependents Using the Gateway Drug Theory

    Directory of Open Access Journals (Sweden)

    Bahareh Amirabadi

    2015-03-01

    Full Text Available Objectives: According to the gateway drug theory, tobacco use is a predisposing factor for future substance abuse. This study was conducted to compare nicotine and opiate dependents to identify the differences between their personality traits and psychopathology that make them turn to other substances after cigarette smoking. Methods: A causal-comparative study was conducted. Three groups were randomly selected: nicotine dependents, opiate dependents and ordinary individuals (non-dependent population). Cloninger’s Temperament and Character Inventory-Revised, the Fagerstrom Test for Nicotine Dependence, Maudsley Addiction Profile, the Beck Depression Inventory, and Beck Anxiety Inventory were used to collect data. Analysis of variance was used to analyze data. Results: Opiate dependents had higher ‘novelty seeking’ and lower ‘cooperativeness’ scores as compared to the other two groups. They also had higher anxiety and depression scores than the other two groups. Discussion: Higher ‘novelty seeking’ and lower ‘cooperativeness’ scores are important personality traits predicting

  14. Least-squares reverse time migration of marine data with frequency-selection encoding

    KAUST Repository

    Dai, Wei

    2013-06-24

    The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique nonoverlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Because the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is comparable to conventional RTM for the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved by suppressing migration artifacts, balancing reflector amplitudes, and enhancing the spatial resolution. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM. © 2013 Society of Exploration Geophysicists.
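
    As an illustration only (not the authors' code), the sketch below shows the core idea of frequency-selection encoding in a few lines of NumPy: each shot is restricted to its own disjoint set of frequency bins before blending, so the supergather can later be separated again by re-applying the same masks. The band limits, array shapes and function names are illustrative assumptions.

```python
import numpy as np

def encode_shots(shots, dt, bands):
    """Sum shots into one supergather after keeping, for each shot,
    only the frequency bins inside its assigned band (delta functions in frequency)."""
    nt = shots.shape[-1]
    freqs = np.fft.rfftfreq(nt, d=dt)
    supergather = np.zeros_like(shots[0])
    masks = []
    for shot, (fmin, fmax) in zip(shots, bands):
        spec = np.fft.rfft(shot, axis=-1)
        mask = (freqs >= fmin) & (freqs < fmax)   # bands must not overlap
        spec[..., ~mask] = 0.0
        masks.append(mask)
        supergather += np.fft.irfft(spec, n=nt, axis=-1)
    return supergather, masks

def decode_shot(supergather, mask):
    """Recover the contribution of one shot by re-applying its frequency mask."""
    spec = np.fft.rfft(supergather, axis=-1)
    spec[..., ~mask] = 0.0
    return np.fft.irfft(spec, n=supergather.shape[-1], axis=-1)

# toy example: 4 shots, 8 receivers, 512 time samples
rng = np.random.default_rng(0)
shots = rng.standard_normal((4, 8, 512))
bands = [(5, 15), (15, 25), (25, 35), (35, 45)]   # Hz, disjoint
blended, masks = encode_shots(shots, dt=0.004, bands=bands)
shot0 = decode_shot(blended, masks[0])
```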

  15. Wh-filler-gap dependency formation guides reflexive antecedent search

    Directory of Open Access Journals (Sweden)

    Michael eFrazier

    2015-10-01

    Full Text Available Prior studies on online sentence processing have shown that the parser can resolve non-local dependencies rapidly and accurately. This study investigates the interaction between the processing of two such non-local dependencies: wh-filler-gap dependencies (WhFGD) and reflexive-antecedent dependencies. We show that reflexive-antecedent dependency resolution is sensitive to the presence of a WhFGD, and argue that the filler-gap dependency established by WhFGD resolution is selected online as the antecedent of a reflexive dependency. We investigate the processing of constructions like (1), where two NPs might be possible antecedents for the reflexive, namely which cowgirl and Mary. Even though Mary is linearly closer to the reflexive, the only grammatically licit antecedent for the reflexive is the more distant wh-NP, which cowgirl. (1) Which cowgirl did Mary expect to have injured herself due to negligence? Four eye-tracking text-reading experiments were conducted on examples like (1), differing in whether the embedded clause was non-finite (1 and 3) or finite (2 and 4), and in whether the tail of the wh-dependency intervened between the reflexive and its closest overt antecedent (1 and 2) or the wh-dependency was associated with a position earlier in the sentence (3 and 4). The results of Experiments 1 and 2 indicate the parser accesses the result of WhFGD formation during reflexive antecedent search. The resolution of a wh-dependency alters the representation that reflexive antecedent search operates over, allowing the grammatical but linearly distant antecedent to be accessed rapidly. In the absence of a long-distance WhFGD (Exp. 3 and 4), wh-NPs were not found to impact reading times of the reflexive, indicating that the parser's ability to select distant wh-NPs as reflexive antecedents crucially involves syntactic structure.

  16. Enhancing Hi-C data resolution with deep convolutional neural network HiCPlus.

    Science.gov (United States)

    Zhang, Yan; An, Lin; Xu, Jie; Zhang, Bo; Zheng, W Jim; Hu, Ming; Tang, Jijun; Yue, Feng

    2018-02-21

    Although Hi-C technology is one of the most popular tools for studying 3D genome organization, due to sequencing cost, the resolution of most Hi-C datasets is coarse, so they cannot be used to link distal regulatory elements to their target genes. Here we develop HiCPlus, a computational approach based on a deep convolutional neural network, to infer high-resolution Hi-C interaction matrices from low-resolution Hi-C data. We demonstrate that HiCPlus can impute interaction matrices highly similar to the original ones, while only using 1/16 of the original sequencing reads. We show that the models learned from one cell type can be applied to make predictions in other cell or tissue types. Our work not only provides a computational framework to enhance Hi-C data resolution but also reveals features underlying the formation of 3D chromatin interactions.
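
    The paper describes HiCPlus as a deep convolutional network that maps low-resolution Hi-C sub-matrices to high-resolution ones; the PyTorch sketch below is only a generic, SRCNN-style stand-in for that idea, not the published architecture. The patch size, channel counts and the random training patches are assumptions made for the example.

```python
import torch
import torch.nn as nn

class InteractionEnhancer(nn.Module):
    """Toy SRCNN-style network mapping a low-depth Hi-C sub-matrix
    (1 x 40 x 40 patch) to an enhanced matrix of the same size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=1),           nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

model = InteractionEnhancer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# one toy training step on random patches standing in for (downsampled, full-depth) pairs
low_depth = torch.rand(8, 1, 40, 40)
full_depth = torch.rand(8, 1, 40, 40)
optimizer.zero_grad()
loss = loss_fn(model(low_depth), full_depth)
loss.backward()
optimizer.step()
```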

  17. Prospects for higher spatial resolution quantitative X-ray analysis using transition element L-lines

    Science.gov (United States)

    Statham, P.; Holland, J.

    2014-03-01

    Lowering electron beam kV reduces electron scattering and improves spatial resolution of X-ray analysis. However, a previous round robin analysis of steels at 5 - 6 kV using Lα-lines for the first-row transition elements gave poor accuracies. Our experiments on SS63 steel using Lα-lines show similar biases in Cr and Ni that cannot be corrected with changes to self-absorption coefficients or carbon coating. The inaccuracy may be caused by different probabilities for emission and anomalous self-absorption for the Lα-line between specimen and pure element standard. Analysis using Ll(L3-M1)-lines gives more accurate results for SS63 plausibly because the M1-shell is not so vulnerable to the atomic environment as the unfilled M4,5-shell. However, Ll-intensities are very weak and WDS analysis may be impractical for some applications. EDS with large area SDD offers orders of magnitude faster analysis and achieves similar results to WDS analysis with Lα-lines but poorer energy resolution precludes the use of Ll-lines in most situations. EDS analysis of K-lines at low overvoltage is an alternative strategy for improving spatial resolution that could give higher accuracy. The trade-off between low kV versus low overvoltage is explored in terms of sensitivity for element detection for different elements.

  18. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States); Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for some specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea, one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that a statistical measure preserves the variation signal inherent in the data more faithfully, across both multi-dimensional projections and multi-resolution representations, than a methodology based upon averaging.

  19. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

    Full Text Available In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA and tabu search (TS is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy.

  20. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Science.gov (United States)

    Shi, Lei; Wan, Youchuan; Gao, Xianjun

    2018-01-01

    In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
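
    The sketch below illustrates, in NumPy, the general GA-plus-tabu-search pattern the abstract describes: a prematurity index monitors the spread of fitness values, tabu search refines the fitter individuals when convergence looks premature, and heavier mutation is applied to the rest. The fitness function (a Fisher-score filter rather than the classification accuracy used in the paper), the prematurity threshold and all parameter values are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask, X, y):
    """Placeholder fitness: mean Fisher score of the selected features.
    (The paper instead uses the classification accuracy of the candidate subset.)"""
    if mask.sum() == 0:
        return 0.0
    sel = X[:, mask.astype(bool)]
    mu0, mu1 = sel[y == 0].mean(axis=0), sel[y == 1].mean(axis=0)
    var = sel[y == 0].var(axis=0) + sel[y == 1].var(axis=0) + 1e-9
    return float(np.mean((mu0 - mu1) ** 2 / var))

def tabu_search(mask, X, y, iters=15, tenure=5):
    """Bit-flip local search with a short-term tabu list."""
    current, best = mask.copy(), mask.copy()
    best_f, tabu = fitness(best, X, y), []
    for _ in range(iters):
        moves = [i for i in range(len(mask)) if i not in tabu]
        scores = []
        for i in moves:
            trial = current.copy()
            trial[i] ^= 1
            scores.append(fitness(trial, X, y))
        i_best = moves[int(np.argmax(scores))]
        current[i_best] ^= 1
        tabu = (tabu + [i_best])[-tenure:]
        f = fitness(current, X, y)
        if f > best_f:
            best, best_f = current.copy(), f
    return best

def gats(X, y, pop_size=20, generations=30, prematurity=0.02):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        fit = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(fit)[::-1]
        pop, fit = pop[order], fit[order]
        half = pop_size // 2
        # prematurity index: little spread in fitness signals premature convergence
        if fit.std() < prematurity * (abs(fit.mean()) + 1e-9):
            pop[:half] = [tabu_search(ind, X, y) for ind in pop[:half]]  # TS on fitter half
            flip = rng.random(pop[half:].shape) < 0.2                    # heavy mutation on the rest
            pop[half:] ^= flip.astype(pop.dtype)
        else:
            # single-point crossover among the fitter half, plus light mutation
            parents = pop[:half]
            children = parents.copy()
            for c in range(len(children)):
                mate = parents[rng.integers(len(parents))]
                cut = rng.integers(1, n)
                children[c, cut:] = mate[cut:]
            flip = rng.random(children.shape) < 0.02
            children ^= flip.astype(pop.dtype)
            pop = np.vstack([parents, children])
    fit = np.array([fitness(ind, X, y) for ind in pop])
    return pop[int(np.argmax(fit))]

# toy data: 200 samples, 50 features, only the first 5 informative
X = rng.standard_normal((200, 50))
y = rng.integers(0, 2, 200)
X[y == 1, :5] += 1.0
selected_mask = gats(X, y)
```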

  1. Empirically Driven Variable Selection for the Estimation of Causal Effects with Observational Data

    Science.gov (United States)

    Keller, Bryan; Chen, Jianshen

    2016-01-01

    Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…

  2. Higher Education: Reputational Effects, Distorted Signaling and Propitious Selection

    Directory of Open Access Journals (Sweden)

    Elena V. Savitskaya

    2017-03-01

    Full Text Available In this paper the authors attempt to underpin the hypothesis that, under certain conditions, a propitious selection may take place on the higher education market: a phenomenon in which brand universities automatically reproduce their positive reputation without improving the quality of teaching, owing to the influx of talented entrants. The authors apply econometric modelling and regression analysis to a survey of first-year students from Moscow to demonstrate that school graduates with high USE marks do indeed prefer to choose among brand universities; moreover, they value the possibility of obtaining a prestigious diploma even more than that of acquiring a particular profession. However, entrants do not possess full information about the quality of teaching in a particular university. The analysis presented in the paper shows that university rankings do not help to overcome this information asymmetry, since they transmit distorted signals caused by the ranking methodology. The rankings primarily accentuate the academic activity of teachers rather than the educational process and interaction with students. For this reason, higher schools often adopt a strategy of meeting the ranking criteria as closely as possible; they tend to improve precisely these indicators, disregarding the others, in order to become leaders. As a result, brand universities may surpass ordinary universities not because they render educational services of higher quality but because they select the best entrants and benefit from peer effects. These factors allow them to produce excellent graduates, thus maintaining a positive reputation among employers while simultaneously raising their brand value by advancing in the rankings.

  3. Geometrical analysis of woven fabric microstructure based on micron-resolution computed tomography data

    Science.gov (United States)

    Krieger, Helga; Seide, Gunnar; Gries, Thomas; Stapleton, Scott E.

    2018-04-01

    The global mechanical properties of textiles such as elasticity and strength, as well as transport properties such as permeability depend strongly on the microstructure of the textile. Textiles are heterogeneous structures with highly anisotropic material properties, including local fiber orientation and local fiber volume fraction. In this paper, an algorithm is presented to generate a virtual 3D-model of a woven fabric architecture with information about the local fiber orientation and the local fiber volume fraction. The geometric data of the woven fabric impregnated with resin was obtained by micron-resolution computed tomography (μCT). The volumetric μCT-scan was discretized into cells and the microstructure of each cell was analyzed and homogenized. Furthermore, the discretized data was used to calculate the local permeability tensors of each cell. An example application of the analyzed data is the simulation of the resin flow through a woven fabric based on the determined local permeability tensors and on Darcy's law. The presented algorithm is an automated and robust method of going from μCT-scans to structural or flow models.
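
    As a minimal illustration of the flow-simulation step mentioned above, Darcy's law relates the local volumetric flux to the pressure gradient through the homogenised permeability tensor of each cell; the NumPy sketch below uses made-up permeability and viscosity values, not values from the study.

```python
import numpy as np

def darcy_flux(K, grad_p, mu):
    """Darcy's law: volumetric flux q = -(K / mu) . grad(p) for one cell,
    with K the local 3x3 permeability tensor of the homogenised cell."""
    return -(K @ grad_p) / mu

K = np.diag([5e-11, 5e-11, 1e-11])                 # m^2, invented orthotropic permeability
grad_p = np.array([0.0, 0.0, -1e5])                # Pa/m, pressure gradient along thickness
q = darcy_flux(K, grad_p, mu=0.1)                  # resin viscosity in Pa.s
print(q)                                           # flux vector in m/s
```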

  4. Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
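
    A minimal sketch of such a constrained least-squares unmixing is shown below: non-negativity is handled by SciPy's non-negative least squares, and the sum-to-one constraint is enforced softly through an appended, heavily weighted row. The endmember reflectances are invented for the example and are not values from the study.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=1e3):
    """Constrained least-squares unmixing: solve pixel ~ endmembers @ f with
    f >= 0 and sum(f) = 1 enforced softly by a heavily weighted extra row."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)
    return fractions

# invented endmember reflectances for 3 AVHRR channels (columns: vegetation, soil, shade)
E = np.array([[0.05, 0.25, 0.02],    # ch1 (0.58-0.68 um)
              [0.45, 0.30, 0.03],    # ch2 (0.725-1.1 um)
              [0.10, 0.20, 0.01]])   # reflective part of ch3 (3.55-3.95 um)
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
print(unmix(pixel, E))               # ~[0.6, 0.3, 0.1]
```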

  5. Distributed Modeling with Parflow using High Resolution LIDAR Data

    Science.gov (United States)

    Barnes, M.; Welty, C.; Miller, A. J.

    2012-12-01

    Urban landscapes provide a challenging domain for the application of distributed surface-subsurface hydrologic models. Engineered water infrastructure and altered topography influence surface and subsurface flow paths, yet these effects are difficult to quantify. In this work, a parallel, distributed watershed model (ParFlow) is used to simulate urban watersheds using spatial data at the meter and sub-meter scale. An approach using GRASS GIS (Geographic Resources Analysis Support System) is presented that incorporates these data to construct inputs for the ParFlow simulation. LIDAR topography provides the basis for the fully coupled overland flow simulation. Methods to address real discontinuities in the urban land-surface for use with the grid-based kinematic wave approximation used in ParFlow are presented. The spatial distribution of impervious surface is delineated accurately from high-resolution land cover data; hydrogeological properties are specified from literature values. An application is presented for part of the Dead Run subwatershed of the Gwynns Falls in Baltimore County, MD. The domain is approximately 3 square kilometers, and includes a highly impacted urban stream, a major freeway, and heterogeneous urban development represented at a 10-m horizontal resolution and 1-m vertical resolution. This resolution captures urban features such as building footprints and highways at an appropriate scale. The Dead Run domain provides an effective test case for ParFlow application at the fine scale in an urban environment. Preliminary model runs employ a homogeneous subsurface domain with no-flow boundaries. Initial results reflect the highly articulated topography of the road network and the combined influence of surface runoff from impervious surfaces and subsurface flux toward the channel network. Subsequent model runs will include comparisons of the coupled surface-subsurface response of alternative versions of the Dead Run domain with and without impervious

  6. Dose-dependent high-resolution electron ptychography

    International Nuclear Information System (INIS)

    D'Alfonso, A. J.; Allen, L. J.; Sawada, H.; Kirkland, A. I.

    2016-01-01

    Recent reports of electron ptychography at atomic resolution have ushered in a new era of coherent diffractive imaging in the context of electron microscopy. We report and discuss electron ptychography under variable electron dose conditions, exploring the prospects of an approach which has considerable potential for imaging where low dose is needed

  7. NCAR High-resolution Land Data Assimilation System and Its Recent Applications

    Science.gov (United States)

    Chen, F.; Manning, K.; Barlage, M.; Gochis, D.; Tewari, M.

    2008-05-01

    A High-Resolution Land Data Assimilation System (HRLDAS) has been developed at NCAR to meet the need for high-resolution initial conditions of land state (soil moisture and temperature) by today's numerical weather prediction models coupled to a land surface model such as the WRF/Noah coupled modeling system. Intended for conterminous US application, HRLDAS uses observed hourly 4-km national precipitation analysis and satellite-derived surface-solar-downward radiation to drive, in uncoupled mode, the Noah land surface model to simulate long-term evolution of soil state. The advantage of HRLDAS is its use of 1-km resolution land-use and soil texture maps and 4-km rainfall data. As a result, it is able to capture fine-scale heterogeneity at the surface and in the soil. The ultimate goal of HRLDAS development is to characterize soil moisture/temperature and vegetation variability at small scales (~4km) over large areas to provide improved initial land and vegetation conditions for the WRF/Noah coupled model. Hence, HRLDAS is configured after the WRF/Noah coupled model configuration to ensure the consistency in model resolution, physical configuration (e.g., terrain height), soil model, and parameters between the uncoupled soil initialization system and its coupled forecast counterpart. We will discuss various characteristics of HRLDAS, including its spin-up and sensitivity to errors in forcing data. We will describe recent enhancement in terms of hydrological modeling and the use of remote sensing data. We will discuss recent applications of HRLDAS for flood forecast, agriculture, and arctic land system.

  8. The Role of Resolution in the Estimation of Fractal Dimension Maps From SAR Data

    Directory of Open Access Journals (Sweden)

    Gerardo Di Martino

    2017-12-01

    Full Text Available This work is aimed at investigating the role of resolution in fractal dimension map estimation, analyzing the role of the different surface spatial scales involved in the considered estimation process. The study is performed using a data set of actual Cosmo/SkyMed Synthetic Aperture Radar (SAR images relevant to two different areas, the region of Bidi in Burkina Faso and the city of Naples in Italy, acquired in stripmap and enhanced spotlight modes. The behavior of fractal dimension maps in the presence of areas with distinctive characteristics from the viewpoint of land-cover and surface features is discussed. Significant differences among the estimated maps are obtained in the presence of fine textural details, which significantly affect the fractal dimension estimation for the higher resolution spotlight images. The obtained results show that if we are interested in obtaining a reliable estimate of the fractal dimension of the observed natural scene, stripmap images should be chosen in view of both economic and computational considerations. In turn, the combination of fractal dimension maps obtained from stripmap and spotlight images can be used to identify areas on the scene presenting non-fractal behavior (e.g., urban areas. Along this guideline, a simple example of stripmap-spotlight data fusion is also presented.

  9. Automated Generation of the Alaska Coastline Using High-Resolution Satellite Imagery

    Science.gov (United States)

    Roth, G.; Porter, C. C.; Cloutier, M. D.; Clementz, M. E.; Reim, C.; Morin, P. J.

    2015-12-01

    Previous campaigns to map Alaska's coast at high resolution have relied on airborne, marine, or ground-based surveying and manual digitization. The coarse temporal resolution, inability to scale geographically, and high cost of field data acquisition in these campaigns are inadequate for the scale and speed of recent coastal change in Alaska. Here, we leverage the Polar Geospatial Center (PGC) archive of DigitalGlobe, Inc. satellite imagery to produce a state-wide coastline at 2 meter resolution. We first select multispectral imagery based on time and quality criteria. We then extract the near-infrared (NIR) band from each processed image, and classify each pixel as water or land with a pre-determined NIR threshold value. Processing continues with vectorizing the water-land boundary, removing extraneous data, and attaching metadata. Final coastline raster and vector products maintain the original accuracy of the orthorectified satellite data, which is often within the local tidal range. The repeat frequency of coastline production can range from 1 month to 3 years, depending on factors such as satellite capacity, cloud cover, and floating ice. Shadows from trees or structures complicate the output and merit further data cleaning. The PGC's imagery archive, unique expertise, and computing resources enabled us to map the Alaskan coastline in a few months. The DigitalGlobe archive allows us to update this coastline as new imagery is acquired, and facilitates baseline data for studies of coastal change and improvement of topographic datasets. Our results are not simply a one-time coastline, but rather a system for producing multi-temporal, automated coastlines. Workflows and tools produced with this project can be freely distributed and utilized globally. Researchers and government agencies must now consider how they can incorporate and quality-control this high-frequency, high-resolution data to meet their mapping standards and research objectives.
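
    A toy sketch of the water/land classification and vectorisation steps is given below, assuming a calibrated NIR band and a pre-determined threshold; the scikit-image contour call stands in for the production vectorisation, and the gradient "image" is synthetic.

```python
import numpy as np
from skimage import measure

def extract_coastline(nir, threshold, min_length=50):
    """Classify each pixel as water (NIR below threshold) or land, then vectorise
    the water-land boundary as sub-pixel contour lines."""
    water = nir < threshold                                 # water absorbs strongly in the NIR
    contours = measure.find_contours(water.astype(float), 0.5)
    return [c for c in contours if len(c) >= min_length]    # drop tiny fragments

# synthetic "image": a smooth gradient standing in for a calibrated NIR band
y, x = np.mgrid[0:200, 0:200]
nir = 0.002 * x + 0.0005 * y
coastline = extract_coastline(nir, threshold=0.2)
```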

  10. Higher aluminum concentration in Alzheimer's disease after Box-Cox data transformation.

    Science.gov (United States)

    Rusina, Robert; Matěj, Radoslav; Kašparová, Lucie; Kukal, Jaromír; Urban, Pavel

    2011-11-01

    Evidence regarding the role of mercury and aluminum in the pathogenesis of Alzheimer's disease (AD) remains controversial. The aims of our project were to investigate the content of the selected metals in brain tissue samples and the use of a specific mathematical transform to eliminate the disadvantage of a strong positive skew in the original data distribution. In this study, we used atomic absorption spectrophotometry to determine mercury and aluminum concentrations in the hippocampus and associative visual cortex of 29 neuropathologically confirmed AD and 27 age-matched controls. The Box-Cox data transformation was used for statistical evaluation. AD brains had higher mean aluminum concentrations in the hippocampus than controls (0.357 vs. 0.090 μg/g; P = 0.039) after data transformation. Results for mercury were not significant. Original data regarding microelement concentrations are heavily skewed and do not pass the normality test in general. A Box-Cox transformation can eliminate this disadvantage and allow parametric testing.
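
    For illustration, the snippet below applies a Box-Cox transform with SciPy to synthetic, right-skewed concentration data before a parametric comparison; the surrogate values and the choice to estimate a single lambda on the pooled data are assumptions of the example, not details taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# strongly right-skewed surrogate concentrations (ug/g); not the study's data
ad_al = rng.lognormal(mean=-1.2, sigma=1.0, size=29)
ctrl_al = rng.lognormal(mean=-2.2, sigma=1.0, size=27)

# a single Box-Cox lambda estimated on the pooled data, then applied to both groups
pooled = np.concatenate([ad_al, ctrl_al])
_, lam = stats.boxcox(pooled)
ad_t = stats.boxcox(ad_al, lmbda=lam)
ctrl_t = stats.boxcox(ctrl_al, lmbda=lam)

# parametric testing is defensible only once the transform restores approximate normality
print(stats.shapiro(ad_t).pvalue, stats.shapiro(ctrl_t).pvalue)
print(stats.ttest_ind(ad_t, ctrl_t))
```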

  11. Just in Time Research: Data Breaches in Higher Education

    Science.gov (United States)

    Grama, Joanna

    2014-01-01

    This "Just in Time" research is in response to recent discussions on the EDUCAUSE Higher Education Information Security Council (HEISC) discussion list about data breaches in higher education. Using data from the Privacy Rights Clearinghouse, this research analyzes data breaches attributed to higher education. The results from this…

  12. Merging thermal and microwave satellite observations for a high-resolution soil moisture data product

    Science.gov (United States)

    Many societal applications of soil moisture data products require high spatial resolution and numerical accuracy. Current thermal geostationary satellite sensors (GOES Imager and GOES-R ABI) could produce 2-16km resolution soil moisture proxy data. Passive microwave satellite radiometers (e.g. AMSR...

  13. Performance of a First-Level Muon Trigger with High Momentum Resolution Based on the ATLAS MDT Chambers for HL-LHC

    CERN Document Server

    Gadow, P.; Kortner, S.; Kroha, H.; Müller, F.; Richter, R.

    2016-01-01

    Highly selective first-level triggers are essential to exploit the full physics potential of the ATLAS experiment at High-Luminosity LHC (HL-LHC). The concept for a new muon trigger stage using the precision monitored drift tube (MDT) chambers to significantly improve the selectivity of the first-level muon trigger is presented. It is based on fast track reconstruction in all three layers of the existing MDT chambers, made possible by an extension of the first-level trigger latency to six microseconds and a new MDT read-out electronics required for the higher overall trigger rates at the HL-LHC. Data from $pp$-collisions at $\sqrt{s} = 8\,\mathrm{TeV}$ are used to study the minimal muon transverse momentum resolution that can be obtained using the MDT precision chambers, and to estimate the resolution and efficiency of the MDT-based trigger. A resolution of better than $4.1\%$ is found in all sectors under study. With this resolution, a first-level trigger with a threshold of $18\,\mathrm{GeV}$ becomes fully e...

  14. Sensitivity of drainage morphometry based hydrological response (GIUH) of a river basin to the spatial resolution of DEM data

    Science.gov (United States)

    Sahoo, Ramendra; Jain, Vikrant

    2018-02-01

    Drainage network pattern and its associated morphometric ratios are some of the important planform attributes of a drainage basin. Extraction of these attributes for any basin is usually done by spatial analysis of the elevation data of that basin. These planform attributes are further used as input data for studying numerous process-response interactions inside the physical premise of the basin. One important use of the morphometric ratios is in the derivation of the hydrologic response of a basin using the GIUH concept. Hence, the accuracy of the basin hydrological response to any storm event depends upon the accuracy with which the morphometric ratios can be estimated. This, in turn, is affected by the spatial resolution of the source data, i.e. the digital elevation model (DEM). We have estimated the sensitivity of the morphometric ratios and the GIUH-derived hydrograph parameters to the resolution of the source data using a 30 meter and a 90 meter DEM. The analysis has been carried out for 50 drainage basins in a mountainous catchment. A simple and comprehensive algorithm has been developed for estimation of the morphometric indices from a stream network. We have calculated all the morphometric parameters and the hydrograph parameters for each of these basins extracted from the two DEMs with different spatial resolutions. A paired t-test and a Sign test were used for the comparison. Our results did not show any statistically significant difference for any of the parameters calculated from the two source datasets. Along with the comparative study, a first-hand empirical analysis of the frequency distribution of the morphometric and hydrologic response parameters is also presented. Further, a comparison with other hydrological models suggests that the planform-morphometry-based GIUH model is more consistent under resolution variability than topography-based hydrological models.

  15. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    Science.gov (United States)

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical
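
    As a concrete illustration of the parameters discussed (embedding dimension m, threshold r as a multiple of the standard deviation, data length N), the snippet below gives one common formulation of sample entropy in NumPy; it is a generic implementation, not the code used in the study.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r, N) with the threshold r expressed as a
    multiple of the series' standard deviation (here r = r_factor * sigma)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std(ddof=0)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)        # template matches of length m
    a = count_matches(m + 1)    # template matches of length m + 1
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

# an N of at least ~200 points is recommended for r = 0.2 * sigma
rr_intervals = np.random.default_rng(7).normal(0.8, 0.05, 300)   # surrogate RR series (s)
print(sample_entropy(rr_intervals, m=2, r_factor=0.2))
```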

  16. Multi-task feature selection in microarray data by binary integer programming.

    Science.gov (United States)

    Lan, Liang; Vucetic, Slobodan

    2013-12-20

    A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
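
    The sketch below illustrates the general pattern of such a relaxation, though not the paper's exact formulation or its low-rank approximation: a relevance-minus-redundancy quadratic objective is minimised over the box [0, 1] with L-BFGS-B, and the relaxed weights are then rounded by keeping the k largest. The relevance/redundancy definitions and all parameter values are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

def relaxed_feature_selection(X, y, k):
    """Generic relaxation of a quadratic-programming feature filter:
    maximise relevance (|correlation with the label|) minus redundancy
    (pairwise feature correlation), with the binary indicator relaxed to [0, 1]."""
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)
    ys = (y - y.mean()) / (y.std() + 1e-12)
    relevance = np.abs(Xs.T @ ys) / len(y)                # linear term c
    redundancy = np.abs(np.corrcoef(Xs, rowvar=False))    # quadratic term Q
    np.fill_diagonal(redundancy, 0.0)

    def objective(w):
        return -relevance @ w + 0.5 * w @ redundancy @ w

    def grad(w):
        return -relevance + redundancy @ w

    n = X.shape[1]
    res = minimize(objective, x0=np.full(n, k / n), jac=grad,
                   bounds=[(0.0, 1.0)] * n, method="L-BFGS-B")
    return np.argsort(res.x)[::-1][:k]                    # round: keep the k largest weights

# toy microarray-like data: 60 samples, 500 features, first 10 informative
rng = np.random.default_rng(3)
X = rng.standard_normal((60, 500))
y = rng.integers(0, 2, 60).astype(float)
X[:, :10] += 2.0 * y[:, None]
print(relaxed_feature_selection(X, y, k=10))
```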

  17. Explicit higher order symplectic integrator for s-dependent magnetic field

    International Nuclear Information System (INIS)

    Wu, Y.; Forest, E.; Robin, D.S.

    2001-01-01

    We derive second and higher order explicit symplectic integrators for the charged particle motion in an s-dependent magnetic field with the paraxial approximation. The Hamiltonian of such a system takes the form $H = \sum_k \left(p_k - a_k(\vec{q}, s)\right)^2 + V(\vec{q}, s)$. This work solves a long-standing problem for modeling s-dependent magnetic elements. Important applications of this work include the studies of the charged particle dynamics in a storage ring with strong field wigglers, arbitrarily polarized insertion devices, and super-conducting magnets with strong fringe fields. Consequently, this work will have a significant impact on the optimal use of the above magnetic devices in the light source rings as well as in next generation linear collider damping rings.
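
    The paper's splittings handle the full $(p_k - a_k(\vec{q}, s))$ coupling; as a much simpler illustration of what "second-order explicit symplectic" means, the sketch below implements the standard Strang-split (leapfrog) step for a separable Hamiltonian $H = p^2/2 + V(q)$, with a made-up quadratic focusing potential and s playing the role of the independent variable.

```python
import numpy as np

def leapfrog(q, p, grad_V, ds, n_steps):
    """Second-order explicit symplectic (Strang-split) integrator for the
    separable Hamiltonian H = p^2/2 + V(q)."""
    for _ in range(n_steps):
        p = p - 0.5 * ds * grad_V(q)   # half kick
        q = q + ds * p                 # full drift
        p = p - 0.5 * ds * grad_V(q)   # half kick
    return q, p

# toy focusing channel: V(q) = k q^2 / 2  ->  grad_V(q) = k q
k = 2.0
q, p = np.array([1e-3, 0.0]), np.array([0.0, 1e-3])
q, p = leapfrog(q, p, lambda q: k * q, ds=0.01, n_steps=1000)
# the energy error stays bounded at O(ds^2) over long integrations, the hallmark of symplecticity
print(0.5 * p @ p + 0.5 * k * q @ q)
```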

  18. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) simultaneously estimates land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forested trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under current climate. According to the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the modelled actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  19. Example-Based Super-Resolution Fluorescence Microscopy.

    Science.gov (United States)

    Jia, Shu; Han, Boran; Kutz, J Nathan

    2018-04-23

    Capturing biological dynamics with high spatiotemporal resolution demands advances in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution surpassing the diffraction limit to resolve near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques are still fundamentally limited by the trade-off associated with the longer image acquisition time needed to achieve higher spatial resolution. Here, we demonstrated an example-based, computational method that aims to obtain super-resolution images using conventional imaging without increasing the imaging time. With a low-resolution image input, the method provides an estimate of its super-resolution image based on an example database that contains super- and low-resolution image pairs of biological structures of interest. The computational imaging of cellular microtubules agrees approximately with the experimental super-resolution STORM results. This new approach may offer potential improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provide a new path for large-data aided biomedical imaging.
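
    A toy version of the example-based idea is sketched below: every low-resolution patch of the input is replaced by the high-resolution patch whose low-resolution counterpart in an example database is closest, and overlaps are averaged. The patch size, the brute-force nearest-neighbour search and the crude blur used to build the database are assumptions for illustration; the published method is considerably more sophisticated.

```python
import numpy as np

def patches(img, size=5):
    """All overlapping size x size patches of an image, flattened."""
    h, w = img.shape
    out = [img[i:i + size, j:j + size].ravel()
           for i in range(h - size + 1) for j in range(w - size + 1)]
    return np.array(out)

def example_based_sr(low, pairs_low, pairs_high, size=5):
    """Replace each low-resolution patch by the high-resolution patch whose
    low-resolution counterpart in the example database is most similar,
    averaging the overlapping contributions."""
    h, w = low.shape
    recon = np.zeros((h, w))
    weight = np.zeros((h, w))
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            q = low[i:i + size, j:j + size].ravel()
            idx = np.argmin(((pairs_low - q) ** 2).sum(axis=1))   # nearest example
            recon[i:i + size, j:j + size] += pairs_high[idx].reshape(size, size)
            weight[i:i + size, j:j + size] += 1.0
    return recon / weight

# toy database built from one "training" image and its blurred version
rng = np.random.default_rng(5)
hi_train = rng.random((40, 40))
lo_train = (hi_train + np.roll(hi_train, 1, 0) + np.roll(hi_train, 1, 1)) / 3.0  # crude blur
db_low, db_high = patches(lo_train), patches(hi_train)
estimate = example_based_sr(rng.random((30, 30)), db_low, db_high)
```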

  20. Comparative analyses of basal rate of metabolism in mammals: data selection does matter.

    Science.gov (United States)

    Genoud, Michel; Isler, Karin; Martin, Robert D

    2018-02-01

    Basal rate of metabolism (BMR) is a physiological parameter that should be measured under strictly defined experimental conditions. In comparative analyses among mammals BMR is widely used as an index of the intensity of the metabolic machinery or as a proxy for energy expenditure. Many databases with BMR values for mammals are available, but the criteria used to select metabolic data as BMR estimates have often varied and the potential effect of this variability has rarely been questioned. We provide a new, expanded BMR database reflecting compliance with standard criteria (resting, postabsorptive state; thermal neutrality; adult, non-reproductive status for females) and examine potential effects of differential selectivity on the results of comparative analyses. The database includes 1739 different entries for 817 species of mammals, compiled from the original sources. It provides information permitting assessment of the validity of each estimate and presents the value closest to a proper BMR for each entry. Using different selection criteria, several alternative data sets were extracted and used in comparative analyses of (i) the scaling of BMR to body mass and (ii) the relationship between brain mass and BMR. It was expected that results would be especially dependent on selection criteria with small sample sizes and with relatively weak relationships. Phylogenetically informed regression (phylogenetic generalized least squares, PGLS) was applied to the alternative data sets for several different clades (Mammalia, Eutheria, Metatheria, or individual orders). For Mammalia, a 'subsampling procedure' was also applied, in which random subsamples of different sample sizes were taken from each original data set and successively analysed. In each case, two data sets with identical sample size and species, but comprising BMR data with different degrees of reliability, were compared. Selection criteria had minor effects on scaling equations computed for large clades

  1. sTools - a data reduction pipeline for the GREGOR Fabry-Pérot Interferometer and the High-resolution Fast Imager at the GREGOR solar telescope

    Science.gov (United States)

    Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.

    2017-10-01

    A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and since 2016 with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools", based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.

  2. Super-resolution fluorescence imaging of nanoimprinted polymer patterns by selective fluorophore adsorption combined with redox switching

    KAUST Repository

    Yabiku, Y.

    2013-10-22

    We applied super-resolution fluorescence imaging based on selective adsorption and redox switching of fluorescent dye molecules for studying polymer nanostructures. We demonstrate that nano-scale structures of polymer thin films can be visualized with an image resolution better than 80 nm. The method was applied to image 100 nm-wide polymer nanopatterns fabricated by thermal nanoimprinting. The results point to the applicability of the method for evaluating residual polymer thin films and dewetting defects of the polymer resist patterns, which are important for the quality control of the fine nanoimprinted patterns. © 2013 Author(s).

  3. Super-resolution fluorescence imaging of nanoimprinted polymer patterns by selective fluorophore adsorption combined with redox switching

    Directory of Open Access Journals (Sweden)

    Yu Yabiku

    2013-10-01

    Full Text Available We applied super-resolution fluorescence imaging based on selective adsorption and redox switching of fluorescent dye molecules for studying polymer nanostructures. We demonstrate that nano-scale structures of polymer thin films can be visualized with an image resolution better than 80 nm. The method was applied to image 100 nm-wide polymer nanopatterns fabricated by thermal nanoimprinting. The results point to the applicability of the method for evaluating residual polymer thin films and dewetting defects of the polymer resist patterns, which are important for the quality control of the fine nanoimprinted patterns.

  4. Detecting selection needs comparative data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Hubisz, Melissa J.

    2005-01-01

    Positive selection at the molecular level is usually indicated by an increase in the ratio of non-synonymous to synonymous substitutions (dN/dS) in comparative data. However, Plotkin et al. [1] describe a new method for detecting positive selection based on a single nucleotide sequence. We show here that this method is particularly sensitive to assumptions regarding the underlying mutational processes and does not provide a reliable way to identify positive selection.

  5. Characterization and consequences of intermittent sediment oxygenation by macrofauna: interpretation of high-resolution data sets

    Science.gov (United States)

    Meile, C. D.; Dwyer, I.; Zhu, Q.; Polerecky, L.; Volkenborn, N.

    2017-12-01

    Mineralization of organic matter in marine sediments leads to the depletion of oxygen, while activities of infauna introduce oxygenated seawater to the subsurface. In permeable sediments solutes can be transported from animals and their burrows into the surrounding sediment through advection over several centimeters. The intermittency of pumping leads to a spatially heterogeneous distribution of oxidants, with the temporal dynamics depending on sediment reactivity and activity patterns of the macrofauna. Here, we present results from a series of experiments in which these dynamics are studied at high spatial and temporal resolution using planar optodes. From O2, pH and pCO2 optode data, we quantify rates of O2 consumption and dissolved inorganic carbon production, as well as alkalinity dynamics, with millimeter-scale resolution. Simulating intermittent irrigation by imposed pumping patterns in thin aquaria, we derive porewater flow patterns, which, together with the production and consumption rates, cause the chemical distributions and the establishment of reaction fronts. Our analysis thus establishes a quantitative connection between the locally dynamic redox conditions relevant for biogeochemical transformations and macroscopic observations commonly made with sediment cores.

  6. Arg279 is the key regulator of coenzyme selectivity in the flavin-dependent ornithine monooxygenase SidA.

    Science.gov (United States)

    Robinson, Reeder; Franceschini, Stefano; Fedkenheuer, Michael; Rodriguez, Pedro J; Ellerbrock, Jacob; Romero, Elvira; Echandi, Maria Paulina; Martin Del Campo, Julia S; Sobrado, Pablo

    2014-04-01

    Siderophore A (SidA) is a flavin-dependent monooxygenase that catalyzes the NAD(P)H- and oxygen-dependent hydroxylation of ornithine in the biosynthesis of siderophores in Aspergillus fumigatus and is essential for virulence. SidA can utilize both NADPH or NADH for activity; however, the enzyme is selective for NADPH. Structural analysis shows that R279 interacts with the 2'-phosphate of NADPH. To probe the role of electrostatic interactions in coenzyme selectivity, R279 was mutated to both an alanine and a glutamate. The mutant proteins were active but highly uncoupled, oxidizing NADPH and producing hydrogen peroxide instead of hydroxylated ornithine. For wtSidA, the catalytic efficiency was 6-fold higher with NADPH as compared to NADH. For the R279A mutant the catalytic efficiency was the same with both coenyzmes, while for the R279E mutant the catalytic efficiency was 5-fold higher with NADH. The effects are mainly due to an increase in the KD values, as no major changes on the kcat or flavin reduction values were observed. Thus, the absence of a positive charge leads to no coenzyme selectivity while introduction of a negative charge leads to preference for NADH. Flavin fluorescence studies suggest altered interaction between the flavin and NADP⁺ in the mutant enzymes. The effects are caused by different binding modes of the coenzyme upon removal of the positive charge at position 279, as no major conformational changes were observed in the structure for R279A. The results indicate that the positive charge at position 279 is critical for tight binding of NADPH and efficient hydroxylation. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data

    Science.gov (United States)

    Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis

    2015-04-01

    Flooding is a significant threat that causes considerable disruption in many societies worldwide. Ongoing climatic change further increases flooding risk, which remains a substantial menace to many societies and their economies. The improvement in the spatial resolution and accuracy of topography and land use data provided by remote sensing techniques enables integrated flood inundation simulations. In this work, a hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) is carried out. High-resolution satellite images are processed. A very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5-m-resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction. Sophisticated classification algorithms are implemented to improve the accuracy of Land Use/Land Cover maps. In addition, soil maps are updated by means of radar satellite images. The above high-resolution data are used in a novel way to simulate and validate several historical flood events in Mediterranean watersheds, which have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resources Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment. The use of high-resolution data accordingly provides detailed, high-precision validation results. Furthermore, the meteorological forecasting data, which are combined with the simulation model results, support the development of an integrated flood forecasting and early warning tool capable of mitigating or even preventing this risk. The research reported in this paper was fully supported by the
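
    For reference, the NRCS curve-number relation mentioned above reduces to a few lines; the sketch below uses the common initial-abstraction ratio of 0.2 and metric units, with an invented storm depth and CN value rather than figures from the study.

```python
def scs_cn_runoff(precip_mm, cn):
    """NRCS (SCS) curve-number direct runoff for a storm depth in millimetres."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm + 0.8 * s)

# e.g. a 60 mm storm on a catchment with a composite CN of 85
print(scs_cn_runoff(60.0, 85))
```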

  8. High Resolution Higher Energy X-ray Microscope for Mesoscopic Materials

    International Nuclear Information System (INIS)

    Snigireva, I; Snigirev, A

    2013-01-01

    We developed a novel X-ray microscopy technique to study mesoscopically structured materials, employing compound refractive lenses. An evident advantage of the lens-based methodology is the possibility of retrieving high-resolution diffraction patterns and real-space images in the same experimental setup. Methodologically, the proposed approach is similar to the study of crystals by high-resolution transmission electron microscopy. The proposed microscope was applied to the study of mesoscopic materials such as natural and synthetic opals, inverted photonic crystals

  9. Data composition and taxonomic resolution in macroinvertebrate stream typology

    NARCIS (Netherlands)

    Verdonschot, P.F.M.

    2006-01-01

    In the EU water framework directive (WFD) a typological framework is defined for assessing the ecological quality of water bodies in the future. The aim of this study was to test the effect of data composition and taxonomic resolution on this typology. The EU research projects AQEM and STAR provided

  10. Do Red Edge and Texture Attributes from High-Resolution Satellite Data Improve Wood Volume Estimation in a Semi-Arid Mountainous Region?

    DEFF Research Database (Denmark)

    Schumacher, Paul; Mislimshoeva, Bunafsha; Brenning, Alexander

    2016-01-01

    to overcome this issue. However, clear recommendations on the suitability of specific proxies to provide accurate biomass information in semi-arid to arid environments are still lacking. This study contributes to the understanding of using multispectral high-resolution satellite data (RapidEye), specifically red edge and texture attributes, to estimate wood volume in semi-arid ecosystems characterized by scarce vegetation. LASSO (Least Absolute Shrinkage and Selection Operator) and random forest were used as predictive models relating in situ-measured aboveground standing wood volume to satellite data...

  11. Submicrovolt resolution X-ray monochromators

    International Nuclear Information System (INIS)

    Trammell, G.T.; Hannon, J.P.

    1984-01-01

    Two methods are available to obtain monochromatic X-radiation from a white source: wavelength selection and frequency selection. The resolution of wavelength selection methods is limited to 1-10 meV in the E = 10 keV range. To exceed this resolution, frequency selection methods based on nuclear resonance scattering can be used. Devices which give strong nuclear resonance reflections but weak electronic reflections are candidates for components of frequency selection monochromators. Some examples are discussed.

  12. Influence of rebinning on the reconstructed resolution of fan-beam SPECT

    International Nuclear Information System (INIS)

    Koole, M.; D'Asseler, Y.; Staelens, S.; Vandenberghe, S.; Eede, I. van den; Walle, R. van de; Lemahieu, I.

    2002-01-01

    Aim: Fan-beam projection data can be rebinned to a parallel-beam geometry. This rebinning operation allows these data to be reconstructed with algorithms for parallel-beam projection data. The advantage of such an operation is that a dedicated projection/backprojection step for fan-beam geometry doesn't need to be developed. In clinical practice bilinear interpolation is often used for this rebinning operation. The aim of this study is to investigate the influence of the rebinning operation on the resolution properties of the reconstructed SPECT-image. Materials and methods: We have simulated the resolution properties of a fan-beam collimator, used in clinical routine, by means of a dedicated projector operation which models the distance dependent sensitivity and resolution of the collimator. With this projector, we generated noise-free sinograms for a point source located at various distances from the center of rotation. The number of angles of these sinograms varied from 60 to 180, corresponding to a step angle of 6 to 2 degrees. These generated fan-beam projection data were reconstructed directly with a filtered backprojection algorithm for fan-beam projection data, which consists of weighting and filtering the projection data with a ramp filter and of a weighted backprojection. Next, the generated fan-beam projection data were rebinned by means of bilinear interpolation and reconstructed with standard filtered backprojection for parallel-beam data. A two-dimensional Gaussian was fitted to the two point sources, one reconstructed with FBP for fan-beam and one reconstructed with FBP for parallel-beam after rebinning, yielding an estimate for the reconstructed Full Width at Half Maximum (FWHM) in the radial and tangential direction, for different locations in the field of view. Results: Results show little difference in resolution degradation in the radial direction between direct reconstruction and reconstruction after rebinning. However, significant loss in

  13. Moving towards Hyper-Resolution Hydrologic Modeling

    Science.gov (United States)

    Rouf, T.; Maggioni, V.; Houser, P.; Mei, Y.

    2017-12-01

    Developing a predictive capability for terrestrial hydrology across landscapes, with water, energy and nutrients as the drivers of these dynamic systems, faces the challenge of scaling meter-scale process understanding to practical modeling scales. Hyper-resolution land surface modeling can provide a framework for addressing science questions that we are not able to answer with coarse modeling scales. In this study, we develop a hyper-resolution forcing dataset from coarser resolution products using a physically based downscaling approach. These downscaling techniques rely on correlations with landscape variables, such as topography, roughness, and land cover. A proof-of-concept has been implemented over the Oklahoma domain, where high-resolution observations are available for validation purposes. Hourly NLDAS (North American Land Data Assimilation System) forcing data (i.e., near-surface air temperature, pressure, and humidity) have been downscaled to 500m resolution over the study area for 2015-present. Results show that correlation coefficients between the downscaled temperature dataset and ground observations are consistently higher than the ones between the NLDAS temperature data at their native resolution and ground observations. Not only are the correlation coefficients higher, but the deviation around the 1:1 line in the density scatterplots is also smaller for the downscaled dataset than for the original one with respect to the ground observations. Results are therefore encouraging as they demonstrate that the 500m temperature dataset has a good agreement with the ground information and can be adopted to force the land surface model for soil moisture estimation. The study has been expanded to wind speed and direction, incident longwave and shortwave radiation, pressure, and precipitation. Precipitation is well known to vary dramatically with elevation and orography. Therefore, we are pursuing a downscaling technique based on both topographical and vegetation
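
    A toy illustration of one physically based downscaling ingredient mentioned above: replicating a coarse temperature field onto a finer grid and applying a constant elevation lapse-rate correction against a high-resolution DEM. The lapse rate, grids, and function name are assumptions, not the study's exact method.

    ```python
    import numpy as np

    LAPSE = -6.5e-3  # K per metre: assumed constant environmental lapse rate

    def downscale_temperature(t_coarse, z_coarse, z_fine, upsample):
        """Replicate each coarse cell `upsample` times per axis, then correct for elevation."""
        t_rep = np.kron(t_coarse, np.ones((upsample, upsample)))
        z_rep = np.kron(z_coarse, np.ones((upsample, upsample)))
        return t_rep + LAPSE * (z_fine - z_rep)

    # Tiny synthetic demo: a 2x2 coarse grid refined to 8x8 against a sloping fine DEM.
    t_coarse = np.array([[288.0, 290.0], [287.0, 289.0]])      # K
    z_coarse = np.array([[500.0, 300.0], [700.0, 400.0]])      # m
    z_fine = np.linspace(200.0, 800.0, 64).reshape(8, 8)       # m
    print(downscale_temperature(t_coarse, z_coarse, z_fine, upsample=4))
    ```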

  14. High resolution data acquisition

    Science.gov (United States)

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
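
    The combination of a coarse pulse count with fine triangle-wave interpolation can be sketched as below. The mapping from triangle amplitude and slope sign to sub-period phase, the amplitude range, and the clock period are illustrative assumptions about one plausible realization, not the patented circuit.

    ```python
    # Coarse count of whole clock periods, refined by the fractional phase recovered
    # from the triangle-wave amplitude and slope sign at the start and stop of the event.
    def triangle_phase(amplitude, rising, v_max=1.0):
        """Fraction of a clock period elapsed, assuming the triangle rises then falls over one period."""
        frac = amplitude / v_max
        return 0.5 * frac if rising else 0.5 + 0.5 * (1.0 - frac)

    def event_interval(n_pulses, start_amp, start_rising, stop_amp, stop_rising, clock_period):
        fine_start = triangle_phase(start_amp, start_rising)
        fine_stop = triangle_phase(stop_amp, stop_rising)
        return (n_pulses + fine_stop - fine_start) * clock_period

    # Example: 1250 whole 10-ns periods plus sub-period corrections at both ends.
    print(event_interval(1250, 0.4, True, 0.9, False, 10e-9))
    ```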

  15. The 1 km resolution global data set: needs of the International Geosphere Biosphere Programme

    Science.gov (United States)

    Townshend, J.R.G.; Justice, C.O.; Skole, D.; Malingreau, J.-P.; Cihlar, J.; Teillet, P.; Sadowski, F.; Ruttenberg, S.

    1994-01-01

    Examination of the scientific priorities for the International Geosphere Biosphere Programme (IGBP) reveals a requirement for global land data sets in several of its Core Projects. These data sets need to be at several space and time scales. Requirements are demonstrated for the regular acquisition of data at spatial resolutions of 1 km and finer and at high temporal frequencies. Global daily data at a resolution of approximately 1 km are sensed by the Advanced Very High Resolution Radiometer (AVHRR), but they have not been available in a single archive. It is proposed that a global data set of the land surface be created from remotely sensed data from the AVHRR to support a number of IGBP's projects. This data set should have a spatial resolution of 1 km and should be generated at least once every 10 days for the entire globe. The minimum length of record should be a year, and ideally a system should be put in place which leads to the continuous acquisition of 1 km data to provide a baseline data set prior to the Earth Observing System (EOS) towards the end of the decade. Because of the high cloud cover in many parts of the world, it is necessary to plan for the collection of data from every orbit. Substantial effort will be required in the preprocessing of the data set involving radiometric calibration, atmospheric correction, geometric correction and temporal compositing, to make it suitable for the extraction of information.

  16. Temporal dependence of the selectivity property of SES stations in western Greece

    Directory of Open Access Journals (Sweden)

    E. Dologlou

    2009-06-01

    Full Text Available The selectivity property of the SES stations, IOA, PIR and PAT in western Greece, based on reported precursory SES signals and associated large earthquakes (Mw≥5.4) that occurred from 1983 to the end of 2008, has been examined. An interesting temporal dependence of the sensitivity of these stations has been unveiled. Physical mechanisms for the observed changes in selectivity might be related to tectonic and geodynamic events. For instance, the selectivity of IOA exhibits a time dependence, that of PAT is probably related to the activation of the Wadati-Benioff zone, while that of PIR seems to be related to the specific tectonics of two confined areas, namely the Cephalonia Transform Faulting zone in the Ionian Sea and the southwestern part of the Hellenic Trench.

  17. Theoretical and computational study of the energy dependence of the muon transfer rate from hydrogen to higher-Z gases

    Energy Technology Data Exchange (ETDEWEB)

    Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Tsarigradsko chaussée 72, Sofia 1784 (Bulgaria); Adamczak, Andrzej [Institute of Nuclear Physics, Polish Academy of Sciences, ul. Radzikowskiego 152, 31-342 Krakow (Poland); Stoilov, Mihail [Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Tsarigradsko chaussée 72, Sofia 1784 (Bulgaria); Vacchi, Andrea [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Via A. Valerio 2, 34127 Trieste (Italy)

    2015-01-23

    The recent PSI Lamb shift experiment and the controversy about proton size revived the interest in measuring the hyperfine splitting in muonic hydrogen as an alternative possibility for comparing ordinary and muonic hydrogen spectroscopy data on proton electromagnetic structure. This measurement critically depends on the energy dependence of the muon transfer rate to heavier gases in the epithermal range. The available data provide only qualitative information, and the theoretical predictions have not been verified. We propose a new method based on measurements of the transfer rate in a thermalized target at different temperatures, estimate its accuracy and investigate the optimal experimental conditions. - Highlights: • Method for measuring the energy dependence of muon transfer rate to higher-Z gases. • Thermalization and depolarization of muonic hydrogen studied by Monte Carlo method. • Optimal experimental conditions determined by Monte Carlo simulations. • Mathematical model for estimating the uncertainty of the experimental results.

  18. Selective functional activity measurement of a PEGylated protein with a modification-dependent activity assay.

    Science.gov (United States)

    Weber, Alfred; Engelmaier, Andrea; Mohr, Gabriele; Haindl, Sonja; Schwarz, Hans Peter; Turecek, Peter L

    2017-01-05

    BAX 855 (ADYNOVATE) is a PEGylated recombinant factor VIII (rFVIII) that showed prolonged circulatory half-life compared to unmodified rFVIII in hemophilic patients. Here, the development and validation of a novel assay is described that selectively measures the activity of BAX 855 as cofactor for the serine protease factor IX, which activates factor X. This method type, termed modification-dependent activity assay, is based on PEG-specific capture of BAX 855 by an anti-PEG IgG preparation, followed by a chromogenic FVIII activity assay. The assay principle enabled sensitive measurement of the FVIII cofactor activity of BAX 855 down to the pM-range without interference by non-PEGylated FVIII. The selectivity of the capture step, shown by competition studies to primarily target the terminal methoxy group of PEG, also allowed assessment of the intactness of the attached PEG chains. Altogether, the modification-dependent activity assay not only enriches but also complements the group of methods to selectively, accurately, and precisely measure a PEGylated drug in complex biological matrices. In contrast to all other methods described so far, it allows measurement of the biological activity of the PEGylated protein. Data obtained demonstrate that this new method principle can be extended to protein modifications other than PEGylation and to a variety of functional activity assays. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. A new Highly Selective First Level ATLAS Muon Trigger With MDT Chamber Data for HL-LHC

    CERN Document Server

    Nowak, Sebastian; The ATLAS collaboration

    2015-01-01

    Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC where the instantaneous luminosity will exceed the LHC's instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum sub-trigger threshold muons due to the poor momentum resolution at trigger level caused by the moderate spatial resolution of the resistive plate and thin gap trigger chambers. This limitation can be overcome by including the data of the precision muon drift tube chambers in the first level trigger decision. This requires the implementation of a fast MDT read-out chain and a fast MDT track reconstruction. A hardware demonstrator of the fast read-out chain was successfully tested under HL-LHC operating conditions at CERN's Gamma Irradiation Facility. It could be shown that the data provided by the demonstrator can be processed with a fast track reconstruction algorithm on an ARM CPU within the 6 microseconds latency...

  20. Scale-dependent gas hydrate saturation estimates in sand reservoirs in the Ulleung Basin, East Sea of Korea

    Science.gov (United States)

    Lee, Myung Woong; Collett, Timothy S.

    2013-01-01

    Through the use of 2-D and 3-D seismic data, several gas hydrate prospects were identified in the Ulleung Basin, East Sea of Korea and thirteen drill sites were established and logging-while-drilling (LWD) data were acquired from each site in 2010. Sites UBGH2–6 and UBGH2–10 were selected to test a series of high amplitude seismic reflections, possibly from sand reservoirs. LWD logs from the UBGH2–6 well indicate that there are three significant sand reservoirs with varying thickness. Two upper sand reservoirs are water saturated and the lower thinly bedded sand reservoir contains gas hydrate with an average saturation of 13%, as estimated from the P-wave velocity. The well logs at the UBGH2–6 well clearly demonstrated the effect of scale-dependency on gas hydrate saturation estimates. Gas hydrate saturations estimated from the high resolution LWD acquired ring resistivity (vertical resolution of about 5–8 cm) reaches about 90% with an average saturation of 28%, whereas gas hydrate saturations estimated from the low resolution A40L resistivity (vertical resolution of about 120 cm) reaches about 25% with an average saturation of 11%. However, in the UBGH2–10 well, gas hydrate occupies a 5-m thick sand reservoir near 135 mbsf with a maximum saturation of about 60%. In the UBGH2–10 well, the average and a maximum saturation estimated from various well logging tools are comparable, because the bed thickness is larger than the vertical resolution of the various logging tools. High resolution wireline log data further document the role of scale-dependency on gas hydrate calculations.

  1. Analysis of seafloor backscatter strength dependence on the survey azimuth using multibeam echosounder data

    Science.gov (United States)

    Lurton, Xavier; Eleftherakis, Dimitrios; Augustin, Jean-Marie

    2018-06-01

    The sediment backscatter strength measured by multibeam echosounders is a key feature for seafloor mapping, either qualitative (image mosaics) or quantitative (extraction of classifying features). An important phenomenon, often underestimated, is the dependence of the backscatter level on the azimuth angle imposed by the survey line directions: strong level differences at varying azimuth can be observed in the case of organized roughness of the seabed, usually caused by tide currents over sandy sediments. This paper presents a number of experimental results obtained from shallow-water cruises using a 300-kHz multibeam echosounder and specially dedicated to the study of this azimuthal effect, with a specific configuration of the survey strategy involving a systematic coverage of reference areas following "compass rose" patterns. The results show for some areas a very strong dependence of the backscatter level, up to about 10-dB differences at intermediate oblique angles, although the presence of these ripples cannot be observed directly—neither from the bathymetry data nor from the sonar image, due to the insufficient resolution capability of the sonar. An elementary modeling of backscattering from rippled interfaces explains and supports these observations. The consequences of this backscatter dependence upon survey azimuth on the current strategies of backscatter data acquisition and exploitation are discussed.

  2. Using Low Resolution Satellite Imagery for Yield Prediction and Yield Anomaly Detection

    Directory of Open Access Journals (Sweden)

    Oscar Rojas

    2013-04-01

    Full Text Available Low resolution satellite imagery has been extensively used for crop monitoring and yield forecasting for over 30 years and plays an important role in a growing number of operational systems. The combination of their high temporal frequency with their extended geographical coverage generally associated with low costs per area unit makes these images a convenient choice at both national and regional scales. Several qualitative and quantitative approaches can be clearly distinguished, going from the use of low resolution satellite imagery as the main predictor of final crop yield to complex crop growth models where remote sensing-derived indicators play different roles, depending on the nature of the model and on the availability of data measured on the ground. Vegetation performance anomaly detection with low resolution images continues to be a fundamental component of early warning and drought monitoring systems at the regional scale. For applications at more detailed scales, the limitations created by the mixed nature of low resolution pixels are being progressively reduced by the higher resolution offered by new sensors, while the continuity of existing systems remains crucial for ensuring the availability of long time series as needed by the majority of the yield prediction methods used today.

  3. Evaluation of gene importance in microarray data based upon probability of selection

    Directory of Open Access Journals (Sweden)

    Fu Li M

    2005-03-01

    Full Text Available Abstract Background Microarray devices permit a genome-scale evaluation of gene function. This technology has catalyzed biomedical research and development in recent years. As many important diseases can be traced down to the gene level, a long-standing research problem is to identify specific gene expression patterns linking to metabolic characteristics that contribute to disease development and progression. The microarray approach offers an expedited solution to this problem. However, recognizing disease-related gene expression patterns embedded in the microarray data has posed a challenging issue. In selecting a small set of biologically significant genes for classifier design, the high data dimensionality inherent in this problem creates a substantial amount of uncertainty. Results Here we present a model for probability analysis of selected genes in order to determine their importance. Our contribution is that we show how to derive the P value of each selected gene in multiple gene selection trials based on different combinations of data samples and how to conduct a reliability analysis accordingly. The importance of a gene is indicated by its associated P value in that a smaller value implies higher information content from information theory. On the microarray data concerning the subtype classification of small round blue cell tumors, we demonstrate that the method is capable of finding the smallest set of genes (19 genes) with optimal classification performance, compared with results reported in the literature. Conclusion In classifier design based on microarray data, the probability value derived from gene selection based on multiple combinations of data samples enables an effective mechanism for reducing the tendency of fitting local data particularities.
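
    A simplified stand-in for the selection-probability idea (not the authors' exact P-value derivation): repeat a top-k feature selection over random subsamples and record how often each gene is chosen. The synthetic data and the t-score selector are assumptions made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))                 # 60 arrays x 500 genes (synthetic)
    y = rng.integers(0, 2, size=60)                # two tumour subtypes
    X[y == 1, :10] += 1.0                          # make the first 10 genes informative

    def top_k_genes(Xs, ys, k=19):
        """Rank genes by a simple two-sample t-like score and keep the top k."""
        t = (Xs[ys == 1].mean(0) - Xs[ys == 0].mean(0)) / (Xs.std(0) + 1e-9)
        return np.argsort(-np.abs(t))[:k]

    counts = np.zeros(X.shape[1])
    n_trials = 200
    for _ in range(n_trials):
        idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)
        counts[top_k_genes(X[idx], y[idx])] += 1

    selection_prob = counts / n_trials             # high selection probability -> important gene
    print(np.argsort(-selection_prob)[:19])
    ```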

  4. Optimal Multi-Interface Selection for Mobile Video Streaming in Efficient Battery Consumption and Data Usage

    Directory of Open Access Journals (Sweden)

    Seonghoon Moon

    2016-01-01

    Full Text Available With the proliferation of high-performance, large-screen mobile devices, users’ expectations of having access to high-resolution video content in smooth network environments are steadily growing. To guarantee such stable streaming, a high cellular network bandwidth is required; yet network providers often charge high prices for even limited data plans. Moreover, the costs of smoothly streaming high-resolution videos are not merely monetary; the device’s battery life must also be accounted for. To resolve these problems, we design an optimal multi-interface selection system for streaming video over HTTP/TCP. An optimization problem including battery life and LTE data constraints is derived and then solved using binary integer programming. Additionally, the system is designed with an adoption of split-layer scalable video coding, which provides direct adaptations of video quality and prevents out-of-order packet delivery problems. The proposed system is evaluated using a prototype application in a real, iOS-based device as well as through experiments conducted in heterogeneous mobile scenarios. Results show that the system not only guarantees the highest-possible video quality, but also prevents reckless consumption of LTE data and battery life.
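
    The interface-selection problem can be posed as a small binary integer programme, for example maximizing per-segment video quality subject to an LTE data cap and a battery budget. The numbers below are invented and the formulation is a hedged sketch, not the paper's exact model.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    n_seg = 6
    quality = np.array([[3, 5]] * n_seg, float)       # columns: [Wi-Fi, LTE] quality score per segment
    energy = np.array([[1.0, 2.5]] * n_seg)           # battery cost per segment
    lte_mb = np.array([[0.0, 8.0]] * n_seg)           # LTE data per segment (MB)

    c = -quality.ravel()                              # milp minimises, so negate quality
    A_choice = np.kron(np.eye(n_seg), np.ones(2))     # exactly one interface per segment
    constraints = [
        LinearConstraint(A_choice, 1, 1),
        LinearConstraint(energy.ravel(), 0, 10.0),    # battery budget
        LinearConstraint(lte_mb.ravel(), 0, 20.0),    # LTE data cap
    ]
    res = milp(c, constraints=constraints, integrality=np.ones(2 * n_seg), bounds=Bounds(0, 1))
    print(np.round(res.x).reshape(n_seg, 2))          # 0/1 interface choice per segment
    ```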

  5. Time-dependent ion selectivity in capacitive charging of porous electrodes

    NARCIS (Netherlands)

    Zhao, R.; Soestbergen, M.; Rijnaarts, H.H.M.; Wal, van der A.F.; Bazant, M.Z.; Biesheuvel, P.M.

    2012-01-01

    In a combined experimental and theoretical study, we show that capacitive charging of porous electrodes in multicomponent electrolytes may lead to the phenomenon of time-dependent ion selectivity of the electrical double layers (EDLs) in the electrodes. This effect is found in experiments on

  6. Geoscience Meets Social Science: A Flexible Data Driven Approach for Developing High Resolution Population Datasets at Global Scale

    Science.gov (United States)

    Rose, A.; McKee, J.; Weber, E.; Bhaduri, B. L.

    2017-12-01

    Leveraging decades of expertise in population modeling, and in response to growing demand for higher resolution population data, Oak Ridge National Laboratory is now generating LandScan HD at global scale. LandScan HD is conceived as a 90m resolution population distribution where modeling is tailored to the unique geography and data conditions of individual countries or regions by combining social, cultural, physiographic, and other information with novel geocomputation methods. Similarities among these areas are exploited in order to leverage existing training data and machine learning algorithms to rapidly scale development. Drawing on ORNL's unique set of capabilities, LandScan HD adapts highly mature population modeling methods developed for LandScan Global and LandScan USA, settlement mapping research and production in high-performance computing (HPC) environments, land use and neighborhood mapping through image segmentation, and facility-specific population density models. Adopting a flexible methodology to accommodate different geographic areas, LandScan HD accounts for the availability, completeness, and level of detail of relevant ancillary data. Beyond core population and mapped settlement inputs, these factors determine the model complexity for an area, requiring that for any given area, a data-driven model could support either a simple top-down approach, a more detailed bottom-up approach, or a hybrid approach.

  7. High throughput screening of ligand binding to macromolecules using high resolution powder diffraction

    Science.gov (United States)

    Von Dreele, Robert B.; D'Amico, Kevin

    2006-10-31

    A process is provided for the high throughput screening of binding of ligands to macromolecules using high resolution powder diffraction data including producing a first sample slurry of a selected polycrystalline macromolecule material and a solvent, producing a second sample slurry of a selected polycrystalline macromolecule material, one or more ligands and the solvent, obtaining a high resolution powder diffraction pattern on each of said first sample slurry and the second sample slurry, and, comparing the high resolution powder diffraction pattern of the first sample slurry and the high resolution powder diffraction pattern of the second sample slurry whereby a difference in the high resolution powder diffraction patterns of the first sample slurry and the second sample slurry provides a positive indication for the formation of a complex between the selected polycrystalline macromolecule material and at least one of the one or more ligands.

  8. High-resolution seismic imaging of the Sohagpur Gondwana basin ...

    Indian Academy of Sciences (India)

    The quality of the high-resolution seismic data depends mainly on the data ...

  9. Interpretation of measured data and the resolution analysis of the RTP 4-channel pulsed radar

    International Nuclear Information System (INIS)

    Pavlo, P.

    1993-01-01

    The resolution of a 4-channel pulsed radar being built at Rijnhuizen for the RTP tokamak is analyzed. The achievable resolution mainly depends on the accuracy of the time-of-flight measurements and the number of sampling frequencies; since the technological solution and the configuration have already been set, emphasis is put on interpretation of the measured data (the inversion problem) and minimization of the overall error. For this purpose, a specific neural network - the Multi Layer Perceptron (MLP) - has successfully been applied. Central density in the range of 0.2-0.6 × 10^20 m^-3 was considered, i.e., one above the critical density for all four frequencies but not so high as to restrict the measurements to just the edge of the plasma. By balancing the inversion error and the time measurement error, for a wide class of density profiles the overall error in estimating the reflection point position of between 0.72 cm (for the lowest frequency) and 0.52 cm (for the highest frequency) root mean square was obtained, assuming an RMS error of 70 ps in the time of flight measurements. This is probably much better than what could be obtained by the Abel transform. Moreover, mapping with the MLP is considerably faster, and it should be considered for routine multichannel pulsed radar data processing. (author) 2 tabs., 4 figs., 6 refs

  10. High Angular Resolution Measurements of the Anisotropy of Reflectance of Sea Ice and Snow

    Science.gov (United States)

    Goyens, C.; Marty, S.; Leymarie, E.; Antoine, D.; Babin, M.; Bélanger, S.

    2018-01-01

    We introduce a new method to determine the anisotropy of reflectance of sea ice and snow at spatial scales from 1 m2 to 80 m2 using a multispectral circular fish-eye radiance camera (CE600). The CE600 allows measuring radiance simultaneously in all directions of a hemisphere at a 1° angular resolution. The spectral characteristics of the reflectance and its dependency on illumination conditions obtained from the camera are compared to those obtained with a hyperspectral field spectroradiometer manufactured by Analytical Spectral Device, Inc. (ASD). Results confirm the potential of the CE600, with the suggested measurement setup and data processing, to measure commensurable sea ice and snow hemispherical-directional reflectance factor, HDRF, values. Compared to the ASD, the reflectance anisotropy measured with the CE600 provides much higher resolution in terms of directional reflectance (N = 16,020). The hyperangular resolution allows detecting features that were overlooked using the ASD due to its limited number of measurement angles (N = 25). This data set of HDRF further documents variations in the anisotropy of the reflectance of snow and ice with the geometry of observation and illumination conditions and its spectral and spatial scale dependency. Finally, in order to reproduce the hyperangular CE600 reflectance measurements over the entire 400-900 nm spectral range, a regression-based method is proposed to combine the ASD and CE600 measurements. Results confirm that both instruments may be used in synergy to construct a hyperangular and hyperspectral snow and ice reflectance anisotropy data set.

  11. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    Science.gov (United States)

    Wild, Oliver; Prather, Michael J.

    2006-06-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.

  12. Analysis of high resolution satellite digital data for land use studies ...

    African Journals Online (AJOL)

    High-resolution satellite data can give vital information about land cover, which can lead to better interpretation and classification of land resources. This study examined the relationship between Systeme Probatoire d'Observation de la Terre (SPOT) digital data and land use types in the derived savanna ecosystem of ...

  13. The Study of Land Use Classification Based on SPOT6 High Resolution Data

    OpenAIRE

    Wu Song; Jiang Qigang

    2016-01-01

    A method for rapid classification and extraction of land-use types in agricultural areas is presented, based on SPOT6 high-resolution remote sensing data and the good nonlinear classification ability of the support vector machine. The results show that SPOT6 high-resolution remote sensing data can realize land-use classification efficiently: the overall classification accuracy reached 88.79% and the Kappa coefficient is 0.8632, which means that the classif...
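
    A minimal sketch of the SVM classification step, assuming per-pixel band values and integer land-use labels stored in hypothetical .npy files; kernel and parameters are illustrative rather than those of the study.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    X = np.load("spot6_pixel_bands.npy")      # shape (n_pixels, n_bands) -- placeholder file
    y = np.load("landuse_labels.npy")         # integer land-use class per training pixel

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)   # nonlinear RBF SVM

    pred = clf.predict(X_te)
    print("overall accuracy:", accuracy_score(y_te, pred))
    print("kappa:", cohen_kappa_score(y_te, pred))
    ```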

  14. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to
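
    The abstract's recommendation to include a distance covariate can be illustrated with ordinary logistic regression on used versus control locations; this is a simplification of the matched case-control fit normally used for step selection functions, and the input file and column names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    steps = pd.read_csv("elk_steps.csv")                 # used and control locations (placeholder file)
    steps["log_dist"] = np.log(steps["step_distance"])   # distance from the previous location
    X = steps[["forest_cover", "slope", "log_dist"]]
    y = steps["used"]                                    # 1 = observed step, 0 = sampled control point

    ssf = LogisticRegression(max_iter=1000).fit(X, y)
    print(dict(zip(X.columns, ssf.coef_[0])))            # habitat coefficients; log_dist absorbs movement
    ```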

  15. Selective phase masking to reduce material saturation in holographic data storage systems

    Science.gov (United States)

    Phillips, Seth; Fair, Ivan

    2014-09-01

    Emerging networks and applications require enormous data storage. Holographic techniques promise high-capacity storage, given resolution of a few remaining technical issues. In this paper, we propose a technique to overcome one such issue: mitigation of large magnitude peaks in the stored image that cause material saturation resulting in readout errors. We consider the use of ternary data symbols, with modulation in amplitude and phase, and use a phase mask during the encoding stage to reduce the probability of large peaks arising in the stored Fourier domain image. An appropriate mask is selected from a predefined set of pseudo-random masks by computing the Fourier transform of the raw data array as well as the data array multiplied by each mask. The data array or masked array with the lowest Fourier domain peak values is recorded. On readout, the recorded array is multiplied by the mask used during recording to recover the original data array. Simulations are presented that demonstrate the benefit of this approach, and provide insight into the appropriate number of phase masks to use in high capacity holographic data storage systems.
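
    The mask-selection rule can be sketched as follows, using ±1 (binary phase) masks as a simplification of the pseudo-random phase masks described above; the array size and number of candidate masks are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.choice([-1, 0, 1], size=(64, 64)).astype(float)            # ternary data page
    masks = [rng.choice([-1.0, 1.0], size=data.shape) for _ in range(8)]  # candidate phase masks

    def fourier_peak(page):
        """Largest Fourier-domain magnitude, the quantity to be kept small."""
        return np.abs(np.fft.fft2(page)).max()

    candidates = [data] + [data * m for m in masks]
    best = int(np.argmin([fourier_peak(c) for c in candidates]))
    print("chosen option (0 = unmasked):", best)

    # Readout: multiplying the recorded page by the same +/-1 mask restores the data.
    if best > 0:
        assert np.allclose(candidates[best] * masks[best - 1], data)
    ```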

  16. Simulating European wind power generation applying statistical downscaling to reanalysis data

    International Nuclear Information System (INIS)

    González-Aparicio, I.; Monforti, F.; Volker, P.; Zucker, A.; Careri, F.; Huld, T.; Badger, J.

    2017-01-01

    Highlights: •Wind speed spatial resolution highly influences calculated wind power peaks and ramps. •Reduction of wind power generation uncertainties using statistical downscaling. •Publicly available dataset of wind power generation hourly time series at NUTS2. -- Abstract: The growing share of electricity production from solar and mainly wind resources constantly increases the stochastic nature of the power system. Modelling the high share of renewable energy sources – and in particular wind power – crucially depends on the adequate representation of the intermittency and characteristics of the wind resource which is related to the accuracy of the approach in converting wind speed data into power values. One of the main factors contributing to the uncertainty in these conversion methods is the selection of the spatial resolution. Although numerical weather prediction models can simulate wind speeds at higher spatial resolution (up to 1 × 1 km) than a reanalysis (generally, ranging from about 25 km to 70 km), they require high computational resources and massive storage systems: therefore, the most common alternative is to use the reanalysis data. However, local wind features could not be captured by the use of a reanalysis technique and could be translated into misinterpretations of the wind power peaks, ramping capacities, the behaviour of power prices, as well as bidding strategies for the electricity market. This study contributes to the understanding what is captured by different wind speeds spatial resolution datasets, the importance of using high resolution data for the conversion into power and the implications in power system analyses. It is proposed a methodology to increase the spatial resolution from a reanalysis. This study presents an open access renewable generation time series dataset for the EU-28 and neighbouring countries at hourly intervals and at different geographical aggregation levels (country, bidding zone and administrative

  17. Changepoint detection in base-resolution methylome data reveals a robust signature of methylated domain landscape.

    Science.gov (United States)

    Yokoyama, Takao; Miura, Fumihito; Araki, Hiromitsu; Okamura, Kohji; Ito, Takashi

    2015-08-12

    Base-resolution methylome data generated by whole-genome bisulfite sequencing (WGBS) is often used to segment the genome into domains with distinct methylation levels. However, most segmentation methods include many parameters to be carefully tuned and/or fail to exploit the unsurpassed resolution of the data. Furthermore, there is no simple method that displays the composition of the domains to grasp global trends in each methylome. We propose to use changepoint detection for domain demarcation based on base-resolution methylome data. While the proposed method segments the methylome in a largely comparable manner to conventional approaches, it has only a single parameter to be tuned. Furthermore, it fully exploits the base-resolution of the data to enable simultaneous detection of methylation changes in even contrasting size ranges, such as focal hypermethylation and global hypomethylation in cancer methylomes. We also propose a simple plot termed methylated domain landscape (MDL) that globally displays the size, the methylation level and the number of the domains thus defined, thereby enabling one to intuitively grasp trends in each methylome. Since the pattern of MDL often reflects cell lineages and is largely unaffected by data size, it can serve as a novel signature of methylome. Changepoint detection in base-resolution methylome data followed by MDL plotting provides a novel method for methylome characterization and will facilitate global comparison among various WGBS data differing in size and even species origin.
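
    A toy stand-in for changepoint-based domain demarcation with a single tuning parameter: recursive binary segmentation of per-CpG methylation levels by squared-error reduction. It illustrates the idea of the record, not the authors' algorithm.

    ```python
    import numpy as np

    def split_cost_gain(x):
        """Best single split of x by reduction in squared error, and its position."""
        n = len(x)
        total = ((x - x.mean()) ** 2).sum()
        best_gain, best_pos = 0.0, None
        for i in range(2, n - 1):
            left, right = x[:i], x[i:]
            cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if total - cost > best_gain:
                best_gain, best_pos = total - cost, i
        return best_gain, best_pos

    def segment(x, penalty=1.0, offset=0, breaks=None):
        """Recursively split while the gain exceeds the (single) penalty parameter."""
        breaks = [] if breaks is None else breaks
        gain, pos = split_cost_gain(x)
        if pos is not None and gain > penalty:
            segment(x[:pos], penalty, offset, breaks)
            breaks.append(offset + pos)
            segment(x[pos:], penalty, offset + pos, breaks)
        return breaks

    meth = np.concatenate([np.full(200, 0.85), np.full(50, 0.15), np.full(300, 0.75)])
    meth += np.random.default_rng(2).normal(0, 0.05, meth.size)
    print(segment(meth, penalty=2.0))    # expected domain boundaries near 200 and 250
    ```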

  18. The resolution of TOF low-Q diffractometers: Instrumental, data acquisition and reduction factors

    International Nuclear Information System (INIS)

    Hjelm, R.P. Jr.

    1988-01-01

    The resolution of scattering vector, Q, in small-angle neutron scattering (SANS) measurements derives from uncertainties in scattered neutron wavelength and direction. The manner in which these are manifest on broad-band time-of-flight (TOF) spectrometers at pulsed sources is different from that for instruments using monochromated sources. In TOF instruments the uncertainties arise from the TOF measurement as well as the directional uncertainties due to collimation, finite sample and detector-element size that are present in any small-angle scattering instrument. Further, data from a TOF instrument must be mapped into Q space, and the strategy used to accomplish this affects the final resolution of the measurement. Thus for TOF-SANS instruments the question of resolution is more complicated than for instruments on monochromated sources. There is considerable flexibility in TOF data acquisition and Q mapping that can be utilized to optimize for intensity and Q resolution requirements of a particular measurement. In this work, present understanding of the effects of instrument geometry, TOF data acquisition and Q mapping strategies on the precision of the measurement is outlined. The goal is to establish guidelines on the best manner in which a particular measurement can be set up. Toward this end some new aspects are presented of optimal Q-mapping procedures, the effect of inelastic scattering on the measurement, and the calculation of instrument resolution functions. Some of these ideas are tested by comparison of simulations with measurement. (orig.)

  19. Study of the dependence of temporal resolution on activity for a Philips Gemini TF PET/CT scanner by applying a statistical analysis of time series

    International Nuclear Information System (INIS)

    Sanchez Merino, G.; Cortes Rpdicio, J.; Lope Lope, R.; Martin Gonzalez, T.; Garcia Fidalgo, M. A.

    2013-01-01

    The aim of the present work is to study the dependence of temporal resolution on activity, using statistical techniques applied to the time series of temporal-resolution values measured during daily equipment checks. (Author)

  20. The dependence of bar frequency on galaxy mass, colour, and gas content - and angular resolution - in the local universe

    Science.gov (United States)

    Erwin, Peter

    2018-03-01

    I use distance- and mass-limited subsamples of the Spitzer Survey of Stellar Structure in Galaxies (S4G) to investigate how the presence of bars in spiral galaxies depends on mass, colour, and gas content and whether large, Sloan Digital Sky Survey (SDSS)-based investigations of bar frequencies agree with local data. Bar frequency reaches a maximum of f_bar ≈ 0.70 at M⋆ ∼ 10^9.7 M⊙, declining to both lower and higher masses. It is roughly constant over a wide range of colours (g - r ≈ 0.1-0.8) and atomic gas fractions (log(M_HI/M⋆) ≈ -2.5 to 1). Bars are thus as common in blue, gas-rich galaxies as they are in red, gas-poor galaxies. This is in sharp contrast to many SDSS-based studies of z ∼ 0.01-0.1 galaxies, which report f_bar increasing strongly to higher masses (from M⋆ ∼ 10^10 to 10^11 M⊙), redder colours, and lower gas fractions. The contradiction can be explained if SDSS-based studies preferentially miss bars in, and underestimate the bar fraction for, lower mass (bluer, gas-rich) galaxies due to poor spatial resolution and the correlation between bar size and stellar mass. Simulations of SDSS-style observations using the S4G galaxies as a parent sample, and assuming that bars below a threshold angular size of twice the point spread function full width at half-maximum cannot be identified, successfully reproduce typical SDSS f_bar trends for stellar mass and gas mass ratio. Similar considerations may affect high-redshift studies, especially if bars grow in length over cosmic time; simulations suggest that high-redshift bar fractions may thus be systematically underestimated.
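
    The resolution-bias argument can be reproduced with a small Monte Carlo: assume a constant true bar fraction, a bar length that grows with stellar mass, and that bars smaller than twice the PSF FWHM are missed; the observed fraction then rises artificially with mass. All scaling relations and numbers below are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    log_mass = rng.uniform(9.0, 11.0, n)                    # log10 stellar mass
    has_bar = rng.random(n) < 0.6                           # constant true bar fraction (assumed)
    bar_kpc = 3.0 * 10 ** (0.4 * (log_mass - 10.0))         # assumed bar-size--mass relation
    dist_mpc = 400 * rng.random(n) ** (1 / 3)               # uniform in volume out to 400 Mpc
    bar_arcsec = bar_kpc / (dist_mpc * 1e3) * 206265        # small-angle approximation

    psf_fwhm = 1.4                                          # SDSS-like seeing, arcsec
    detected = has_bar & (bar_arcsec >= 2 * psf_fwhm)       # bars below 2x FWHM go unnoticed

    for lo in np.arange(9.0, 11.0, 0.5):
        sel = (log_mass >= lo) & (log_mass < lo + 0.5)
        print(f"logM {lo:.1f}-{lo + 0.5:.1f}: observed f_bar = {detected[sel].mean():.2f}")
    ```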

  1. ANL high resolution injector

    International Nuclear Information System (INIS)

    Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.

    1985-01-01

    The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs

  2. Faster Defect Resolution with Higher Technical Quality of Software

    NARCIS (Netherlands)

    Luijten, B.; Visser, J.

    2010-01-01

    We performed an empirical study of the relation between technical quality of software products and the defect resolution performance of their maintainers. In particular, we tested the hypothesis that ratings for source code maintainability, as employed by the SIG quality model, are correlated with

  3. Discrete ambiguity resolution and baryon-resonance parameter determination

    International Nuclear Information System (INIS)

    Chew, D.M; Urban, M.

    1978-04-01

    A partial-wave analysis was performed on elastic π+p data between 1400 and 2200 MeV, using principles of analyticity (to select and amalgamate data), causality and unitarity together with Barrelet zeros. The resonating waves between 1500 and 1800 MeV are examined in detail, and it is shown how a new resolution of the discrete ambiguity gives, for the S31 and D33 resonances, different parameters than those found in an earlier resolution using less accurate information. In either case, mass degeneracy of these resonances is observed, in agreement with general considerations regarding smooth zero trajectories. 18 references

  4. Student Selection and Admission to Higher Education: Policies and Practices in the Asian Region.

    Science.gov (United States)

    Harman, Grant

    1994-01-01

    This article describes higher education student selection and admission policies and practices in newly industrialized countries in the Asian region, with particular attention to access, selection, the admissions process, equity, and relationship with the labor market. Policies in India, Indonesia, Malaysia, People's Republic of China, Singapore,…

  5. MASH Suite: a user-friendly and versatile software interface for high-resolution mass spectrometry data interpretation and visualization.

    Science.gov (United States)

    Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying

    2014-03-01

    The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.

  6. Higher-order tensors in diffusion imaging

    NARCIS (Netherlands)

    Schultz, T.; Fuster, A.; Ghosh, A.; Deriche, R.; Florack, L.M.J.; Lim, L.H.; Westin, C.-F.; Vilanova, A.; Burgeth, B.

    2014-01-01

    Diffusion imaging is a noninvasive tool for probing the microstructure of fibrous nerve and muscle tissue. Higher-order tensors provide a powerful mathematical language to model and analyze the large and complex data that is generated by its modern variants such as High Angular Resolution Diffusion

  7. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    Science.gov (United States)

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough, such that the problem is not ill-posed anymore. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.

  8. Resolution effects and analysis of small-angle neutron scattering data

    DEFF Research Database (Denmark)

    Pedersen, J.S.

    1993-01-01

    A discussion of the instrumental smearing effects for small-angle neutron scattering (SANS) data sets is given. It is shown that these effects can be described by a resolution function, which describes the distribution of scattering vectors probed for the nominal values of the scattering vector...

  9. Mapping Impervious Surfaces Globally at 30m Resolution Using Landsat Global Land Survey Data

    Science.gov (United States)

    Brown de Colstoun, E.; Huang, C.; Wolfe, R. E.; Tan, B.; Tilton, J.; Smith, S.; Phillips, J.; Wang, P.; Ling, P.; Zhan, J.; Xu, X.; Taylor, M. P.

    2013-12-01

    Impervious surfaces, mainly artificial structures and roads, cover less than 1% of the world's land surface (1.3% over USA). Regardless of the relatively small coverage, impervious surfaces have a significant impact on the environment. They are the main source of the urban heat island effect, and affect not only the energy balance, but also hydrology and carbon cycling, and both land and aquatic ecosystem services. In the last several decades, the pace of converting natural land surface to impervious surfaces has increased. Quantitatively monitoring the growth of impervious surface expansion and associated urbanization has become a priority topic across both the physical and social sciences. The recent availability of consistent, global scale data sets at 30m resolution such as the Global Land Survey from the Landsat satellites provides an unprecedented opportunity to map global impervious cover and urbanization at this resolution for the first time, with unprecedented detail and accuracy. Moreover, the spatial resolution of Landsat is absolutely essential to accurately resolve urban targets such a buildings, roads and parking lots. With long term GLS data now available for the 1975, 1990, 2000, 2005 and 2010 time periods, the land cover/use changes due to urbanization can now be quantified at this spatial scale as well. In the Global Land Survey - Imperviousness Mapping Project (GLS-IMP), we are producing the first global 30 m spatial resolution impervious cover data set. We have processed the GLS 2010 data set to surface reflectance (8500+ TM and ETM+ scenes) and are using a supervised classification method using a regression tree to produce continental scale impervious cover data sets. A very large set of accurate training samples is the key to the supervised classifications and is being derived through the interpretation of high spatial resolution (~2 m or less) commercial satellite data (Quickbird and Worldview2) available to us through the unclassified

  10. Plastid: nucleotide-resolution analysis of next-generation sequencing and genomics data.

    Science.gov (United States)

    Dunn, Joshua G; Weissman, Jonathan S

    2016-11-22

    Next-generation sequencing (NGS) informs many biological questions with unprecedented depth and nucleotide resolution. These assays have created a need for analytical tools that enable users to manipulate data nucleotide-by-nucleotide robustly and easily. Furthermore, because many NGS assays encode information jointly within multiple properties of read alignments - for example, in ribosome profiling, the locations of ribosomes are jointly encoded in alignment coordinates and length - analytical tools are often required to extract the biological meaning from the alignments before analysis. Many assay-specific pipelines exist for this purpose, but there remains a need for user-friendly, generalized, nucleotide-resolution tools that are not limited to specific experimental regimes or analytical workflows. Plastid is a Python library designed specifically for nucleotide-resolution analysis of genomics and NGS data. As such, Plastid is designed to extract assay-specific information from read alignments while retaining generality and extensibility to novel NGS assays. Plastid represents NGS and other biological data as arrays of values associated with genomic or transcriptomic positions, and contains configurable tools to convert data from a variety of sources to such arrays. Plastid also includes numerous tools to manipulate even discontinuous genomic features, such as spliced transcripts, with nucleotide precision. Plastid automatically handles conversion between genomic and feature-centric coordinates, accounting for splicing and strand, freeing users of burdensome accounting. Finally, Plastid's data models use consistent and familiar biological idioms, enabling even beginners to develop sophisticated analytical workflows with minimal effort. Plastid is a versatile toolkit that has been used to analyze data from multiple NGS assays, including RNA-seq, ribosome profiling, and DMS-seq. It forms the genomic engine of our ORF annotation tool, ORF-RATER, and is readily

  11. Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution

    Science.gov (United States)

    Wild, O.; Prather, M. J.

    2005-12-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63 and T106 resolution is likewise monotonic but still indicates large errors at 120~km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.

  12. Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies

    Science.gov (United States)

    Perez Hoyos, Isabel Cristina

    The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability of an ecosystem to be groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. Research results suggest that although the prediction
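
    A hedged sketch of the water-table-depth modelling step, comparing a random forest with a single regression tree on a hypothetical table of well observations and gridded predictors; the file and column names are placeholders, not the study's inputs.

    ```python
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import cross_val_score

    wells = pd.read_csv("wtd_training.csv")                     # one row per observation well
    X = wells[["elevation", "slope", "aridity_index", "ndvi", "clay_fraction"]]
    y = wells["water_table_depth_m"]

    rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
    tree_r2 = cross_val_score(DecisionTreeRegressor(random_state=0), X, y, cv=10).mean()

    print("RF out-of-bag R^2:", rf.oob_score_)                  # proxy for true prediction error
    print("single tree CV R^2:", tree_r2)
    print(dict(zip(X.columns, rf.feature_importances_)))        # variable importance estimates
    ```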

  13. Statistical method evaluation for differentially methylated CpGs in base resolution next-generation DNA sequencing data.

    Science.gov (United States)

    Zhang, Yun; Baheti, Saurabh; Sun, Zhifu

    2018-05-01

    High-throughput bisulfite methylation sequencing such as reduced representation bisulfite sequencing (RRBS), Agilent SureSelect Human Methyl-Seq (Methyl-seq) or whole-genome bisulfite sequencing is commonly used for base-resolution methylome research. These data are represented either by the ratio of methylated cytosines versus total coverage at a CpG site or by the numbers of methylated and unmethylated cytosines. Multiple statistical methods can be used to detect differentially methylated CpGs (DMCs) between conditions, and these methods are often the basis for the next step of differentially methylated region identification. The ratio data have the flexibility of fitting many linear models, whereas the raw count data take coverage information into account. There is an array of options for DMC detection with each data type; however, it is not clear which statistical method is optimal. In this study, we systematically evaluated four statistical methods on methylation ratio data and four methods on count-based data and compared their performance with regard to type I error control, sensitivity and specificity of DMC detection, and computational resource demands, using real RRBS data along with simulation. Our results show that the ratio-based tests are generally more conservative (less sensitive) than the count-based tests. However, some count-based methods have high false-positive rates and should be avoided. The beta-binomial model gives a good balance between sensitivity and specificity and is the preferred method. Selection of methods in different settings, signal versus noise, and sample size estimation are also discussed.
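
    The abstract above singles out the beta-binomial model as a good balance between sensitivity and specificity for count-based DMC detection. The following sketch shows one minimal way such a test could be implemented as a per-CpG likelihood-ratio test with SciPy; it is a generic illustration on assumed toy counts, not the specific implementations evaluated in the paper.

```python
# A minimal sketch (not the paper's exact implementation) of a per-CpG
# beta-binomial likelihood-ratio test for differential methylation between
# two groups, given methylated counts and total coverages per sample.
import numpy as np
from scipy.stats import betabinom, chi2
from scipy.optimize import minimize

def _nll(params, meth, cov):
    """Negative log-likelihood with mean/precision parameterization."""
    mu, s = params
    a, b = mu * s, (1.0 - mu) * s
    return -betabinom.logpmf(meth, cov, a, b).sum()

def _fit(meth, cov):
    res = minimize(_nll, x0=[0.5, 10.0], args=(meth, cov),
                   bounds=[(1e-3, 1 - 1e-3), (1e-2, 1e4)], method="L-BFGS-B")
    return res.fun  # minimized negative log-likelihood

def betabinom_lrt(meth1, cov1, meth2, cov2):
    """Return the LRT p-value for group-specific vs common methylation level."""
    nll_null = _fit(np.concatenate([meth1, meth2]), np.concatenate([cov1, cov2]))
    nll_alt = _fit(meth1, cov1) + _fit(meth2, cov2)
    stat = 2.0 * (nll_null - nll_alt)
    return chi2.sf(stat, df=2)  # two extra free parameters in the alternative

# Toy example: 4 samples per condition at one CpG site.
cov1, meth1 = np.array([30, 25, 40, 35]), np.array([27, 22, 35, 30])  # ~85% methylated
cov2, meth2 = np.array([28, 33, 31, 26]), np.array([12, 15, 13, 10])  # ~45% methylated
print(f"p-value = {betabinom_lrt(meth1, cov1, meth2, cov2):.4g}")
```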

  14. Time-dependent amplitude analysis of $B^0 \to K^0_S \pi^+ \pi^-$

    Energy Technology Data Exchange (ETDEWEB)

    Aubert, B.

    2009-05-26

    In this paper we present results from a time-dependent amplitude analysis of the $B^0 \to K^0_S \pi^+ \pi^-$ decay. In Sec. II we describe the time-dependent DP formalism, and introduce the signal parameters that are extracted in the fit to data. In Sec. III we briefly describe the BABAR detector and the data set. In Sec. IV, we explain the selection requirements used to obtain the signal candidates and suppress backgrounds. In Sec. V we describe the fit method and the approach used to control experimental effects such as resolution. In Sec. VI we present the results of the fit, and extract parameters relevant to the contributing intermediate resonant states. In Sec. VII we discuss systematic uncertainties in the results, and finally we summarize the results in Sec. VIII.

  15. Using Historical Atlas Data to Develop High-Resolution Distribution Models of Freshwater Fishes.

    Directory of Open Access Journals (Sweden)

    Jian Huang

    Full Text Available Understanding the spatial pattern of species distributions is fundamental in biogeography and in conservation and resource management applications. Most species distribution models (SDMs) require or prefer species presence and absence data for adequate estimation of model parameters. However, observations with unreliable or unreported species absences dominate and limit the implementation of SDMs. Presence-only models generally yield less accurate predictions of species distribution and make it difficult to incorporate spatial autocorrelation. The availability of large amounts of historical presence records for freshwater fishes of the United States provides an opportunity for deriving reliable absences from data reported as presence-only, when sampling was predominantly community-based. In this study, we used boosted regression trees (BRT), logistic regression, and MaxEnt models to assess the performance of a historical metacommunity database with inferred absences for modeling fish distributions, and thereby investigated the effects of model choice and data properties. With models of the distribution of 76 native, non-game fish species of varied traits and rarity attributes in four river basins across the United States, we show that model accuracy depends on data quality (e.g., sample size, location precision), species' rarity, statistical modeling technique, and consideration of spatial autocorrelation. The cross-validation area under the receiver-operating-characteristic curve (AUC) tended to be high in the spatial presence-absence models at the highest level of resolution for species with large geographic ranges and small local populations. Prevalence affected training but not validation AUC. The key habitat predictors identified and the fish-habitat relationships evaluated through partial dependence plots corroborated most previous studies. The community-based SDM framework broadens our capability to model species distributions by innovatively
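
    As a concrete illustration of the kind of presence-absence modelling with cross-validated AUC described above, the sketch below fits a boosted-trees classifier (as an analogue of BRT) and a logistic regression on synthetic habitat predictors and reports their cross-validated AUC. Predictor names and data are placeholders, not the study's fish or habitat data.

```python
# Minimal sketch of presence-absence species distribution modelling with
# cross-validated AUC. Data and predictor names are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 800
# Hypothetical habitat predictors (e.g., stream temperature, slope, drainage area).
X = rng.normal(size=(n, 3))
logit = -0.5 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
presence = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

models = {
    "boosted trees": GradientBoostingClassifier(random_state=0),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, presence, cv=5, scoring="roc_auc")
    print(f"{name:20s} cross-validated AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```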

  16. Dependent failure analysis of NPP data bases

    International Nuclear Information System (INIS)

    Cooper, S.E.; Lofgren, E.V.; Samanta, P.K.; Wong Seemeng

    1993-01-01

    A technical approach for analyzing plant-specific data bases for vulnerabilities to dependent failures has been developed and applied. Since the focus of this work is to aid in the formulation of defenses to dependent failures, rather than to quantify dependent failure probabilities, the approach of this analysis is critically different. For instance, the determination of component failure dependencies has been based upon identical failure mechanisms related to component piecepart failures, rather than failure modes. Also, component failures involving all types of component function loss (e.g., catastrophic, degraded, incipient) are equally important to the predictive purposes of dependent failure defense development. Consequently, dependent component failures are identified with a different dependent failure definition which uses a component failure mechanism categorization scheme in this study. In this context, clusters of component failures which satisfy the revised dependent failure definition are termed common failure mechanism (CFM) events. Motor-operated valves (MOVs) in two nuclear power plant data bases have been analyzed with this approach. The analysis results include seven different failure mechanism categories; identified potential CFM events; an assessment of the risk-significance of the potential CFM events using existing probabilistic risk assessments (PRAs); and postulated defenses to the identified potential CFM events. (orig.)

  17. Identity and intimacy crises and their relationship to internet dependence among college students.

    Science.gov (United States)

    Huang, Ya-Rong

    2006-10-01

    In an attempt to test Kandell's proposition that internet dependents used the internet as a coping mechanism against underlying psychological issues, this study investigated the extent to which the fifth and sixth Eriksonian crises (identity, intimacy), were related to internet dependence (online chatting, gaming) among college students. Students spending more than 10 hours per week on chatting/gaming were classified as dependents. On the basis of a national sample of freshmen in Taiwan, this study found that the dependents scored significantly lower on most of the measures that reflected the successful resolution of the crises, and higher on the measures that reflected unsuccessful resolution of the crises. Kandell's proposition was supported.

  18. Resolution enhancement of slam using transverse wave

    International Nuclear Information System (INIS)

    Ko, Dae Sik; Moon, Gun; Kim, Young H.

    1997-01-01

    We studied the resolution enhancement of a novel scanning laser acoustic microscope (SLAM) using transverse waves. Mode conversion of the ultrasonic wave takes place at the liquid-solid interface, and some energy of the insonifying longitudinal waves in the water converts to transverse wave energy within the solid specimen. The resolution of SLAM depends on the size of the detecting laser spot and the wavelength of the insonifying ultrasonic waves. Since the wavelength of the transverse wave is shorter than that of the longitudinal wave, higher resolution can be achieved by using transverse waves. In order to operate SLAM in the transverse wave mode, we made a wedge to change the incident angle. Our experimental results with a model 2140 SLAM and an aluminum specimen showed higher contrast in the SLAM image in the transverse wave mode than in the longitudinal wave mode.

  19. Distracted and confused?: selective attention under load.

    Science.gov (United States)

    Lavie, Nilli

    2005-02-01

    The ability to remain focused on goal-relevant stimuli in the presence of potentially interfering distractors is crucial for any coherent cognitive function. However, simply instructing people to ignore goal-irrelevant stimuli is not sufficient for preventing their processing. Recent research reveals that distractor processing depends critically on the level and type of load involved in the processing of goal-relevant information. Whereas high perceptual load can eliminate distractor processing, high load on "frontal" cognitive control processes increases distractor processing. These findings provide a resolution to the long-standing early and late selection debate within a load theory of attention that accommodates behavioural and neuroimaging data within a framework that integrates attention research with executive function.

  20. Advances in Global Adjoint Tomography -- Massive Data Assimilation

    Science.gov (United States)

    Ruan, Y.; Lei, W.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Krischer, L.; Tromp, J.

    2015-12-01

    Azimuthal anisotropy and anelasticity are key to understanding a myriad of processes in Earth's interior. Resolving these properties requires accurate simulations of seismic wave propagation in complex 3-D Earth models and an iterative inversion strategy. In the wake of successes in regional studies (e.g., Chen et al., 2007; Tape et al., 2009, 2010; Fichtner et al., 2009, 2010; Chen et al., 2010; Zhu et al., 2012, 2013; Chen et al., 2015), we are employing adjoint tomography based on a spectral-element method (Komatitsch & Tromp 1999, 2002) on a global scale using the supercomputer "Titan" at Oak Ridge National Laboratory. After 15 iterations, we have obtained a high-resolution transversely isotropic Earth model (M15) using traveltime data from 253 earthquakes. To obtain higher resolution images of the emerging new features and to prepare the inversion for azimuthal anisotropy and anelasticity, we expanded the original dataset with approximately 4,220 additional global earthquakes (Mw 5.5-7.0) occurring between 1995 and 2014, and downloaded 300-minute-long time series for all available data archived at the IRIS Data Management Center, ORFEUS, and F-net. Ocean Bottom Seismograph data from the last decade are also included to maximize data coverage. In order to handle the huge dataset and solve the I/O bottleneck in global adjoint tomography, we implemented a Python-based parallel data processing workflow based on the newly developed Adaptable Seismic Data Format (ASDF). With the help of the data selection tool MUSTANG developed by IRIS, we cleaned our dataset and assembled event-based ASDF files for parallel processing. We have started Centroid Moment Tensor (CMT) inversions for all 4,220 earthquakes with the latest model M15, and selected high-quality data for measurement. We will statistically investigate each channel using synthetic seismograms calculated in M15 for updated CMTs and identify problematic channels. In addition to data screening, we also modified

  1. Combining photorealistic immersive geovisualization and high-resolution geospatial data to enhance human-scale viewshed modelling

    Science.gov (United States)

    Tabrizian, P.; Petrasova, A.; Baran, P.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.

    2017-12-01

    Viewshed modelling, the process of defining, parsing, and analyzing the structure of landscape visual space within GIS, has been commonly used in applications ranging from landscape planning and ecosystem services assessment to geography and archaeology. However, less effort has been made to understand whether and to what extent these objective analyses predict the actual on-the-ground perception of a human observer. Moreover, viewshed modelling at the human-scale level requires the incorporation of fine-grained landscape structure (e.g., vegetation) and patterns (e.g., land cover) that are typically omitted from visibility calculations or unrealistically simulated, leading to significant error in predicting visual attributes. This poster illustrates how photorealistic Immersive Virtual Environments and high-resolution geospatial data can be used to integrate objective and subjective assessments of visual characteristics at the human-scale level. We performed viewshed modelling for a systematically sampled set of viewpoints (N=340) across an urban park using open-source GIS (GRASS GIS). For each point a binary viewshed was computed on a 3D surface model derived from high-density leaf-off LIDAR (QL2) points. The viewshed map was combined with high-resolution land cover (0.5 m) derived through fusion of orthoimagery, lidar vegetation, and vector data. Geostatistics and landscape structure analysis were performed to compute topological and compositional metrics for visual scale (e.g., openness), complexity (pattern, shape and object diversity), and naturalness. Based on the viewshed model output, a sample of 24 viewpoints representing the variation of visual characteristics was selected and geolocated. For each location, 360° imagery was captured using a DSLR camera mounted on a GigaPan robot. We programmed a virtual reality application through which human subjects (N=100) immersively experienced a random representation of selected environments via a head-mounted display (Oculus Rift CV1), and
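
    The workflow above computes binary viewsheds for sampled viewpoints with GRASS GIS. A hedged sketch of that step through the GRASS Python scripting API is shown below; the map names, coordinates, and observer height are placeholders, and the exact module options used in the study are not reported in the abstract.

```python
# A hedged sketch of computing a binary viewshed for one sample point with the
# GRASS GIS Python API, roughly analogous to the workflow described above.
# Map names, viewpoint coordinates, and observer height are placeholders.
import grass.script as gs

def binary_viewshed(surface="lidar_dsm", x=571500.0, y=3965200.0,
                    observer_height=1.6, out="viewshed_pt001"):
    """Run r.viewshed in boolean mode (-b): 1 = visible, 0 = not visible."""
    gs.run_command(
        "r.viewshed",
        input=surface,              # 3D surface model, e.g. derived from leaf-off lidar
        output=out,
        coordinates=(x, y),
        observer_elevation=observer_height,
        flags="b",
        overwrite=True,
    )
    # Summarise visible area (cell count) for later landscape-metric analysis.
    stats = gs.parse_command("r.univar", map=out, flags="g")
    return int(float(stats["sum"]))  # number of visible cells in the boolean map

if __name__ == "__main__":
    print("visible cells:", binary_viewshed())
```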

  2. Do the Available Data Permit Clarification of the Possible Dependence of Palaeozoic Brachiopod Generic Diversity Dynamics on Global Sea-Level Changes? A Viewpoint

    Directory of Open Access Journals (Sweden)

    Ruban Dmitry A.

    2014-10-01

    Full Text Available At a glance, progress in palaeontology and eustatic reconstructions in the past decade makes it possible to prove or disprove the possible dependence of Palaeozoic brachiopod generic diversity dynamics on global sea-level changes. However, the available diversity curve is of much lower resolution than the eustatic curve. This problem can be resolved by decreasing the resolution of the latter. A further restriction, linked to the chronostratigraphical incompatibility of the available data, limits the focus to the Middle Palaeozoic only. A series of mass extinctions and other biotic crises in the Silurian-Devonian does not allow a correct interpretation of the results of a direct comparison of the brachiopod generic diversity dynamics with global sea-level changes. With the available data, it is only possible to hypothesize that eustatic control did not play a major part in the diversity dynamics of Middle Palaeozoic brachiopods. The resolution of the stratigraphic ranges of Palaeozoic brachiopods should be increased significantly, and these ranges should be plotted against the most up-to-date geologic time scale. Until this task is achieved, it is impossible to judge whether any dependence (either full or partial) of the Palaeozoic brachiopod diversity dynamics on global sea-level changes exists.

  3. A multi-channel data acquisition system with high resolution based on microcomputer

    International Nuclear Information System (INIS)

    An Qi; Wang Yanfang; Xing Tao

    1995-01-01

    The paper introduces the principle of a multi-channel, high-resolution data acquisition system based on a microcomputer. The system consists of five parts: an analog-to-digital converter, a data buffer area, a trigger logic circuit, a control circuit, and a digital-to-analog converter.

  4. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann

    2005-01-01

    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas...... of the data for high-resolution studies such as annual layer counting. The presented method uses deconvolution techniques and is robust to the presence of noise in the measurements. If integrated into the data processing, it requires no additional data collection. The method is applied to selected ice core...
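
    The abstract does not spell out the deconvolution algorithm, so the sketch below illustrates the general idea with a generic Wiener deconvolution of a smoothed, noisy 1-D depth series, assuming the CFA mixing can be approximated by a known Gaussian kernel; it is not the authors' method.

```python
# A generic illustration (not the authors' exact algorithm) of restoring
# resolution in a 1-D depth series by Wiener deconvolution, assuming the CFA
# smoothing can be approximated by a known Gaussian mixing kernel.
import numpy as np

def gaussian_kernel(width_samples, n):
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / width_samples) ** 2)
    return np.fft.ifftshift(k / k.sum())

def wiener_deconvolve(signal, kernel, noise_to_signal=1e-2):
    """Frequency-domain Wiener filter; robust to measurement noise."""
    H = np.fft.rfft(kernel)
    S = np.fft.rfft(signal)
    restored = np.conj(H) * S / (np.abs(H) ** 2 + noise_to_signal)
    return np.fft.irfft(restored, n=signal.size)

# Synthetic annual-layer-like record, smoothed and slightly noisy.
n = 512
true = 1.0 + np.sin(2 * np.pi * np.arange(n) / 16)           # ~annual cycles
kernel = gaussian_kernel(width_samples=4, n=n)                # assumed mixing length
measured = np.fft.irfft(np.fft.rfft(true) * np.fft.rfft(kernel), n=n)
measured += np.random.default_rng(0).normal(scale=0.02, size=n)
restored = wiener_deconvolve(measured, kernel)
print("RMS error before:", np.sqrt(np.mean((measured - true) ** 2)))
print("RMS error after: ", np.sqrt(np.mean((restored - true) ** 2)))
```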

  5. Empirical-statistical downscaling of reanalysis data to high-resolution air temperature and specific humidity above a glacier surface (Cordillera Blanca, Peru)

    Science.gov (United States)

    Hofer, Marlis; MöLg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-06-01

    Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.
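
    The ESD procedure described above combines empirical orthogonal function (EOF) analysis of the reanalysis predictor fields with multiple regression and cross-validation. The sketch below shows that chain in a minimal form (PCA as the EOF step, ordinary least squares as the regression, simple k-fold skill estimation); the data, the number of retained EOFs, and the skill score are illustrative assumptions rather than the authors' settings.

```python
# A minimal sketch of the ESD idea: reduce a reanalysis predictor field to its
# leading EOFs (principal components) and regress the station series on them,
# evaluating skill by cross-validation. Data and settings are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_time, n_grid = 600, 144            # e.g. 6-hourly samples x reanalysis grid points
field = rng.normal(size=(n_time, n_grid))
station_temp = 0.8 * field[:, 10] - 0.4 * field[:, 55] + rng.normal(scale=0.7, size=n_time)

pca = PCA(n_components=5)            # retain the leading EOFs of the predictor field
pcs = pca.fit_transform(field)

model = LinearRegression()
skill = cross_val_score(model, pcs, station_temp, cv=5, scoring="r2")
print(f"cross-validated skill (r^2): {skill.mean():.2f}")
```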

  6. Integrating heterogeneous earth observation data for assessment of high-resolution inundation boundaries generated during flood emergencies.

    Science.gov (United States)

    Sava, E.; Cervone, G.; Kalyanapu, A. J.; Sampson, K. M.

    2017-12-01

    The increasing trend in flooding events, paired with rapid urbanization and an aging infrastructure, is projected to enhance the risk of catastrophic losses and increase the frequency of both flash and large-area floods. During such events, it is critical for decision makers and emergency responders to have access to timely, actionable knowledge regarding preparedness, emergency response, and recovery before, during, and after a disaster. Large volumes of data derived from sophisticated sensors, mobile phones, and social media feeds are increasingly being used to improve citizen services and provide clues to the best way to respond to emergencies through the use of visualization and GIS mapping. Such data, coupled with recent advancements in techniques for fusing remote sensing with near-real-time heterogeneous datasets, have allowed decision makers to more efficiently extract precise and relevant knowledge and to better understand how damage caused by disasters has real-time effects on urban populations. This research assesses the feasibility of integrating multiple sources of contributed data into hydrodynamic models for flood inundation simulation and damage assessment. It integrates multiple sources of high-resolution physiographic data, such as satellite remote sensing imagery, coupled with non-authoritative data such as Civil Air Patrol (CAP) and during-event social media observations of flood inundation in order to improve flood mapping. The goal is to augment remote sensing imagery with new open-source datasets to generate flood extent maps at higher temporal and spatial resolution. The proposed methodology is applied to two test cases, relating to the 2013 Boulder, Colorado flood and the 2015 floods in Texas.

  7. Recording the LHCb data and software dependencies

    Science.gov (United States)

    Trisovic, Ana; Couturier, Ben; Gibson, Val; Jones, Chris

    2017-10-01

    In recent years awareness of the importance of preserving the experimental data and scientific software at CERN has been rising. To support this effort, we are presenting a novel approach to structure dependencies of the LHCb data and software to make it more accessible in the long-term future. In this paper, we detail the implementation of a graph database of these dependencies. We list the implications that can be deduced from the graph mining (such as a search for the legacy software), with emphasis on data preservation. Furthermore, we introduce a methodology of recreating the LHCb data, thus supporting reproducible research and data stewardship. Finally, we describe how this information is made available to the users on a web portal that promotes data and analysis preservation and good practise with analysis documentation.

  8. A Radar Climatology for Germany - a 16-year high resolution precipitation data and its possibilities

    Science.gov (United States)

    Walawender, Ewelina; Winterrath, Tanja; Brendel, Christoph; Hafer, Mario; Junghänel, Thomas; Klameth, Anna; Weigl, Elmar; Becker, Andreas

    2017-04-01

    range of spatial analyses: from country to city scale. Multiple events can be investigated in detail, depending on user needs, as the temporal resolution ranges from 15 years down to 1 hour. Apart from standard products such as the precipitation sum, the radar climatology will also provide derivatives, e.g., extreme precipitation characteristics and a rain erosivity potential (R factor) map. Employing GIS functionalities in the Radar Climatology dataset has made it universal and interoperable, suitable for integration with a wide range of other geodata formats or services. It can also be treated as an input layer for further analyses that demand spatially continuous precipitation data and for building more integrated products tailored to user needs. One of the most important applications may be the use of the Radar Climatology data as a key factor in risk assessment and in developing strategies for risk management in urban planning, hydrology, agriculture, etc.

  9. Spatial Resolution Assessment of the Telops Airborne TIR Imagery

    Science.gov (United States)

    Mousakhani, S.; Eslami, M.; Saadatseresht, M.

    2017-09-01

    Achieving high spatial resolution with thermal infrared (TIR) sensors is a challenge in remote sensing applications. Airborne high-spatial-resolution TIR is a novel source of data that has become available only recently, and recent developments in the spatial resolution of TIR sensors have been an interesting topic for scientists. TIR sensors are very sensitive to the energy emitted from objects. Past research has shown that increasing the spatial resolution of an airborne image decreases the spectral content of the data and reduces the signal-to-noise ratio (SNR). Therefore, in this paper a comprehensive assessment is adopted to estimate an appropriate spatial resolution for the TIR data (TELOPS TIR data) in consideration of the SNR. Firstly, a low-pass filter is applied to the TIR data and the resulting products are fed to a classification method to analyse the change in accuracy. The results show that there is no significant change in classification accuracy when a low-pass filter is applied. Furthermore, the appropriate spatial resolution of the TIR data is estimated with the aim of obtaining higher spectral content and SNR. For this purpose, different resolutions of the TIR data are created and fed separately to a maximum likelihood classification method. The results illustrate that when images with a ground pixel size four times larger than that of the original image are used, the classification accuracy is not reduced; moreover, the SNR and spectral content are improved, although corner sharpness declines.
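
    The resolution-degradation step described above (low-pass filtering and resampling to coarser ground pixel sizes, then checking the effect on noise) can be illustrated as follows; the synthetic scene and the SNR proxy used here are assumptions for demonstration only.

```python
# A generic sketch of the resolution-degradation step: apply a low-pass
# (Gaussian) filter and aggregate a TIR-like band to coarser ground pixel
# sizes, then compare a simple SNR proxy at each resolution. The data and the
# SNR definition here are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import downscale_local_mean

rng = np.random.default_rng(6)
scene = np.kron(rng.uniform(280, 320, size=(16, 16)), np.ones((32, 32)))  # blocky "thermal" scene
tir = scene + rng.normal(scale=1.5, size=scene.shape)                      # sensor noise

for factor in (1, 2, 4, 8):
    smoothed = gaussian_filter(tir, sigma=factor / 2.0)
    coarse = downscale_local_mean(smoothed, (factor, factor))
    true_coarse = downscale_local_mean(scene, (factor, factor))
    noise = coarse - true_coarse
    snr = 10 * np.log10(true_coarse.var() / noise.var())
    print(f"pixel size x{factor}: SNR proxy = {snr:.1f} dB")
```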

  10. Mechanistic insights into selective killing of OXPHOS-dependent cancer cells by arctigenin.

    Science.gov (United States)

    Brecht, Karin; Riebel, Virginie; Couttet, Philippe; Paech, Franziska; Wolf, Armin; Chibout, Salah-Dine; Pognan, Francois; Krähenbühl, Stephan; Uteng, Marianne

    2017-04-01

    Arctigenin has previously been identified as a potential anti-tumor treatment for advanced pancreatic cancer. However, the mechanism of how arctigenin kills cancer cells is not fully understood. In the present work we studied the mechanism of toxicity by arctigenin in the human pancreatic cell line, Panc-1, with special emphasis on the mitochondria. A comparison of Panc-1 cells cultured in glucose versus galactose medium was applied, allowing assessments of effects in glycolytic versus oxidative phosphorylation (OXPHOS)-dependent Panc-1 cells. For control purposes, the mitochondrial toxic response to treatment with arctigenin was compared to the anti-cancer drug, sorafenib, which is a tyrosine kinase inhibitor known for mitochondrial toxic off-target effects (Will et al., 2008). In both Panc-1 OXPHOS-dependent and glycolytic cells, arctigenin dissipated the mitochondrial membrane potential, which was demonstrated to be due to inhibition of the mitochondrial complexes II and IV. However, arctigenin selectively killed only the OXPHOS-dependent Panc-1 cells. This selective killing of OXPHOS-dependent Panc-1 cells was accompanied by generation of ER stress, mitochondrial membrane permeabilization and caspase activation leading to apoptosis and aponecrosis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Multilevel Cross-Dependent Binary Longitudinal Data

    KAUST Repository

    Serban, Nicoleta

    2013-10-16

    We provide insights into new methodology for the analysis of multilevel binary data observed longitudinally, when the repeated longitudinal measurements are correlated. The proposed model is logistic functional regression conditioned on three latent processes describing the within- and between-variability, and describing the cross-dependence of the repeated longitudinal measurements. We estimate the model components without employing mixed-effects modeling but assuming an approximation to the logistic link function. The primary objectives of this article are to highlight the challenges in the estimation of the model components, to compare two approximations to the logistic regression function, linear and exponential, and to discuss their advantages and limitations. The linear approximation is computationally efficient whereas the exponential approximation applies for rare events functional data. Our methods are inspired by and applied to a scientific experiment on spectral backscatter from long range infrared light detection and ranging (LIDAR) data. The models are general and relevant to many new binary functional data sets, with or without dependence between repeated functional measurements.

  12. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    International Nuclear Information System (INIS)

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen

    2016-01-01

    Band selection is considered an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal-relevance minimal-redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forest (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvements in band selection and classification accuracy. (paper)
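
    To make the forward greedy MRMR search concrete, the sketch below selects bands using the "difference" form of the criterion (relevance minus average redundancy). It uses generic mutual-information estimators from scikit-learn rather than the neighborhood mutual information defined in the paper, so it is an approximation of the idea, not the published algorithm.

```python
# A sketch of forward greedy MRMR band selection using the "difference"
# criterion (relevance minus redundancy), with generic mutual-information
# estimators standing in for the paper's neighborhood mutual information.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, n_bands, random_state=0):
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=random_state)
    selected, remaining = [], list(range(n_features))
    while len(selected) < n_bands and remaining:
        best, best_score = None, -np.inf
        for j in remaining:
            if selected:
                redundancy = np.mean([
                    mutual_info_regression(X[:, [j]], X[:, k],
                                           random_state=random_state)[0]
                    for k in selected
                ])
            else:
                redundancy = 0.0
            score = relevance[j] - redundancy   # MRMR "difference" form
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy hyperspectral-like data: 200 samples, 30 bands, 3 classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))
y = (X[:, 4] + 0.5 * X[:, 12] > 0).astype(int) + (X[:, 20] > 1).astype(int)
print("selected bands:", mrmr_select(X, y, n_bands=5))
```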

  13. Restoration and Super-Resolution of Diffraction-Limited Imagery Data by Bayesian and Set-Theoretic Approaches

    National Research Council Canada - National Science Library

    Sundareshan, Malur

    2001-01-01

    This project was primarily aimed at the design of novel algorithms for the restoration and super-resolution processing of imagery data to improve the resolution in images acquired from practical sensing operations...

  14. High resolution remote sensing for reducing uncertainties in urban forest carbon offset life cycle assessments.

    Science.gov (United States)

    Tigges, Jan; Lakes, Tobia

    2017-10-04

    Urban forests reduce greenhouse gas emissions by storing and sequestering considerable amounts of carbon. However, few studies have considered the local scale of urban forests to effectively evaluate their potential long-term carbon offset. The lack of precise, consistent and up-to-date forest details is challenging for long-term prognoses. Therefore, this review aims to identify uncertainties in urban forest carbon offset assessment and discuss the extent to which such uncertainties can be reduced by recent progress in high resolution remote sensing. We do this by performing an extensive literature review and a case study combining remote sensing and life cycle assessment of urban forest carbon offset in Berlin, Germany. Recent progress in high resolution remote sensing and methods is adequate for delivering more precise details on the urban tree canopy, individual tree metrics, species, and age structures compared to conventional land use/cover class approaches. These area-wide consistent details can update life cycle inventories for more precise future prognoses. Additional improvements in classification accuracy can be achieved by a higher number of features derived from remote sensing data of increasing resolution, but first studies on this subject indicated that a smart selection of features already provides sufficient data that avoids redundancies and enables more efficient data processing. Our case study from Berlin could use remotely sensed individual tree species as consistent inventory of a life cycle assessment. However, a lack of growth, mortality and planting data forced us to make assumptions, therefore creating uncertainty in the long-term prognoses. Regarding temporal changes and reliable long-term estimates, more attention is required to detect changes of gradual growth, pruning and abrupt changes in tree planting and mortality. As such, precise long-term urban ecological monitoring using high resolution remote sensing should be intensified

  15. Density dependence triggers runaway selection of reduced senescence.

    Directory of Open Access Journals (Sweden)

    Robert M Seymour

    2007-12-01

    Full Text Available In the presence of exogenous mortality risks, future reproduction by an individual is worth less than present reproduction to its fitness. Senescent aging thus results inevitably from transferring net fertility into younger ages. Some long-lived organisms appear to defy theory, however, presenting negligible senescence (e.g., hydra) and extended lifespans (e.g., Bristlecone Pine). Here, we investigate the possibility that the onset of vitality loss can be delayed indefinitely, even accepting the abundant evidence that reproduction is intrinsically costly to survival. For an environment with constant hazard, we establish that natural selection itself contributes to increasing density-dependent recruitment losses. We then develop a generalized model of accelerating vitality loss for analyzing fitness optima as a tradeoff between compression and spread in the age profile of net fertility. Across a realistic spectrum of senescent age profiles, density regulation of recruitment can trigger runaway selection for ever-reducing senescence. This novel prediction applies without requirement for special life-history characteristics such as indeterminate somatic growth or increasing fecundity with age. The evolution of nonsenescence from senescence is robust to the presence of exogenous adult mortality, which tends instead to increase the age-independent component of vitality loss. We simulate examples of runaway selection leading to negligible senescence and even intrinsic immortality.

  16. Measuring Teaching Quality in Higher Education: Assessing Selection Bias in Course Evaluations

    Science.gov (United States)

    Goos, Maarten; Salomons, Anna

    2017-01-01

    Student evaluations of teaching (SETs) are widely used to measure teaching quality in higher education and compare it across different courses, teachers, departments and institutions. Indeed, SETs are of increasing importance for teacher promotion decisions, student course selection, as well as for auditing practices demonstrating institutional…

  17. Analyzing Snowpack Metrics Over Large Spatial Extents Using Calibrated, Enhanced-Resolution Brightness Temperature Data and Long Short Term Memory Artificial Neural Networks

    Science.gov (United States)

    Norris, W.; J Q Farmer, C.

    2017-12-01

    Snow water equivalence (SWE) is a difficult metric to measure accurately over large spatial extents; SNOTEL sites are too localized, and traditional remotely sensed brightness temperature data are at too coarse a resolution to capture variation. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) data from the National Snow and Ice Data Center (NSIDC) offer remotely sensed brightness temperature data at an enhanced resolution of 3.125 km versus the original 25 km, which allows large spatial extents to be analyzed with reduced uncertainty compared to the 25 km product. The 25 km brightness temperature data have proved useful in past research: one group found decreasing trends in SWE outweighed increasing trends three to one in North America, and other researchers used the data to incorporate winter conditions, such as snow cover, into ecological zoning criteria. With the new 3.125 km data, it is possible to derive more accurate metrics for SWE, since far more spatial variability is captured in the measurements. Even with higher resolution data, using the 37 and 19 GHz frequencies to estimate SWE distorts the data during times of melt onset and accumulation onset. Past researchers employed statistical splines, while other successful attempts utilized non-parametric curve fitting to smooth out the spikes distorting metrics. In this work, rather than using legacy curve fitting techniques, a Long Short Term Memory (LSTM) Artificial Neural Network (ANN) was trained to perform curve fitting on the data. LSTM ANNs have shown great promise in modeling time series data, and with almost 40 years of data available (14,235 days) there is plenty of training data for the ANN. LSTMs are well suited to this type of time series analysis because they allow important trends to persist for long periods of time but ignore short-term fluctuations; since LSTMs have poor mid- to short-term memory, they are ideal for smoothing out the large spikes generated in the melt
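
    A minimal sketch of using an LSTM for this kind of curve fitting is given below: a small network is trained to map a spiky, noisy SWE-like series onto a smooth seasonal target. The architecture, window length, and synthetic data are assumptions for illustration and do not reproduce the authors' setup.

```python
# A hedged sketch (not the authors' architecture) of using an LSTM to smooth a
# spiky SWE-like time series: the network maps a noisy, spiky input onto a
# slowly varying seasonal target.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: a smooth seasonal SWE-like curve plus sporadic melt-onset spikes.
t = torch.linspace(0, 8 * 3.14159, 2000)
smooth = torch.relu(torch.sin(t)) * 100.0                          # target series
spiky = smooth + 40.0 * (torch.rand_like(t) < 0.02).float() * torch.randn_like(t)
spiky = spiky + 2.0 * torch.randn_like(t)                          # sensor noise

def make_windows(x, y, width=60):
    xs = x.unfold(0, width, 1).unsqueeze(-1)                       # (N, width, 1)
    ys = y[width - 1:].unsqueeze(-1)                               # value at window end
    return xs, ys

X, Y = make_windows(spiky, smooth)

class LSTMSmoother(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])                            # prediction at window end

model = LSTMSmoother()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.2f}")
```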

  18. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  19. THE ANALYSIS OF HIGH SCHOOL STUDENTS`BEHAVIOUR IN THE SELECTION OF HIGHER EDUCATION INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Irina SUSANU

    2014-06-01

    Full Text Available The paper examines the Romanian education system, focusing on the most important aspects of education marketing and marketing research. A survey instrument was designed to investigate high school students' behavior in selecting a higher education institution. The results show that the Romanian education system has some drawbacks, the most important being the weak implementation of marketing in education institutions. Therefore, the purpose of marketing research is to establish a connection between the public to which education services are dedicated and the information needed to select a higher education institution.

  20. Resolution of the neutron diffractometer of the Mexican Nuclear Center; Resolucion del difractometro de neutrones del Centro Nuclear de Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Macias B, L.R. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico); Garcia C, R.M. [Administracion Central de Laboratorio y Servicios Cientificos, Legaria 608, Col. Irrigacion, 11500 Mexico D.F. (Mexico); Ita T, A. De [UAM-A, San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico)

    2003-07-01

    The neutron diffractometer has three collimators and a monochromator, on which its resolution depends, and there is a trade-off between the resolution of the diffractometer and its intensity: if higher resolution is sought, the intensity diminishes, and if only a small volume of material is available, the diffracted intensity is also reduced. The selection of the collimator values is therefore important for obtaining a unique value of the resolution of the diffractometer. (Author)

  1. Operationalising UN security council resolution 1540: an overview of select practical activities in the chemical and biological weapon-related areas

    International Nuclear Information System (INIS)

    Hart, J.

    2009-01-01

    The UN member states are continuing to take measures to inter alia establish and effectively implement controls to prevent the proliferation of nuclear, biological and chemical weapons and their means of delivery in accordance with United Nations Security Council Resolution 1540 (2004). The resolution also encourages enhanced international cooperation on such efforts, including by working through the 1540 Committee. Most analyses on the implementation of the resolution have focused on nuclear issues. This presentation provides an overview of select practical activities in the chemical and biological weapon-related areas, including chemical product classification and identification, biosafety and biosecurity practices and criminal prosecutions for unauthorised chemical transfers.(author)

  2. Boundaries of mass resolution in native mass spectrometry.

    Science.gov (United States)

    Lössl, Philip; Snijder, Joost; Heck, Albert J R

    2014-06-01

    Over the last two decades, native mass spectrometry (MS) has emerged as a valuable tool to study intact proteins and noncovalent protein complexes. Studied experimental systems range from small-molecule (drug)-protein interactions, to nanomachineries such as the proteasome and ribosome, to even virus assembly. In native MS, ions attain high m/z values, requiring special mass analyzers for their detection. Depending on the particular mass analyzer used, instrumental mass resolution does often decrease at higher m/z but can still be above a couple of thousand at m/z 5000. However, the mass resolving power obtained on charge states of protein complexes in this m/z region is experimentally found to remain well below the inherent instrument resolution of the mass analyzers employed. Here, we inquire into reasons for this discrepancy and ask how native MS would benefit from higher instrumental mass resolution. To answer this question, we discuss advantages and shortcomings of mass analyzers used to study intact biomolecules and biomolecular complexes in their native state, and we review which other factors determine mass resolving power in native MS analyses. Recent examples from the literature are given to illustrate the current status and limitations.

  3. Combining high resolution water use data from smart meters with remote sensing and geospatial datasets to investigate outdoor water demand and greenness changes during drought

    Science.gov (United States)

    Quesnel, K.; Ajami, N.; Urata, J.; Marx, A.

    2017-12-01

    Infrastructure modernization, information technology, and the internet of things are impacting urban water use. Advanced metering infrastructure (AMI), also known as smart meters, is one forthcoming technology that holds the potential to fundamentally shift the way customers use water and utilities manage their water resources. Broadly defined, AMI is a system and process used to measure, communicate, and analyze water use data at high resolution intervals at the customer or sub-customer level. There are many promising benefits of AMI systems, but there are also many challenges; consequently, AMI in the water sector is still in its infancy. In this study we provide insights into this emerging technology by taking advantage of the higher temporal and spatial resolution of water use data provided by these systems. We couple daily water use observations from AMI with monthly and bimonthly billing records to investigate water use trends, patterns, and drivers using a case study of the City of Redwood City, CA from 2007 through 2016. We look across sectors, with a particular focus on water use for urban irrigation. Almost half of Redwood City's irrigation accounts use recycled water, and we take this unique opportunity to investigate if the behavioral response for recycled water follows the water and energy efficiency paradox in which customers who have upgraded to more efficient devices end up using more of the commodity. We model potable and recycled water demand using geospatially explicit climate, demographic, and economic factors to gain insight into various water use drivers. Additionally, we use high resolution remote sensing data from the National Agricultural Imaging Program (NAIP) to observe how changes in greenness and impervious surface are related to water use. Using a series of statistical and unsupervised machine learning techniques, we find that water use has changed dramatically over the past decade corresponding to varying climatic regimes and drought

  4. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
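
    The abstract does not give the exact form of the improved weighted mean-variance method, so the sketch below uses a generic stand-in: segment a band at several candidate scales and track the area-weighted mean of within-segment variance, which is the quantity such scale-selection heuristics are typically built on. The segmentation algorithm, scales, and data are assumptions.

```python
# A generic stand-in (not the paper's exact "improved weighted mean-variance"
# formulation) for choosing a segmentation scale: segment a band at several
# candidate scales and compute the area-weighted mean of within-segment variance.
import numpy as np
from skimage.segmentation import felzenszwalb

def weighted_mean_variance(image, labels):
    """Area-weighted mean of the per-segment variance of pixel values."""
    total, weighted_sum = image.size, 0.0
    for seg_id in np.unique(labels):
        vals = image[labels == seg_id]
        weighted_sum += vals.size * vals.var()
    return weighted_sum / total

rng = np.random.default_rng(4)
# Toy single-band "image" with two homogeneous regions plus noise.
img = np.zeros((128, 128))
img[:, 64:] = 1.0
img += rng.normal(scale=0.1, size=img.shape)

for scale in (10, 50, 100, 300, 800):
    labels = felzenszwalb(img, scale=scale, sigma=0.5, min_size=20)
    wmv = weighted_mean_variance(img, labels)
    print(f"scale={scale:4d}  segments={labels.max() + 1:5d}  weighted variance={wmv:.4f}")
```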

  5. Development of a highly selective muon trigger exploiting the high spatial resolution of monitored drift-tube chambers for the ATLAS experiment at the HL-LHC

    CERN Document Server

    Kortner, Oliver; The ATLAS collaboration

    2018-01-01

    The High-Luminosity LHC will provide the unique opportunity to explore the nature of physics beyond the Standard Model. Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed the LHC design instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum muons, selected due to the moderate momentum resolution of the current system. This first level trigger limitation can be overcome by including data from the precision muon drift tube (MDT) chambers. This requires the fast continuous transfer of the MDT hits to the off-detector trigger logic and a fast track reconstruction algorithm performed in the trigger logic. The feasibility of this approach was studied with LHC collision data and simulated data. Two main options for the hardware implementation will be studied with demonstrators: an FPGA based option with an embedded ARM microprocessor ...

  6. Development of a Highly Selective Muon Trigger Exploiting the High Spatial Resolution of Monitored Drift-Tube Chambers for the ATLAS Experiment at the HL-LHC

    CERN Document Server

    Kortner, Oliver; The ATLAS collaboration

    2018-01-01

    The High-Luminosity LHC will provide the unique opportunity to explore the nature of physics beyond the Standard Model. Highly selective first level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed the LHC design instantaneous luminosity by almost an order of magnitude. The ATLAS first level muon trigger rate is dominated by low momentum muons, selected due to the moderate momentum resolution of the current system. This first level trigger limitation can be overcome by including data from the precision muon drift tube (MDT) chambers. This requires the fast continuous transfer of the MDT hits to the off-detector trigger logic and a fast track reconstruction algorithm performed in the trigger logic. The feasibility of this approach was studied with LHC collision data and simulated data. Two main options for the hardware implementation are currently studied with demonstrators, an FPGA based option with an embedded ARM microproc...

  7. High-Resolution Discharge Forecasting for Snowmelt and Rainfall Mixed Events

    Directory of Open Access Journals (Sweden)

    Tomasz Berezowski

    2018-01-01

    Full Text Available Discharge events induced by a mixture of snowmelt and rainfall are strongly nonlinear due to the consequences of rain-on-snow phenomena and the dependence of snowmelt on the energy balance. However, they have received relatively little attention, especially in high-resolution discharge forecasting. In this study, we use Random Forests models for 24 h discharge forecasting at 1 h resolution in a 105.9 km² urbanized catchment in NE Poland: the Biala River. The forcing data are delivered by the Weather Research and Forecasting (WRF) model at 1 h temporal and 4 × 4 km spatial resolution. The discharge forecasting models are set up in two scenarios, with snowmelt-and-rainfall and rainfall-only predictors, in order to highlight the effect of snowmelt on the results (both scenarios also use pre-forecast discharge-based predictors). We show that the inclusion of snowmelt decreases the forecast errors for longer forecast lead times. Moreover, the importance of discharge-based predictors is higher in the rainfall-only models than in the snowmelt-and-rainfall models. We conclude that the role of snowmelt in discharge forecasting in mixed snowmelt and rainfall environments lies in accounting for nonlinear physical processes, such as initial wetting and rain on snow, which cannot be properly modelled by rainfall alone.
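
    A simplified sketch of the forecasting setup is shown below: lagged forcing predictors (rainfall and, optionally, snowmelt) and lagged pre-forecast discharge are assembled for one lead time and fed to a Random Forest. The synthetic data and the exact predictor layout are assumptions, not the authors' configuration.

```python
# A simplified sketch of lead-time discharge forecasting with a Random Forest:
# forcing predictors plus lagged pre-forecast discharge for one lead time.
# Data are synthetic and the predictor layout is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
n = 5000
rain = rng.gamma(0.3, 2.0, size=n)
snowmelt = np.clip(rng.normal(0.2, 0.5, size=n), 0, None)
discharge = np.convolve(rain + 0.7 * snowmelt, np.exp(-np.arange(24) / 6.0), mode="same")

lead = 6  # hours ahead
def lagged(x, lags):
    return np.column_stack([np.roll(x, l) for l in lags])

X = np.column_stack([
    lagged(rain, range(0, 6)),                  # recent/forecast rainfall
    lagged(snowmelt, range(0, 6)),              # recent/forecast snowmelt
    lagged(discharge, range(lead, lead + 6)),   # pre-forecast discharge
])[24:]                                         # drop rows affected by wrap-around
y = discharge[24:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"MAE at {lead} h lead: {mean_absolute_error(y_te, rf.predict(X_te)):.3f}")
```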

  8. Wp index: A new substorm index derived from high-resolution geomagnetic field data at low latitude

    DEFF Research Database (Denmark)

    Nose, M.; Iyemori, T.; Wang, L.

    2012-01-01

    Geomagnetic field data with high time resolution (typically 1 s) have recently become more commonly acquired by ground stations. Such high time resolution data enable identifying Pi2 pulsations which have periods of 40-150 s and irregular (damped) waveforms. It is well-known that pulsations of th...

  9. High angular resolution at LBT

    Science.gov (United States)

    Conrad, A.; Arcidiacono, C.; Bertero, M.; Boccacci, P.; Davies, A. G.; Defrere, D.; de Kleer, K.; De Pater, I.; Hinz, P.; Hofmann, K. H.; La Camera, A.; Leisenring, J.; Kürster, M.; Rathbun, J. A.; Schertl, D.; Skemer, A.; Skrutskie, M.; Spencer, J. R.; Veillet, C.; Weigelt, G.; Woodward, C. E.

    2015-12-01

    High angular resolution from ground-based observatories stands as a key technology for advancing planetary science. In the window between the angular resolution achievable with 8-10 meter class telescopes, and the 23-to-40 meter giants of the future, LBT provides a glimpse of what the next generation of instruments providing higher angular resolution will provide. We present first ever resolved images of an Io eruption site taken from the ground, images of Io's Loki Patera taken with Fizeau imaging at the 22.8 meter LBT [Conrad, et al., AJ, 2015]. We will also present preliminary analysis of two data sets acquired during the 2015 opposition: L-band fringes at Kurdalagon and an occultation of Loki and Pele by Europa (see figure). The light curves from this occultation will yield an order of magnitude improvement in spatial resolution along the path of ingress and egress. We will conclude by providing an overview of the overall benefit of recent and future advances in angular resolution for planetary science.

  10. Resolution capacity of geophysical monitoring regarding permafrost degradation induced by hydrological processes

    Science.gov (United States)

    Mewes, Benjamin; Hilbich, Christin; Delaloye, Reynald; Hauck, Christian

    2017-12-01

    Geophysical methods are often used to characterize and monitor the subsurface composition of permafrost. The resolution capacity of standard methods, i.e. electrical resistivity tomography and refraction seismic tomography, depends not only on static parameters such as measurement geometry, but also on the temporal variability in the contrast of the geophysical target variables (electrical resistivity and P-wave velocity). Our study analyses the resolution capacity of electrical resistivity tomography and refraction seismic tomography for typical processes in the context of permafrost degradation using synthetic and field data sets of mountain permafrost terrain. In addition, we tested the resolution capacity of a petrophysically based quantitative combination of both methods, the so-called 4-phase model, and through this analysed the expected changes in water and ice content upon permafrost thaw. The results from the synthetic data experiments suggest a higher sensitivity regarding an increase in water content compared to a decrease in ice content. A potentially larger uncertainty originates from the individual geophysical methods than from the combined evaluation with the 4-phase model. In the latter, a loss of ground ice can be detected quite reliably, whereas artefacts occur in the case of increased horizontal or vertical water flow. Analysis of field data from a well-investigated rock glacier in the Swiss Alps successfully visualized the seasonal ice loss in summer and the complex spatially variable ice, water and air content changes in an interannual comparison.
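
    For reference, the relations typically underlying such a 4-phase model can be written as follows: Archie's law links the measured electrical resistivity to the water fraction, a time-average mixing rule links the P-wave velocity to all four phase fractions, and the fractions sum to one. The exact parameter values (Archie coefficients and phase velocities) are site-dependent calibration choices and are not given in the abstract.

```latex
% Sketch of the relations commonly underlying the 4-phase model; a, m, n and
% the phase velocities are site-dependent calibration parameters.
\begin{align}
  \rho &= a\,\rho_w\,\phi^{-m}\left(\frac{f_w}{\phi}\right)^{-n}
      && \text{(Archie's law; water fraction } f_w\text{, porosity } \phi\text{)} \\
  \frac{1}{v_p} &= \frac{f_w}{v_w} + \frac{f_i}{v_i} + \frac{f_a}{v_a} + \frac{f_r}{v_r}
      && \text{(time-average equation for the P-wave velocity)} \\
  1 &= f_w + f_i + f_a + f_r
      && \text{(volumetric fractions of water, ice, air and rock)}
\end{align}
```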

  11. Resolution capacity of geophysical monitoring regarding permafrost degradation induced by hydrological processes

    Directory of Open Access Journals (Sweden)

    B. Mewes

    2017-12-01

    Full Text Available Geophysical methods are often used to characterize and monitor the subsurface composition of permafrost. The resolution capacity of standard methods, i.e. electrical resistivity tomography and refraction seismic tomography, depends not only on static parameters such as measurement geometry, but also on the temporal variability in the contrast of the geophysical target variables (electrical resistivity and P-wave velocity. Our study analyses the resolution capacity of electrical resistivity tomography and refraction seismic tomography for typical processes in the context of permafrost degradation using synthetic and field data sets of mountain permafrost terrain. In addition, we tested the resolution capacity of a petrophysically based quantitative combination of both methods, the so-called 4-phase model, and through this analysed the expected changes in water and ice content upon permafrost thaw. The results from the synthetic data experiments suggest a higher sensitivity regarding an increase in water content compared to a decrease in ice content. A potentially larger uncertainty originates from the individual geophysical methods than from the combined evaluation with the 4-phase model. In the latter, a loss of ground ice can be detected quite reliably, whereas artefacts occur in the case of increased horizontal or vertical water flow. Analysis of field data from a well-investigated rock glacier in the Swiss Alps successfully visualized the seasonal ice loss in summer and the complex spatially variable ice, water and air content changes in an interannual comparison.

  12. How does male–male competition generate negative frequency-dependent selection and disruptive selection during speciation?

    Science.gov (United States)

    Border, Shana E

    2018-01-01

    Abstract Natural selection has been shown to drive population differentiation and speciation. The role of sexual selection in this process is controversial; however, most of the work has centered on mate choice while the role of male–male competition in speciation is relatively understudied. Here, we outline how male–male competition can be a source of diversifying selection on male competitive phenotypes, and how this can contribute to the evolution of reproductive isolation. We highlight how negative frequency-dependent selection (advantage of rare phenotype arising from stronger male–male competition between similar male phenotypes compared with dissimilar male phenotypes) and disruptive selection (advantage of extreme phenotypes) drives the evolution of diversity in competitive traits such as weapon size, nuptial coloration, or aggressiveness. We underscore that male–male competition interacts with other life-history functions and that variable male competitive phenotypes may represent alternative adaptive options. In addition to competition for mates, aggressive interference competition for ecological resources can exert selection on competitor signals. We call for a better integration of male–male competition with ecological interference competition since both can influence the process of speciation via comparable but distinct mechanisms. Altogether, we present a more comprehensive framework for studying the role of male–male competition in speciation, and emphasize the need for better integration of insights gained from other fields studying the evolutionary, behavioral, and physiological consequences of agonistic interactions. PMID:29492042

  13. High-resolution imaging of expertise reveals reliable object selectivity in the fusiform face area related to perceptual performance.

    Science.gov (United States)

    McGugin, Rankin Williams; Gatenby, J Christopher; Gore, John C; Gauthier, Isabel

    2012-10-16

    The fusiform face area (FFA) is a region of human cortex that responds selectively to faces, but whether it supports a more general function relevant for perceptual expertise is debated. Although both faces and objects of expertise engage many brain areas, the FFA remains the focus of the strongest modular claims and the clearest predictions about expertise. Functional MRI studies at standard-resolution (SR-fMRI) have found responses in the FFA for nonface objects of expertise, but high-resolution fMRI (HR-fMRI) in the FFA [Grill-Spector K, et al. (2006) Nat Neurosci 9:1177-1185] and neurophysiology in face patches in the monkey brain [Tsao DY, et al. (2006) Science 311:670-674] reveal no reliable selectivity for objects. It is thus possible that FFA responses to objects with SR-fMRI are a result of spatial blurring of responses from nonface-selective areas, potentially driven by attention to objects of expertise. Using HR-fMRI in two experiments, we provide evidence of reliable responses to cars in the FFA that correlate with behavioral car expertise. Effects of expertise in the FFA for nonface objects cannot be attributed to spatial blurring beyond the scale at which modular claims have been made, and within the lateral fusiform gyrus, they are restricted to a small area (200 mm² on the right and 50 mm² on the left) centered on the peak of face selectivity. Experience with a category may be sufficient to explain the spatially clustered face selectivity observed in this region.

  14. High-resolution molybdenum K-edge X-ray absorption spectroscopy analyzed with time-dependent density functional theory.

    Science.gov (United States)

    Lima, Frederico A; Bjornsson, Ragnar; Weyhermüller, Thomas; Chandrasekaran, Perumalreddy; Glatzel, Pieter; Neese, Frank; DeBeer, Serena

    2013-12-28

    X-ray absorption spectroscopy (XAS) is a widely used experimental technique capable of selectively probing the local structure around an absorbing atomic species in molecules and materials. When applied to heavy elements, however, the quantitative interpretation can be challenging due to the intrinsic spectral broadening arising from the decrease in the core-hole lifetime. In this work we have used high-energy resolution fluorescence detected XAS (HERFD-XAS) to investigate a series of molybdenum complexes. The sharper spectral features obtained by HERFD-XAS measurements enable a clear assignment of the features present in the pre-edge region. Time-dependent density functional theory (TDDFT) has been previously shown to predict K-pre-edge XAS spectra of first row transition metal compounds with a reasonable degree of accuracy. Here we extend this approach to molybdenum K-edge HERFD-XAS and present the necessary calibration. Modern pure and hybrid functionals are utilized and relativistic effects are accounted for using either the Zeroth Order Regular Approximation (ZORA) or the second order Douglas-Kroll-Hess (DKH2) scalar relativistic approximations. We have found that both the predicted energies and intensities are in excellent agreement with experiment, independent of the functional used. The model chosen to account for relativistic effects also has little impact on the calculated spectra. This study provides an important calibration set for future applications of molybdenum HERFD-XAS to complex catalytic systems.

  15. Resolution limits for wave equation imaging

    KAUST Repository

    Huang, Yunsong

    2014-08-01

    Formulas are derived for the resolution limits of migration-data kernels associated with diving waves, primary reflections, diffractions, and multiple reflections. They are applicable to images formed by reverse time migration (RTM), least squares migration (LSM), and full waveform inversion (FWI), and suggest a multiscale approach to iterative FWI based on multiscale physics. That is, at the early stages of the inversion, events that only generate low-wavenumber resolution should be emphasized relative to the high-wavenumber resolution events. As the iterations proceed, the higher-resolution events should be emphasized. The formulas also suggest that inverting multiples can provide some low- and intermediate-wavenumber components of the velocity model not available in the primaries. Finally, diffractions can provide twice the resolution of specular reflections, or better, for comparable depths of the reflector and diffractor. The width of the diffraction-transmission wavepath is approximately λ at the diffractor location. © 2014 Elsevier B.V.
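
    For orientation, the wavenumber bookkeeping behind such resolution limits can be summarized with a standard plane-wave (Born) argument; the relation below is textbook material and is not reproduced from the paper itself. A wave arriving at an image point from source direction ŝ and leaving towards receiver direction ĝ at angular frequency ω illuminates the model wavenumber

\[
\mathbf{k} \;=\; \frac{\omega}{c}\left(\hat{\mathbf{s}} + \hat{\mathbf{g}}\right),
\qquad
0 \;\le\; |\mathbf{k}| \;\le\; \frac{2\omega}{c} \;=\; \frac{4\pi}{\lambda},
\]

    so transmission-like events (ŝ ≈ −ĝ, e.g. diving waves) constrain only the low-wavenumber, smooth part of the model, while backscattered reflections (ŝ ≈ ĝ) supply wavenumbers up to 4π/λ, i.e. a best-case spatial resolution of about λ/2. This is the sense in which low-wavenumber events should dominate early FWI iterations and high-wavenumber events the later ones.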

  16. Does the Data Resolution/origin Matter? Satellite, Airborne and Uav Imagery to Tackle Plant Invasions

    Science.gov (United States)

    Müllerová, Jana; Brůna, Josef; Dvořák, Petr; Bartaloš, Tomáš; Vítková, Michaela

    2016-06-01

    Invasive plant species represent a serious threat to biodiversity and landscape as well as human health and socio-economy. To successfully fight plant invasions, new methods enabling fast and efficient monitoring, such as remote sensing, are needed. In an ongoing project, optical remote sensing (RS) data of different origin (satellite, aerial and UAV), spectral (panchromatic, multispectral and color), spatial (very high to medium) and temporal resolution, and various technical approaches (object-, pixel-based and combined) are tested to choose the best strategies for monitoring of four invasive plant species (giant hogweed, black locust, tree of heaven and exotic knotweeds). In our study, we address trade-offs between spectral, spatial and temporal resolutions required for balance between the precision of detection and economic feasibility. For the best results, it is necessary to choose the best combination of spatial and spectral resolution and phenological stage of the plant in focus. For species forming distinct inflorescences, such as giant hogweed, an iterative semi-automated object-oriented approach was successfully applied even for low spectral resolution data (if pixel size was sufficient), whereas for lower spatial resolution satellite imagery or less distinct species with complicated architecture such as knotweed, a combination of pixel- and object-based approaches was used. High accuracies achieved for very high resolution data indicate the possible application of the described methodology for monitoring invasions and their long-term dynamics elsewhere, making management measures comparably precise, fast and efficient. This knowledge serves as a basis for prediction, monitoring and prioritization of management targets.

  17. High energy resolution and first time-dependent positron annihilation induced Auger electron spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, Jakob

    2010-04-03

    It was the aim of this thesis to improve the existing positron annihilation induced Auger spectrometer at the highly intense positron source NEPOMUC (NEutron induced POsitron source MUniCh) in several ways: Firstly, the measurement time for a single spectrum should be reduced from typically 12 h to roughly 1 h or even less. Secondly, the energy resolution, which amounted to ΔE/E ≈ 10%, should be increased by at least one order of magnitude in order to make high resolution positron annihilation induced Auger spectroscopy (PAES) measurements of Auger transitions possible and thus deliver more information about the nature of the Auger process. In order to achieve these objectives, the PAES spectrometer was equipped with a new electron energy analyzer. For its ideal operation all other components of the Auger analysis chamber had to be adapted. Particularly the sample manipulation and the positron beam guidance had to be renewed. Simulations with SIMION® ensured the optimal positron lens parameters. After the adjustment of the new analyzer and its components, first measurements illustrated the improved performance of the PAES setup: Firstly, the measurement time for short overview measurements was reduced from 3 h to 420 s. The measurement time for more detailed Auger spectra was shortened from 12 h to 80 min. Secondly, even with the reduced measurement time, the signal to noise ratio was also enhanced by one order of magnitude. Finally, the energy resolution was improved to ΔE/E < 1%. The exceptional surface sensitivity and elemental selectivity of PAES was demonstrated in measurements of Pd and Fe, both coated with Cu layers of varying thickness. PAES showed that with 0.96 monolayer of Cu on Fe, more than 55% of the detected Auger electrons stem from Cu. In the case of the Cu coated Pd sample, 0.96 monolayer of Cu resulted in a Cu Auger fraction of more than 30% with PAES and less than 5% with electron induced Auger spectroscopy.

  18. Inferring gene dependency network specific to phenotypic alteration based on gene expression data and clinical information of breast cancer.

    Science.gov (United States)

    Zhou, Xionghui; Liu, Juan

    2014-01-01

    Although many methods have been proposed to reconstruct gene regulatory network, most of them, when applied in the sample-based data, can not reveal the gene regulatory relations underlying the phenotypic change (e.g. normal versus cancer). In this paper, we adopt phenotype as a variable when constructing the gene regulatory network, while former researches either neglected it or only used it to select the differentially expressed genes as the inputs to construct the gene regulatory network. To be specific, we integrate phenotype information with gene expression data to identify the gene dependency pairs by using the method of conditional mutual information. A gene dependency pair (A,B) means that the influence of gene A on the phenotype depends on gene B. All identified gene dependency pairs constitute a directed network underlying the phenotype, namely gene dependency network. In this way, we have constructed the gene dependency network of breast cancer from gene expression data along with two different phenotype states (metastasis and non-metastasis). Moreover, we have found the network to be scale-free, indicating that its hub genes with high out-degrees may play critical roles in the network. After functional investigation, these hub genes are found to be biologically significant and specially related to breast cancer, which suggests that our gene dependency network is meaningful. The validity has also been justified by literature investigation. From the network, we have selected 43 discriminative hubs as a signature to build the classification model for distinguishing the distant metastasis risks of breast cancer patients, and the result outperforms those classification models with published signatures. In conclusion, we have proposed a promising way to construct the gene regulatory network by using sample-based data, which has been shown to be effective and accurate in uncovering the hidden mechanism of the biological process and identifying the gene signature for
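
    The conditional-mutual-information test described above can be illustrated with a small numerical sketch. The snippet below is not the authors' implementation; the equal-frequency binning, the sample size and the toy phenotype are assumptions made only to show the quantity I(A; phenotype | B) being estimated.

```python
import numpy as np

def cmi(x, y, z, bins=3):
    """Estimate conditional mutual information I(X; Y | Z) from discretized data.

    Continuous variables are put into equal-frequency bins; already-discrete
    variables (e.g. a binary phenotype) are mapped to integer codes. The
    binning strategy is an assumption of this sketch, not taken from the paper.
    """
    def discretize(v):
        vals = np.unique(v)
        if len(vals) <= bins:
            lookup = {val: i for i, val in enumerate(vals)}
            return np.array([lookup[val] for val in v])
        ranks = np.argsort(np.argsort(v))
        return (ranks * bins // len(v)).astype(int)

    xd, yd, zd = discretize(x), discretize(y), discretize(z)
    n = len(xd)
    joint = np.zeros((bins, bins, bins))
    for i in range(n):
        joint[xd[i], yd[i], zd[i]] += 1
    p_xyz = joint / n
    p_xz = p_xyz.sum(axis=1)          # P(x, z)
    p_yz = p_xyz.sum(axis=0)          # P(y, z)
    p_z = p_xyz.sum(axis=(0, 1))      # P(z)

    value = 0.0
    for xi in range(bins):
        for yi in range(bins):
            for zi in range(bins):
                pxyz = p_xyz[xi, yi, zi]
                if pxyz > 0:
                    value += pxyz * np.log(pxyz * p_z[zi] /
                                           (p_xz[xi, zi] * p_yz[yi, zi]))
    return value

# Toy usage: does gene A's association with the phenotype depend on gene B?
rng = np.random.default_rng(0)
gene_b = rng.normal(size=200)
gene_a = rng.normal(size=200)
phenotype = (gene_a * (gene_b > 0) + 0.3 * rng.normal(size=200) > 0).astype(float)
print(cmi(gene_a, phenotype, gene_b))   # compare against a permutation null in practice
```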

  19. Quantitative, high-resolution proteomics for data-driven systems biology

    DEFF Research Database (Denmark)

    Cox, J.; Mann, M.

    2011-01-01

    Systems biology requires comprehensive data at all molecular levels. Mass spectrometry (MS)-based proteomics has emerged as a powerful and universal method for the global measurement of proteins. In the most widespread format, it uses liquid chromatography (LC) coupled to high-resolution tandem...... primary structure of proteins including posttranslational modifications, to localize proteins to organelles, and to determine protein interactions. Here, we describe the principles of analysis and the areas of biology where proteomics can make unique contributions. The large-scale nature of proteomics...... data and its high accuracy pose special opportunities as well as challenges in systems biology that have been largely untapped so far....

  20. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

    The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions such as: what are the minimum sampling frequencies? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, due to the aliasing effect, some artifacts are produced in the timing resolution estimations; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, a proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid the aliasing, but in return the computation will be higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool checking constant timing resolution behavior of a given timing pick-off method regardless of the source location change. Lastly, the performance comparison for several digital timing methods is also shown.
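
    A minimal numerical sketch of the sampling-and-interpolation step discussed above follows. The bi-exponential pulse shape, the 1.3 GHz sampling rate and the 20% leading-edge threshold are illustrative assumptions, not the LSO pulses or pick-off methods evaluated in the paper; SciPy's Fourier resampling stands in here for the signal interpolation filter.

```python
import numpy as np
from scipy.signal import resample

# Illustrative fast scintillation-like pulse (bi-exponential); parameters are
# assumptions for the sketch, not the LSO pulse used in the paper.
def pulse(t, t0=5e-9, rise=0.5e-9, decay=40e-9):
    x = np.where(t > t0,
                 (1 - np.exp(-(t - t0) / rise)) * np.exp(-(t - t0) / decay),
                 0.0)
    return x / x.max()

fs = 1.3e9                                # ADC sampling rate (Hz)
t_coarse = np.arange(0, 200e-9, 1 / fs)
samples = pulse(t_coarse)

# Band-limited (Fourier) interpolation to a 16x finer grid before the timing
# pick-off; a sharp anti-aliasing filter is assumed to precede the ADC.
up = 16
fine = resample(samples, len(samples) * up)
t_fine = np.arange(len(fine)) / (fs * up)

# Simple leading-edge discriminator at 20% of the pulse maximum.
threshold = 0.2 * fine.max()
pickoff = t_fine[np.argmax(fine >= threshold)]
print(f"leading-edge time estimate: {pickoff * 1e9:.3f} ns")
```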

  1. Wavelet Filter Banks for Super-Resolution SAR Imaging

    Science.gov (United States)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric features of these methods, their resolution limitations and their observation-time dependence, the use of spectral estimation and signal pre- and post-processing techniques based on wavelets to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.

  2. Analysis of Time Resolution in HGCAL Testbeam

    CERN Document Server

    Steentoft, Jonas

    2017-01-01

    Using data from a 250 GeV electron run during the November 2016 HGCAL testbeam, the time resolution of the High Granularity hadronic endcap Calorimeter, HGCAL, was investigated, looking at the seven innermost Si cells and using them as reference timers for each other. Cuts on the data were applied based on signal amplitude, 0.05 V < A < 0.45 V, and on the position of the incoming beam particle, 0 mm < TDCx < 22 mm and −7 mm … data, with the Photek as reference. Gaussian functions were fitted to the corrected Δt distributions, and a time resolution of 15–50 ps was obtained, depending on which two cells were compared and how the low-statistics cut was placed. We also confirmed a slight correlation between time resolution and distanc...

  3. Meta-Statistics for Variable Selection: The R Package BioMark

    Directory of Open Access Journals (Sweden)

    Ron Wehrens

    2012-11-01

    Full Text Available Biomarker identification is an ever more important topic in the life sciences. With the advent of measurement methodologies based on microarrays and mass spectrometry, thousands of variables are routinely being measured on complex biological samples. Often, the question is what makes two groups of samples different. Classical hypothesis testing suffers from the multiple testing problem; however, correcting for this often leads to a lack of power. In addition, choosing α cutoff levels remains somewhat arbitrary. Also in a regression context, a model depending on few but relevant variables will be more accurate and precise, and easier to interpret biologically. We propose an R package, BioMark, implementing two meta-statistics for variable selection. The first, higher criticism, presents a data-dependent selection threshold for significance, instead of a cookbook value of α = 0.05. It is applicable in all cases where two groups are compared. The second, stability selection, is more general, and can also be applied in a regression context. This approach uses repeated subsampling of the data in order to assess the variability of the model coefficients and selects those that remain consistently important. It is shown using experimental spike-in data from the field of metabolomics that both approaches work well with real data. BioMark also contains functionality for simulating data with specific characteristics for algorithm development and testing.
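
    As an illustration of the higher-criticism idea (a data-dependent significance threshold rather than a fixed α), the sketch below implements a Donoho-Jin-style statistic on simulated p-values. It is a generic Python sketch, not the BioMark R code, and the simulated mixture of null and signal variables is an assumption made for the example.

```python
import numpy as np

def higher_criticism_threshold(pvalues, alpha0=0.1):
    """Return the data-dependent p-value cutoff given by higher criticism.

    Sketch of the Donoho-Jin statistic HC_i = sqrt(n) * (i/n - p_(i)) /
    sqrt(p_(i) * (1 - p_(i))), maximized over the smallest alpha0*n p-values.
    """
    p = np.sort(np.asarray(pvalues))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    k = max(int(alpha0 * n), 1)
    best = np.argmax(hc[:k])
    return p[best]                 # variables with p <= this cutoff are selected

# Toy usage: 1000 variables, 20 of which carry signal.
rng = np.random.default_rng(1)
null_p = rng.uniform(size=980)
signal_p = rng.uniform(high=1e-3, size=20)
pvals = np.concatenate([null_p, signal_p])
cut = higher_criticism_threshold(pvals)
print(f"HC cutoff: {cut:.4g}, selected: {(pvals <= cut).sum()} variables")
```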

  4. The Impact of the Processing Batch Length in GNSS Data Analysis on the Estimates of Earth Rotation Parameters with Daily and Subdaily Time Resolution

    Science.gov (United States)

    Meindl, M.; Dach, R.; Thaller, D.; Schaer, S.; Beutler, G.; Jaeggi, A.

    2012-04-01

    Microwave observations from GNSS are traditionally analyzed in the post-processing mode using (solar) daily data batches. The 24-hour session length differs by only about four minutes from two revolution periods of a GPS satellite (corresponding to one sidereal day). The deep 2:1 resonance of the GPS revolution period with the length of the sidereal day may cause systematic effects in parameter estimates and spurious periodic signals in the resulting parameter time series. The selection of other (than daily) session lengths may help to identify systematic effects and to study their impact on GNSS-derived products. Such investigations are of great interest in a combined multi-GNSS analysis because of substantial differences in the satellites' revolution periods. Three years (2008-2010) of data from a global network of about 90 combined GPS/GLONASS receivers have been analyzed. Four different session lengths were used, namely the traditional 24 hours (UTC), two revolutions of a GLONASS satellite (16/17 sidereal days), two revolutions of a GPS satellite (one sidereal day), and a session length of 18/17 sidereal days, which does not correspond to either two GPS or two GLONASS revolution periods. GPS-only, GLONASS-only, and GPS/GLONASS-combined solutions are established for each of the session lengths. Special care was taken to keep the GPS and GLONASS solutions fully consistent and comparable, in particular where the station selection is concerned. We generate ERPs with a subdaily time resolution of about 1.4 hours (1/17 sidereal day). Using the session-specific normal equation systems (NEQs) containing the Earth rotation parameters with the 1.4-hour time resolution, we derive in addition ERPs with a (sidereal) daily resolution. Note that this step requires the combination of the daily NEQs and a subsequent re-binning of 17 consecutive ERPs with 1/17 day time resolution into one (sidereal) daily parameter. These tests will reveal the impact of the session length on ERP

  5. Relationship between stacking process and resolution; Jugo shori to bunkaino ni kansuru kiso kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Matsushima, J; Rokugawa, S; Kato, Y [Geological Survey of Japan, Tsukuba (Japan); Yokota, T; Miyazaki, T [The University of Tokyo, Tokyo (Japan). Faculty of Engineering

    1996-10-01

    This paper evaluates the influence of stacking traces with various incident angles against the reflecting surface on the resolution. Basic equations for evaluating the influences were deduced. A simple evaluation method has been provided using these equations. The present evaluation method is considered to be useful for acquisition design, processing, and interpretation of data as an indicator. According to the equations introduced in this study, there are some demerits for stacking traces whose incident angles are large. A total reflection region often appears due to the decreased resolution, and the vertical resolution decreases prior to stacking. Occasionally, it is not effective to remove traces having large incident angles from the viewpoint of resolution. In practice, the selection of the most suitable trace through trial and error is not easy due to differences between individual regions. An evaluation method must be discussed, by which the optimal trace can be selected automatically during the data processing. 6 refs., 15 figs.

  6. A privacy-preserving solution for compressed storage and selective retrieval of genomic data.

    Science.gov (United States)

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre

    2016-12-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.

  7. Dependence and withdrawal reactions to benzodiazepines and selective serotonin reuptake inhibitors. How did the health authorities react?

    DEFF Research Database (Denmark)

    Nielsen, Margrethe; Hansen, Ebba Holme; Gøtzsche, Peter C

    2013-01-01

    Our objective was to explore communications from drug agencies about benzodiazepine dependence and selective serotonin reuptake inhibitors (SSRIs) withdrawal reactions over time.

  8. High-resolution retinal swept source optical coherence tomography with an ultra-wideband Fourier-domain mode-locked laser at MHz A-scan rates.

    Science.gov (United States)

    Kolb, Jan Philip; Pfeiffer, Tom; Eibl, Matthias; Hakert, Hubertus; Huber, Robert

    2018-01-01

    We present a new 1060 nm Fourier domain mode locked laser (FDML laser) with a record 143 nm sweep bandwidth at 2 × 417 kHz = 834 kHz and 120 nm at 1.67 MHz, respectively. We show that not only the bandwidth alone, but also the shape of the spectrum is critical for the resulting axial resolution, because of the specific wavelength-dependent absorption of the vitreous. The theoretical limit of our setup lies at 5.9 µm axial resolution. In vivo MHz-OCT imaging of human retina is performed and the image quality is compared to the previous results acquired with 70 nm sweep range, as well as to existing spectral domain OCT data with 2.1 µm axial resolution from literature. We identify benefits of the higher resolution, for example the improved visualization of small blood vessels in the retina besides several others.
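
    For context, the usual back-of-the-envelope estimate of OCT axial resolution assumes a Gaussian source spectrum (a textbook relation, not a result from this paper):

\[
\delta z \;\approx\; \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{n\,\Delta\lambda},
\]

    with center wavelength λ₀, sweep bandwidth Δλ and sample refractive index n. The 5.9 µm limit quoted above exceeds this idealized Gaussian value precisely because the effective spectrum is non-Gaussian and is reshaped by the wavelength-dependent absorption of the vitreous, which is the point the abstract makes about spectral shape.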

  9. 13: Data dependencies in a three-dimensional treatment planning system

    International Nuclear Information System (INIS)

    Kijewski, P.

    1987-01-01

    The design of a three-dimensional treatment planning system demands very careful attention to the problem of data dependencies among the very large and complex data sets on which such systems operate. Assurance of data consistency and data currency among dependent data requires specialized database support. For the implementation presented, an object-oriented data management system is used. Data dependencies are explicitly processed by including links between output data and source data (antecedents), links between source data and output data (descendents), and historical records of updates (versions). Using these components, a system for assuring data consistency and data currency is constructed. 4 refs.; 4 figs

  10. Energy-dependent inversion of p+16O scattering data

    International Nuclear Information System (INIS)

    Cooper, S.G.

    1997-01-01

    A fast iterative procedure is developed to determine potentials by inversion from elastic cross section, analysing powers and reaction cross-section measurements covering a wide energy range. The procedure incorporates both energy and parity dependence. The method is applied to extensive p+16O scattering data for an energy range from 27.3 to 46.1 MeV, giving a solution which simultaneously reproduces the data at all energies. The wide angle data is well reproduced by including parity dependence, and a linear energy dependence is established for the real potential, including the parity-dependent component. The real terms agree qualitatively with potentials derived from the single channel RGM, but the central and spin-orbit imaginary components have distinct features strongly suggestive of further non-local contributions, possibly arising from channel coupling. The large data set is found essential to reduce the potential ambiguities present when fitting scattering data. (orig.)

  11. Impact of the spatial resolution of climatic data and soil physical properties on regional corn yield predictions using the STICS crop model

    Science.gov (United States)

    Jégo, Guillaume; Pattey, Elizabeth; Mesbah, S. Morteza; Liu, Jiangui; Duchesne, Isabelle

    2015-09-01

    The assimilation of Earth observation (EO) data into crop models has proven to be an efficient way to improve yield prediction at a regional scale by estimating key unknown crop management practices. However, the efficiency of prediction depends on the uncertainty associated with the data provided to crop models, particularly climatic data and soil physical properties. In this study, the performance of the STICS (Simulateur mulTIdisciplinaire pour les Cultures Standard) crop model for predicting corn yield after assimilation of leaf area index derived from EO data was evaluated under different scenarios. The scenarios were designed to examine the impact of using fine-resolution soil physical properties, as well as the impact of using climatic data from either one or four weather stations across the region of interest. The results indicate that when only one weather station was used, the average annual yield by producer was predicted well (absolute error <5%), but the spatial variability lacked accuracy (root mean square error = 1.3 t ha-1). The model root mean square error for yield prediction was highly correlated with the distance between the weather stations and the fields, for distances smaller than 10 km, and reached 0.5 t ha-1 for a 5-km distance when fine-resolution soil properties were used. When four weather stations were used, no significant improvement in model performance was observed. This was because of a marginal decrease (30%) in the average distance between fields and weather stations (from 10 to 7 km). However, the yield predictions were improved by approximately 15% with fine-resolution soil properties regardless of the number of weather stations used. The impact of the uncertainty associated with the EO-derived soil textures and the impact of alterations in rainfall distribution were also evaluated. A variation of about 10% in any of the soil physical textures resulted in a change in dry yield of 0.4 t ha-1. Changes in rainfall distribution

  12. The long-term evolution of multilocus traits under frequency-dependent disruptive selection

    NARCIS (Netherlands)

    Van Doorn, G. Sander; Dieckmann, Ulf

    Frequency-dependent disruptive selection is widely recognized as an important source of genetic variation. Its evolutionary consequences have been extensively studied using phenotypic evolutionary models, based on quantitative genetics, game theory, or adaptive dynamics. However, the genetic

  13. EMODnet High Resolution Seabed Mapping - further developing a high resolution digital bathymetry for European seas

    Science.gov (United States)

    Schaap, D.; Schmitt, T.

    2017-12-01

    Access to marine data is a key issue for the EU Marine Strategy Framework Directive and the EU Marine Knowledge 2020 agenda and includes the European Marine Observation and Data Network (EMODnet) initiative. EMODnet aims at assembling European marine data, data products and metadata from diverse sources in a uniform way. The EMODnet Bathymetry project has developed Digital Terrain Models (DTM) for the European seas. These have been produced from survey and aggregated data sets that are indexed with metadata by adopting the SeaDataNet Catalogue services. SeaDataNet is a network of major oceanographic data centres around the European seas that manage, operate and further develop a pan-European infrastructure for marine and ocean data management. The latest EMODnet Bathymetry DTM release has a grid resolution of 1/8 arc minute and covers all European sea regions. Use has been made of circa 7800 gathered survey datasets and composite DTMs. Catalogues and the EMODnet DTM are published at the dedicated EMODnet Bathymetry portal including a versatile DTM viewing and downloading service. At the end of December 2016, the Bathymetry project was succeeded by EMODnet High Resolution Seabed Mapping (HRSM). This continues the gathering of bathymetric in-situ data sets, with extra effort on near-coastal waters and coastal zones. In addition, Satellite Derived Bathymetry data are included to fill gaps in coverage of the coastal zones. The extra data and composite DTMs will increase the coverage of the European seas and their coastlines, and provide input for producing an EMODnet DTM with a common resolution of 1/16 arc minutes. The Bathymetry Viewing and Download service will be upgraded to provide a multi-resolution map and to include 3D viewing. The higher resolution DTMs will also be used to determine best-estimates of the European coastline for a range of tidal levels (HAT, MHW, MSL, Chart Datum, LAT), thereby making use of a tidal model for Europe. Extra challenges will be `moving to the

  14. Platform dependencies in bottom-up hydrogen/deuterium exchange mass spectrometry.

    Science.gov (United States)

    Burns, Kyle M; Rey, Martial; Baker, Charles A H; Schriemer, David C

    2013-02-01

    Hydrogen-deuterium exchange mass spectrometry (HDX-MS) is an important method for protein structure-function analysis. The bottom-up approach uses protein digestion to localize deuteration to higher resolution, and the essential measurement involves centroid mass determinations on a very large set of peptides. In the course of evaluating systems for various projects, we established two HDX-MS platforms, consisting of an FT-MS and a high-resolution QTOF mass spectrometer, each with matched front-end fluidic systems. Digests of proteins spanning a 20-110 kDa range were deuterated to equilibrium, and figures-of-merit for a typical bottom-up HDX-MS experiment were compared for each platform. The Orbitrap Velos identified 64% more peptides than the 5600 QTOF, with a 42% overlap between the two systems, independent of protein size. Precision in deuterium measurements using the Orbitrap marginally exceeded that of the QTOF, depending on the Orbitrap resolution setting. However, the unique nature of FT-MS data generates situations where deuteration measurements can be inaccurate, because of destructive interference arising from mismatches in elemental mass defects. This is shown through the analysis of the peptides common to both platforms, where deuteration values can be as low as 35% of the expected values, depending on FT-MS resolution, peptide length and charge state. These findings are supported by simulations of Orbitrap transients, and highlight that caution should be exercised in deriving centroid mass values from FT transients that do not support baseline separation of the full isotopic composition.
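
    The centroid measurement at the heart of the bottom-up workflow can be sketched in a few lines. The snippet below only illustrates the arithmetic (intensity-weighted centroid of an isotope cluster, deuterium uptake as the charge-corrected centroid shift); the m/z values and intensities are invented, and nothing here reflects the instrument-specific processing compared in the study.

```python
import numpy as np

def centroid_mass(mz, intensity):
    """Intensity-weighted centroid of an isotope cluster."""
    mz, intensity = np.asarray(mz, float), np.asarray(intensity, float)
    return np.sum(mz * intensity) / np.sum(intensity)

def deuterium_uptake(mz_undeut, int_undeut, mz_deut, int_deut, charge):
    """Deuteration (in Da) from the centroid shift between the undeuterated
    and deuterated isotope clusters of the same peptide charge state."""
    shift = centroid_mass(mz_deut, int_deut) - centroid_mass(mz_undeut, int_undeut)
    return shift * charge

# Illustrative 2+ peptide cluster (values are assumptions, not measured data).
mz0 = [500.25, 500.75, 501.25, 501.75]
i0  = [100.0,  60.0,   25.0,   8.0]
mzD = [500.25, 500.75, 501.25, 501.75, 502.25, 502.75]
iD  = [20.0,   55.0,   90.0,   70.0,   35.0,   12.0]
print(f"uptake ≈ {deuterium_uptake(mz0, i0, mzD, iD, charge=2):.2f} Da")
```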

  15. Normalization of energy-dependent gamma survey data.

    Science.gov (United States)

    Whicker, Randy; Chambers, Douglas

    2015-05-01

    Instruments and methods for normalization of energy-dependent gamma radiation survey data to a less energy-dependent basis of measurement are evaluated based on relevant field data collected at 15 different sites across the western United States along with a site in Mongolia. Normalization performance is assessed relative to measurements with a high-pressure ionization chamber (HPIC) due to its "flat" energy response and accurate measurement of the true exposure rate from both cosmic and terrestrial radiation. While analytically ideal for normalization applications, cost and practicality disadvantages have increased demand for alternatives to the HPIC. Regression analysis on paired measurements between energy-dependent sodium iodide (NaI) scintillation detectors (5-cm by 5-cm crystal dimensions) and the HPIC revealed highly consistent relationships among sites not previously impacted by radiological contamination (natural sites). A resulting generalized data normalization factor based on the average sensitivity of NaI detectors to naturally occurring terrestrial radiation (0.56 nGy h⁻¹ HPIC per nGy h⁻¹ NaI), combined with the calculated site-specific estimate of cosmic radiation, produced reasonably accurate predictions of HPIC readings at natural sites. Normalization against two potential alternative instruments (a tissue-equivalent plastic scintillator and energy-compensated NaI detector) did not perform better than the sensitivity adjustment approach at natural sites. Each approach produced unreliable estimates of HPIC readings at radiologically impacted sites, though normalization against the plastic scintillator or energy-compensated NaI detector can address incompatibilities between different energy-dependent instruments with respect to estimation of soil radionuclide levels. The appropriate data normalization method depends on the nature of the site, expected duration of the project, survey objectives, and considerations of cost and practicality.
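
    A minimal sketch of the normalization arithmetic described above follows. The function name is hypothetical and the way the cosmic-ray term is separated from the raw NaI reading is an assumption made for illustration; the 0.56 factor is the generalized terrestrial sensitivity quoted in the abstract.

```python
# Hedged sketch of the normalization arithmetic; the function name and the way
# the cosmic-ray component is separated from the raw NaI reading are assumptions.
NAI_TERRESTRIAL_SENSITIVITY = 0.56   # nGy/h (HPIC) per nGy/h (NaI), terrestrial component

def estimate_hpic_reading(nai_terrestrial_ngy_per_h, cosmic_ngy_per_h):
    """Predict an HPIC-equivalent exposure rate from the terrestrial part of an
    energy-dependent NaI survey reading plus a site-specific cosmic-ray estimate."""
    return NAI_TERRESTRIAL_SENSITIVITY * nai_terrestrial_ngy_per_h + cosmic_ngy_per_h

print(estimate_hpic_reading(nai_terrestrial_ngy_per_h=120.0, cosmic_ngy_per_h=45.0))  # illustrative values
```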

  16. Coastal and tidal landform detection from high resolution topobathymetric LiDAR data

    DEFF Research Database (Denmark)

    Andersen, Mikkel S.; Al-Hamdani, Zyad K.; Steinbacher, Frank

    -resolution mapping of these land-water transition zones. We have carried out topobathymetric LiDAR surveys in the Knudedyb tidal inlet system, a coastal environment in the Danish Wadden Sea which is part of the Wadden Sea National Park and UNESCO World Heritage. Detailed digital elevation models (DEMs) with a grid...... to tides. Furthermore, we demonstrate the potential of morphometric analysis on high-resolution topobathymetric LiDAR data for automatic identification, characterisation and classification of different landforms present in coastal land-water transition zones. Acknowledgements This work was funded...

  17. High Resolution Angle Resolved Photoemission Studies on Quasi-Particle Dynamics in Graphite

    Energy Technology Data Exchange (ETDEWEB)

    Leem, C.S.

    2010-06-02

    We obtained the spectral function of the graphite H point using high resolution angle resolved photoelectron spectroscopy (ARPES). The extracted width of the spectral function (inverse of the photo-hole lifetime) near the H point is approximately proportional to the energy, as expected from the linearly increasing density of states (DOS) near the Fermi energy. This is well accounted for by our electron-phonon coupling theory considering the peculiar electronic DOS near the Fermi level. We also investigated the temperature dependence of the peak widths both experimentally and theoretically. The upper bound for the electron-phonon coupling parameter is 0.23, nearly the same value as previously reported at the K point. Our analysis of temperature-dependent ARPES data at K shows that the phonon mode of graphite that is dominant in the electron-phonon coupling has a much higher energy scale than 125 K.

  18. High-efficient method for spectrometric data real time processing with increased resolution of a measuring channel

    International Nuclear Information System (INIS)

    Ashkinaze, S.I.; Voronov, V.A.; Nechaev, Yu.I.

    1988-01-01

    Solution of the reduction problem as a means to increase the resolution of a spectrometric channel, realized using the modified digit-by-digit method and a special strategy that significantly reduces the processing time, is considered. The results presented confirm that the combination of the measurement channel and a microcomputer is equivalent to using a channel with higher resolution, and that the modified digit-by-digit method permits processing of spectrometric information in real time.

  19. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
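
    To make the baseline concrete, the sketch below implements a plain SAM-type moderated statistic (mean difference divided by a stabilized standard error). It is only a generic Python illustration; the specific weighting schemes proposed in the paper are not reproduced, and the fudge factor default and toy data are assumptions.

```python
import numpy as np

def sam_like_statistic(group1, group2, s0=None):
    """SAM-style moderated t statistic per gene: d_i = (mean1 - mean2) / (s_i + s0).

    group1, group2: arrays of shape (n_samples, n_genes). The fudge factor s0
    stabilizes genes with tiny variance; defaulting it to the median pooled
    standard error is a simplification, not the paper's weighting scheme.
    """
    m1, m2 = group1.mean(axis=0), group2.mean(axis=0)
    n1, n2 = group1.shape[0], group2.shape[0]
    pooled_var = ((n1 - 1) * group1.var(axis=0, ddof=1) +
                  (n2 - 1) * group2.var(axis=0, ddof=1)) / (n1 + n2 - 2)
    s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
    if s0 is None:
        s0 = np.median(s)
    return (m1 - m2) / (s + s0)

# Toy data: 1000 genes, the first 50 shifted in group 1.
rng = np.random.default_rng(3)
g1 = rng.normal(size=(5, 1000)); g1[:, :50] += 1.5
g2 = rng.normal(size=(5, 1000))
d = sam_like_statistic(g1, g2)
top = np.argsort(-np.abs(d))[:50]        # rank genes by |d|
print(f"{np.sum(top < 50)} of the top 50 calls are true positives")
```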

  20. Study on light output and energy resolution of PbWO4 crystal

    International Nuclear Information System (INIS)

    Su Guanghui; Yue Ke; Sun Zhiyu

    2010-01-01

    The light output and energy resolution of PbWO4 crystal are studied with different wrapping materials and methods. The wrapping condition was optimized by analyzing the experimental data to gain higher light output and better energy resolution. A GEANT4-based package has been developed to simulate the corresponding features of PbWO4 crystal, and the simulation results are consistent with the experimental data. (authors)

  1. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    Science.gov (United States)

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, which is less elastic by 50% compared to values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
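
    The Egger intercept test mentioned above can be sketched as a simple regression: the standardized effect (estimate divided by its standard error) is regressed on precision (the reciprocal of the standard error), and an intercept far from zero signals funnel-plot asymmetry. The snippet below uses simulated elasticities, not the 191 estimates analysed in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulated study-level estimates: true elasticity of -0.20 with study-specific
# standard errors (illustrative numbers only).
se = rng.uniform(0.05, 0.40, size=60)
effect = -0.20 + rng.normal(0.0, se)

# Egger's regression: standardized effect ~ precision; the slope estimates the
# precision-weighted effect, the intercept measures small-study asymmetry.
precision = 1.0 / se
standardized = effect / se
model = sm.OLS(standardized, sm.add_constant(precision)).fit()
intercept, slope = model.params
print(f"Egger intercept = {intercept:.3f} (p = {model.pvalues[0]:.3f}), "
      f"bias-adjusted effect (slope) = {slope:.3f}")
```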

  2. HIGH RESOLUTION RESISTIVITY LEAK DETECTION DATA PROCESSING and EVALUATION MEHTODS and REQUIREMENTS

    International Nuclear Information System (INIS)

    SCHOFIELD JS

    2007-01-01

    This document has two purposes: • Describe how data generated by High Resolution Resistivity (HRR) leak detection (LD) systems deployed during single-shell tank (SST) waste retrieval operations are processed and evaluated. • Provide the basic review requirements for HRR data when HRR is deployed as a leak detection method during SST waste retrievals.

  3. Band-selective excited ultrahigh resolution PSYCHE-TOCSY: fast screening of organic molecules and complex mixtures.

    Science.gov (United States)

    Kakita, Veera Mohana Rao; Vemulapalli, Sahithya Phani Babu; Bharatam, Jagadeesh

    2016-04-01

    Precise assignments of ¹H atomic sites and establishment of their through-bond COSY or TOCSY connectivity are crucial for molecular structural characterization by using ¹H NMR spectroscopy. However, this exercise is often hampered by signal overlap, primarily because of ¹H-¹H scalar coupling multiplets, even at typical high magnetic fields. The recent developments in homodecoupling strategies for effectively suppressing the coupling multiplets into nice singlets (pure-shift), particularly Morris's advanced broadband pure-shift yielded by chirp excitation (PSYCHE) decoupling and ultrahigh resolution PSYCHE-TOCSY schemes, have shown new possibilities for unambiguous structural elucidation of complex organic molecules. The superior broadband PSYCHE-TOCSY exhibits enhanced performance over the earlier TOCSY methods, which however warrants prolonged experimental times due to the requirement of a large number of dwell increments along the indirect dimension. Herein, we present a fast and band-selective analog of the broadband PSYCHE-TOCSY, which is useful for analyzing complex organic molecules that exhibit characteristic yet crowded spectral regions. The simple pulse scheme relies on band-selective excitation (BSE) followed by PSYCHE homodecoupling in the indirect dimension. The BSE-PSYCHE-TOCSY has been exemplified for Estradiol and a complex carbohydrate mixture comprised of six constituents of closely comparable molecular weights. The experimental times are greatly reduced, viz. ~20-fold for Estradiol and ~10-fold for the carbohydrate mixture, with respect to the broadband PSYCHE-TOCSY. Furthermore, unlike the earlier homonuclear band-selective decoupling, the BSE-PSYCHE decoupling provides fully decoupled pure-shift spectra for all the individual chemical sites within the excited band. The BSE-PSYCHE-TOCSY is expected to have significant potential for quick screening of complex organic molecules and mixtures at ultrahigh resolution. Copyright © 2015 John Wiley

  4. Comparative analysis of time efficiency and spatial resolution between different EIT reconstruction algorithms

    International Nuclear Information System (INIS)

    Kacarska, Marija; Loskovska, Suzana

    2002-01-01

    In this paper, a comparative analysis between different EIT algorithms is presented. The spatial and temporal resolution of the images obtained by several different algorithms is analysed. The dependence of spatial resolution on the data acquisition method is also discussed. The obtained results show that conventional applied-current EIT is more powerful compared to induced-current EIT. (Author)

  5. Feature selection for high-dimensional integrated data

    KAUST Repository

    Zheng, Charles

    2012-04-26

    Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.

  6. Feature selection for high-dimensional integrated data

    KAUST Repository

    Zheng, Charles; Schwartz, Scott; Chapkin, Robert S.; Carroll, Raymond J.; Ivanov, Ivan

    2012-01-01

    Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
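
    One simple reading of the thresholding approach described in these two records can be sketched as below: keep any predictor whose cross-correlation with the response block exceeds a cutoff. The cutoff, toy dimensions and data-generating process are assumptions for illustration, and the stochastic-optimization step used to derive the paper's empirical bounds is not reproduced.

```python
import numpy as np

def threshold_select(X, Y, tau=0.3):
    """Keep predictors in X whose maximum absolute correlation with any
    response column in Y exceeds tau (a hypothetical cutoff for this example)."""
    Xc = (X - X.mean(0)) / X.std(0)
    Yc = (Y - Y.mean(0)) / Y.std(0)
    corr = Xc.T @ Yc / X.shape[0]            # p_x by p_y cross-correlation matrix
    return np.where(np.abs(corr).max(axis=1) > tau)[0]

# Toy data: only the first 10 of 500 predictors are related to the 3 responses.
rng = np.random.default_rng(5)
n, p, q = 100, 500, 3
X = rng.normal(size=(n, p))
Y = X[:, :10] @ rng.normal(size=(10, q)) + rng.normal(size=(n, q))
print(threshold_select(X, Y))
```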

  7. The spatial resolution of epidemic peaks.

    Directory of Open Access Journals (Sweden)

    Harriet L Mills

    2014-04-01

    Full Text Available The emergence of novel respiratory pathogens can challenge the capacity of key health care resources, such as intensive care units, that are constrained to serve only specific geographical populations. An ability to predict the magnitude and timing of peak incidence at the scale of a single large population would help to accurately assess the value of interventions designed to reduce that peak. However, current disease-dynamic theory does not provide a clear understanding of the relationship between: epidemic trajectories at the scale of interest (e.g. city); population mobility; and higher resolution spatial effects (e.g. transmission within small neighbourhoods). Here, we used a spatially-explicit stochastic meta-population model of arbitrary spatial resolution to determine the effect of resolution on model-derived epidemic trajectories. We simulated an influenza-like pathogen spreading across theoretical and actual population densities and varied our assumptions about mobility using Latin-Hypercube sampling. Even though, by design, cumulative attack rates were the same for all resolutions and mobilities, peak incidences were different. Clear thresholds existed for all tested populations, such that models with resolutions lower than the threshold substantially overestimated population-wide peak incidence. The effect of resolution was most important in populations which were of lower density and lower mobility. With the expectation of accurate spatial incidence datasets in the near future, our objective was to provide a framework for how to use these data correctly in a spatial meta-population model. Our results suggest that there is a fundamental spatial resolution for any pathogen-population pair. If underlying interactions between pathogens and spatially heterogeneous populations are represented at this resolution or higher, accurate predictions of peak incidence for city-scale epidemics are feasible.
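
    A minimal sketch of the kind of spatially explicit stochastic meta-population model described above is given below. The 10 × 10 patch grid, uniform patch population, mobility coupling and rate constants are illustrative assumptions, not the authors' calibrated influenza-like model; the point is only that the number of patches plays the role of the spatial resolution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup: 100 patches (the "resolution"), uniform population, and a
# simple distance-free mobility coupling; all values are assumptions.
n_patch, pop = 100, 1_000
beta, gamma, mobility = 0.3, 0.1, 0.02      # per-day rates and coupling strength
S = np.full(n_patch, pop, dtype=int)
I = np.zeros(n_patch, dtype=int)
R = np.zeros(n_patch, dtype=int)
I[0], S[0] = 10, pop - 10                    # seed the epidemic in one patch

daily_incidence = []
for day in range(300):
    # Force of infection mixes local prevalence with the metapopulation mean.
    local_prev = I / pop
    coupled_prev = (1 - mobility) * local_prev + mobility * local_prev.mean()
    p_inf = 1 - np.exp(-beta * coupled_prev)
    new_inf = rng.binomial(S, p_inf)         # stochastic infections per patch
    new_rec = rng.binomial(I, 1 - np.exp(-gamma))
    S -= new_inf
    I += new_inf - new_rec
    R += new_rec
    daily_incidence.append(new_inf.sum())

peak_day = int(np.argmax(daily_incidence))
print(f"peak incidence {max(daily_incidence)} on day {peak_day}")
```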

  8. 13C spin relaxation measurements in RNA: Sensitivity and resolution improvement using spin-state selective correlation experiments

    International Nuclear Information System (INIS)

    Boisbouvier, Jerome; Brutscher, Bernhard; Simorre, Jean-Pierre; Marion, Dominique

    1999-01-01

    A set of new NMR pulse sequences has been designed for the measurement of ¹³C relaxation rate constants in RNA and DNA bases: the spin-lattice relaxation rate constant R(Cz), the spin-spin relaxation rate constant R(C+), and the CSA-dipolar cross-correlated relaxation rate constant Γxy(C,CH). The use of spin-state selective correlation techniques provides increased sensitivity and spectral resolution. Sensitivity-optimised C-C filters are included in the pulse schemes for the suppression of signals originating from undesired carbon isotopomers. The experiments are applied to a 15% ¹³C-labelled 33-mer RNA-theophylline complex. The measured R(C+)/Γxy(C,CH) ratios indicate that ¹³C CSA tensors do not vary significantly for the same type of carbon (C2, C6, C8), but that they differ from one type to another. In addition, conformational exchange effects in the RNA bases are detected as a change in the relaxation decay of the narrow ¹³C doublet component when varying the spacing of a CPMG pulse train. This new approach allows the detection of small exchange effects with a higher precision compared to conventional techniques.

  9. Schedulability analysis for systems with data and control dependencies

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2000-01-01

    In this paper we present an approach to schedulability analysis for hard real-time systems with control and data dependencies. We consider distributed architectures consisting of multiple programmable processors, and the scheduling policy is based on a static priority preemptive strategy. Our model...... of the system captures both data and control dependencies, and the schedulability approach is able to reduce the pessimism of the analysis by using the knowledge about control and data dependencies. Extensive experiments as well as a real-life example demonstrate the efficiency of our approach....

  10. Resolution of carotid stenosis pre-carotid intervention: A case for selective preoperative duplex ultrasound.

    Science.gov (United States)

    Ali, Abid; Ashrafi, Mohammed; Zeynali, Iraj

    2015-01-01

    Spontaneous resolution of carotid stenosis is a phenomenon that has been described in literature in the past. At present it is not routine practise to scan patients prior to carotid endarterectomy surgery within the UK. A 58 year old female presented to hospital with a history of sudden onset headache and left sided weakness. CT head showed findings in keeping with an acute right MCA territory infarct. A duplex ultrasound scan showed echolucent material in the right internal carotid artery forming a greater than 95% stenosis. The scan was unable to visualise the patency of the vessel distally due to the position of the mandible. The patient was provisionally listed for carotid endarterectomy. An MRA was requested prior to surgery to assess the patency of the distal internal carotid artery. The MRA of the carotids showed normal appearance of the common carotid, internal and vertebral arteries with no definite stenosis. A repeat duplex ultrasound confirmed there was no significant stenosis. The finding of complete resolution of stenosis on MRA was an unexpected event. Had the initial duplex imaging allowed visualisation of the distal vessel patency, our patient would have undergone unnecessary carotid surgery with the associated morbidity and mortality. This case report draws attention to the benefits of selective preoperative scanning, in sparing patients from unnecessary surgery as a result of finding occlusion or resolution of a previously diagnosed carotid stenosis. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  11. Spatially dynamic recurrent information flow across long-range dorsal motor network encodes selective motor goals.

    Science.gov (United States)

    Yoo, Peter E; Hagan, Maureen A; John, Sam E; Opie, Nicholas L; Ordidge, Roger J; O'Brien, Terence J; Oxley, Thomas J; Moffat, Bradford A; Wong, Yan T

    2018-03-08

    Performing voluntary movements involves many regions of the brain, but it is unknown how they work together to plan and execute specific movements. We recorded high-resolution ultra-high-field blood-oxygen-level-dependent signal during a cued ankle-dorsiflexion task. The spatiotemporal dynamics and the patterns of task-relevant information flow across the dorsal motor network were investigated. We show that task-relevant information appears and decays earlier in the higher order areas of the dorsal motor network than in the primary motor cortex. Furthermore, the results show that task-relevant information is encoded in general initially, and then selective goals are subsequently encoded in specific subregions across the network. Importantly, the patterns of recurrent information flow across the network vary across different subregions depending on the goal. Recurrent information flow was observed across all higher order areas of the dorsal motor network in the subregions encoding for the current goal. In contrast, only the top-down information flow from the supplementary motor cortex to the frontoparietal regions, with weakened recurrent information flow between the frontoparietal regions and bottom-up information flow from the frontoparietal regions to the supplementary cortex were observed in the subregions encoding for the opposing goal. We conclude that selective motor goal encoding and execution rely on goal-dependent differences in subregional recurrent information flow patterns across the long-range dorsal motor network areas that exhibit graded functional specialization. © 2018 Wiley Periodicals, Inc.

  12. Evaluation of TRMM Multi-satellite Precipitation Analysis (TMPA) performance in the Central Andes region and its dependency on spatial and temporal resolution

    Directory of Open Access Journals (Sweden)

    M. L. M. Scheel

    2011-08-01

    Full Text Available Climate time series are of major importance for base line studies for climate change impact and adaptation projects. However, for instance, in mountain regions and in developing countries there exist significant gaps in ground based climate records in space and time. Specifically, in the Peruvian Andes spatially and temporally coherent precipitation information is a prerequisite for ongoing climate change adaptation projects in the fields of water resources, disasters and food security. The present work aims at evaluating the ability of Tropical Rainfall Measurement Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) to estimate precipitation rates at daily 0.25° × 0.25° scale in the Central Andes and the dependency of the estimate performance on changing spatial and temporal resolution. Comparisons of the TMPA product with gauge measurements in the regions of Cuzco, Peru, and La Paz, Bolivia, were carried out and analysed statistically. Large biases are identified in both investigation areas in the estimation of daily precipitation amounts. The occurrence of strong precipitation events was well assessed, but their intensities were underestimated. TMPA estimates for La Paz show a high false alarm ratio.

    The dependency of the TMPA estimate quality with changing resolution was analysed by comparisons of 1-, 7-, 15- and 30-day sums for Cuzco, Peru. The correlation of TMPA estimates with ground data increases strongly and almost linearly with temporal aggregation. The spatial aggregation to 0.5°, 0.75° and 1° grid box averaged precipitation and its comparison to gauge data of the same areas revealed no significant change in correlation coefficients and estimate performance.

    In order to profit from the TMPA combination product on a daily basis, a procedure to blend it with daily precipitation gauge measurements is proposed.

    Different sources of errors and uncertainties introduced by the sensors, sensor
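    As a minimal illustration of the aggregation analysis described above, the sketch below computes the correlation between satellite and gauge series for 1-, 7-, 15- and 30-day sums; the synthetic daily series and all variable names are hypothetical, not the study's data.

    # Illustrative sketch (not the authors' code): correlation of satellite
    # precipitation estimates with gauge data at increasing temporal aggregation.
    import numpy as np

    def aggregate_sums(daily, window):
        """Sum consecutive daily values into non-overlapping blocks of `window` days."""
        n = (len(daily) // window) * window
        return daily[:n].reshape(-1, window).sum(axis=1)

    def correlation_by_aggregation(tmpa_daily, gauge_daily, windows=(1, 7, 15, 30)):
        """Pearson correlation between aggregated satellite and gauge series."""
        results = {}
        for w in windows:
            sat = aggregate_sums(tmpa_daily, w)
            obs = aggregate_sums(gauge_daily, w)
            results[w] = np.corrcoef(sat, obs)[0, 1]
        return results

    # Hypothetical example with synthetic daily series (mm/day)
    rng = np.random.default_rng(0)
    gauge = rng.gamma(shape=0.5, scale=6.0, size=360)
    tmpa = 0.6 * gauge + rng.gamma(shape=0.5, scale=4.0, size=360)  # biased, noisy estimate
    print(correlation_by_aggregation(tmpa, gauge))  # correlation rises with aggregation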

  13. Mechanisms Controlling Hypoxia Data Atlas: High-resolution hydrographic and chemical observations from 2003-2014

    Science.gov (United States)

    Zimmerle, H.; DiMarco, S. F.

    2016-02-01

    The Mechanisms Controlling Hypoxia (MCH) project consisted of 31 cruises from 2003-2014 with an objective to investigate the physical and biogeochemical processes that control the hypoxic zone on the Texas-Louisiana shelf in the northern Gulf of Mexico. The known seasonal low oxygen conditions in this region are the result of river-derived nutrients, freshwater input, and wind. The MCH Data Atlas showcases in situ data and subsequent products produced over the duration of the project, focusing on oceanographic observations from 2010-2014. The Atlas features 230 high-resolution vertical sections from nine cruises using the Acrobat undulating towed vehicle that contained a CTD along with sensors measuring oxygen, fluorescence, and turbidity. Vertical profiles along the 20-meter isobath section feature temperature, salinity, chlorophyll, and dissolved oxygen from the Acrobat towfish and CTD rosette as well as separate selected profiles from the CTD. Surface planview maps show the horizontal distribution of temperature, salinity, chlorophyll, beam transmission, and CDOM observed by the shipboard flow-through system. Bottom planview maps present the horizontal distribution of dissolved oxygen as well as temperature and salinity from the CTD rosette and Acrobat towfish along the shelf's seafloor. Informational basemaps display the GPS cruise track as well as individual CTD stations for each cruise. The shelf concentrations of CTD rosette bottle nutrients, including nitrate, nitrite, phosphate, ammonia, and silicate are displayed in select plots. Shipboard ADCP current velocity fields are also represented. MCH datasets and additional products are featured as an electronic version to complement the published atlas. The MCH Data Atlas provides a showcase for the spatial and temporal variability of the environmental parameters associated with the annual hypoxic event and will be a useful tool in the continued monitoring and assessment of Gulf coastal hypoxia.

  14. A USER-DEPENDENT PERFECT-SCHEDULING MULTIPLE ACCESS PROTOCOL FOR VOICE-DATA INTEGRATION IN WIRELESS NETWORKS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A novel Multiple Access Control (MAC) protocol - User-dependent Perfect-scheduling Multiple Access (UPMA) protocol, which supports joint transmission of voice and data packets, is proposed. By this protocol, the bandwidth can be allocated dynamically to the uplink and downlink traffic with on-demand assignment, and the transmission of Mobile Terminals (MTs) can be perfectly scheduled by means of polling. Meanwhile, a unique frame structure is designed to guarantee Quality of Service (QoS) in supporting voice traffic. An effective collision resolution algorithm is also proposed to guarantee rapid channel access for activated MTs. Finally, performance of the UPMA protocol is evaluated by simulation and compared with the MPRMA protocol. Simulation results show that the UPMA protocol has better performance.

  15. A USER-DEPENDENT PERFECT-SCHEDULING MULTIPLE ACCESS PROTOCOL FOR VOICE-DATA INTEGRATION IN WIRELESS NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Zhou Yajian; Li Jiandong; Liu Kai

    2002-01-01

    A novel Multiple Access Control (MAC) protocol - User-dependent Perfect-scheduling Multiple Access (UPMA) protocol, which supports joint transmission of voice and data packets, is proposed. By this protocol, the bandwidth can be allocated dynamically to the uplink and downlink traffic with on-demand assignment and the transmission of Mobile Terminals (MTs) can be perfectly scheduled by means of polling. Meanwhile, a unique frame structure is designed to guarantee Quality of Service (QoS) in supporting voice traffic. An effective collision resolution algorithm is also proposed to guarantee rapid channel access for activated MTs. Finally, performance of UPMA protocol is evaluated by simulation and compared with MPRMA protocol. Simulation results show that UPMA protocol has better performance.

  16. Correcting atmospheric effects on InSAR with MERIS water vapour data and elevation-dependent interpolation model

    KAUST Repository

    Li, Z. W.; Xu, Wenbin; Feng, G. C.; Hu, J.; Wang, C. C.; Ding, X. L.; Zhu, J. J.

    2012-01-01

    The propagation delay when radar signals travel from the troposphere has been one of the major limitations for the applications of high precision repeat-pass Interferometric Synthetic Aperture Radar (InSAR). In this paper, we first present an elevation-dependent atmospheric correction model for Advanced Synthetic Aperture Radar (ASAR—the instrument aboard the ENVISAT satellite) interferograms with Medium Resolution Imaging Spectrometer (MERIS) integrated water vapour (IWV) data. Then, using four ASAR interferometric pairs over Southern California as examples, we conduct the atmospheric correction experiments with cloud-free MERIS IWV data. The results show that after the correction the rms differences between InSAR and GPS have reduced by 69.6 per cent, 29 per cent, 31.8 per cent and 23.3 per cent, respectively for the four selected interferograms, with an average improvement of 38.4 per cent. Most importantly, after the correction, six distinct deformation areas have been identified, that is, Long Beach–Santa Ana Basin, Pomona–Ontario, San Bernardino and Elsinore basin, with the deformation velocities along the radar line-of-sight (LOS) direction ranging from −20 mm yr−1 to −30 mm yr−1 and on average around −25 mm yr−1, and Santa Fe Springs and Wilmington, with a slightly low deformation rate of about −10 mm yr−1 along LOS. Finally, through the method of stacking, we generate a mean deformation velocity map of Los Angeles over a period of 5 yr. The deformation is quite consistent with the historical deformation of the area. Thus, using the cloud-free MERIS IWV data correcting synchronized ASAR interferograms can significantly reduce the atmospheric effects in the interferograms and further better capture the ground deformation and other geophysical signals.

  17. Correcting atmospheric effects on InSAR with MERIS water vapour data and elevation-dependent interpolation model

    KAUST Repository

    Li, Z. W.

    2012-05-01

    The propagation delay when radar signals travel from the troposphere has been one of the major limitations for the applications of high precision repeat-pass Interferometric Synthetic Aperture Radar (InSAR). In this paper, we first present an elevation-dependent atmospheric correction model for Advanced Synthetic Aperture Radar (ASAR—the instrument aboard the ENVISAT satellite) interferograms with Medium Resolution Imaging Spectrometer (MERIS) integrated water vapour (IWV) data. Then, using four ASAR interferometric pairs over Southern California as examples, we conduct the atmospheric correction experiments with cloud-free MERIS IWV data. The results show that after the correction the rms differences between InSAR and GPS have reduced by 69.6 per cent, 29 per cent, 31.8 per cent and 23.3 per cent, respectively for the four selected interferograms, with an average improvement of 38.4 per cent. Most importantly, after the correction, six distinct deformation areas have been identified, that is, Long Beach–Santa Ana Basin, Pomona–Ontario, San Bernardino and Elsinore basin, with the deformation velocities along the radar line-of-sight (LOS) direction ranging from −20 mm yr−1 to −30 mm yr−1 and on average around −25 mm yr−1, and Santa Fe Springs and Wilmington, with a slightly low deformation rate of about −10 mm yr−1 along LOS. Finally, through the method of stacking, we generate a mean deformation velocity map of Los Angeles over a period of 5 yr. The deformation is quite consistent with the historical deformation of the area. Thus, using the cloud-free MERIS IWV data correcting synchronized ASAR interferograms can significantly reduce the atmospheric effects in the interferograms and further better capture the ground deformation and other geophysical signals.
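    The following sketch illustrates the general idea of an elevation-dependent atmospheric correction; the linear delay-elevation form, the IWV-to-delay conversion factor (about 6.2), and the default wavelength and incidence values are assumptions for illustration, not necessarily the interpolation model used in the paper.

    import numpy as np

    # Minimal sketch of an elevation-dependent atmospheric correction (assumed
    # linear delay-elevation model; the paper's actual interpolation model may differ).
    PI_FACTOR = 6.2  # approximate conversion from IWV (kg m^-2 ~ mm) to zenith wet delay (mm)

    def fit_delay_vs_elevation(iwv, dem):
        """Fit zenith wet delay [m] as a linear function of elevation [m]."""
        zwd = PI_FACTOR * iwv * 1e-3          # mm -> m
        A = np.vstack([dem.ravel(), np.ones(dem.size)]).T
        slope, intercept = np.linalg.lstsq(A, zwd.ravel(), rcond=None)[0]
        return slope, intercept

    def correct_interferogram(phase, dem, iwv_master, iwv_slave,
                              wavelength=0.0563, incidence_deg=23.0):
        """Remove the differential elevation-dependent delay from an unwrapped interferogram [rad]."""
        s_m, i_m = fit_delay_vs_elevation(iwv_master, dem)
        s_s, i_s = fit_delay_vs_elevation(iwv_slave, dem)
        dzwd = (s_m - s_s) * dem + (i_m - i_s)           # differential zenith delay [m]
        dlos = dzwd / np.cos(np.deg2rad(incidence_deg))  # map to line of sight
        return phase - (4.0 * np.pi / wavelength) * dlos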

  18. Parametric fitting of data obtained from detectors with finite resolution and limited acceptance

    International Nuclear Information System (INIS)

    Gagunashvili, N.D.

    2011-01-01

    A goodness-of-fit test for fitting of a parametric model to data obtained from a detector with finite resolution and limited acceptance is proposed. The parameters of the model are found by minimization of a statistic that is used for comparing experimental data and simulated reconstructed data. Numerical examples are presented to illustrate and validate the fitting procedure.
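    A hedged sketch of the kind of procedure described: a parametric true-level model is folded through an assumed detector response (finite resolution plus limited acceptance) and the parameters are found by minimizing a chi-square-like statistic; the statistic, response matrix and all numbers here are illustrative, not the statistic proposed in the paper.

    import numpy as np
    from scipy.optimize import minimize

    def folded_prediction(params, bin_centers, response, acceptance):
        """True-level model folded with finite resolution and limited acceptance."""
        mu, sigma, norm = params
        true_spectrum = norm * np.exp(-0.5 * ((bin_centers - mu) / sigma) ** 2)
        return response @ (acceptance * true_spectrum)

    def chi2(params, observed, bin_centers, response, acceptance):
        expected = folded_prediction(params, bin_centers, response, acceptance)
        variance = np.maximum(expected, 1.0)       # Poisson-like variance, floored
        return np.sum((observed - expected) ** 2 / variance)

    # Hypothetical setup: Gaussian smearing response on 50 bins
    centers = np.linspace(0, 10, 50)
    res = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / 0.4) ** 2)
    res /= res.sum(axis=0, keepdims=True)          # column-normalized smearing matrix
    acc = np.clip(centers / 10.0, 0.2, 1.0)        # limited acceptance
    truth = folded_prediction((5.0, 1.0, 100.0), centers, res, acc)
    observed = np.random.default_rng(1).poisson(truth)

    fit = minimize(chi2, x0=(4.0, 1.5, 80.0),
                   args=(observed, centers, res, acc), method="Nelder-Mead")
    print(fit.x)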

  19. Content dependent selection of image enhancement parameters for mobile displays

    Science.gov (United States)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players with the capability of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preference. The relationship between the objective measures and the optimal values of image control parameters is modeled by simple lookup tables based on the results of human visual experiments. Content-dependent values of image control parameters are determined based on the calculated measures and predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
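    A minimal sketch of the content-dependent selection step: an objective measure is computed from the frame and an image control parameter is read from a predetermined lookup table; the measure (mean gradient magnitude) and the table values are hypothetical.

    import numpy as np

    def sharpness_measure(gray):
        """Mean gradient magnitude as a simple objective sharpness measure."""
        gy, gx = np.gradient(gray.astype(float))
        return float(np.mean(np.hypot(gx, gy)))

    # Hypothetical lookup table from visual experiments: measure -> sharpening gain
    MEASURE_GRID = np.array([2.0, 5.0, 10.0, 20.0])
    GAIN_GRID    = np.array([1.8, 1.4, 1.1, 1.0])   # blurrier content gets stronger sharpening

    def select_sharpening_gain(gray):
        m = sharpness_measure(gray)
        return float(np.interp(m, MEASURE_GRID, GAIN_GRID))

    frame = np.random.default_rng(0).integers(0, 256, size=(240, 320))
    print(select_sharpening_gain(frame))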

  20. An evaluation of SEBAL algorithm using high resolution aircraft data acquired during BEAREX07

    Science.gov (United States)

    Paul, G.; Gowda, P. H.; Prasad, V. P.; Howell, T. A.; Staggenborg, S.

    2010-12-01

    Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade SEBAL has been tested over various regions and has found its application in solving water resources and irrigation problems. This research combines high resolution remote sensing data and field measurements of the surface radiation and agro-meteorological variables to review various SEBAL steps for mapping ET in the Texas High Plains (THP). High resolution aircraft images (0.5-1.8 m) acquired during the Bushland Evapotranspiration and Agricultural Remote Sensing Experiment 2007 (BEAREX07), conducted at the USDA-ARS Conservation and Production Research Laboratory in Bushland, Texas, were utilized to evaluate SEBAL. Accuracy of individual relationships and predicted ET were investigated using observed hourly ET rates from 4 large weighing lysimeters, each located at the center of a 4.7 ha field. The uniqueness and the strength of this study come from the fact that it evaluates SEBAL for irrigated and dryland conditions simultaneously, with each lysimeter field planted to irrigated forage sorghum, irrigated forage corn, dryland clumped grain sorghum, and dryland row sorghum. Improved coefficients for the local conditions were developed for the computation of roughness length for momentum transport. The decision involved in selection of dry and wet pixels, which essentially determines the partitioning of the available energy between sensible (H) and latent (LE) heat fluxes, is discussed. The difference in roughness length referred to as the kB-1 parameter was modified in the current study. Performance of SEBAL was evaluated using mean bias error (MBE) and root mean square error (RMSE). An RMSE of ±37.68 W m-2 and ±0.11 mm h-1 was observed for the net radiation and hourly actual ET, respectively
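    For reference, the two evaluation statistics used above can be computed as follows; the sample numbers are made up.

    import numpy as np

    def mean_bias_error(pred, obs):
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        return float(np.mean(pred - obs))

    def root_mean_square_error(pred, obs):
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        return float(np.sqrt(np.mean((pred - obs) ** 2)))

    predicted_et = [0.42, 0.55, 0.31, 0.60]   # mm/h, hypothetical SEBAL output
    observed_et  = [0.45, 0.50, 0.35, 0.58]   # mm/h, hypothetical lysimeter data
    print(mean_bias_error(predicted_et, observed_et),
          root_mean_square_error(predicted_et, observed_et))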

  1. Mapping irrigated areas of Ghana using fusion of 30 m and 250 m resolution remote-sensing data

    Science.gov (United States)

    Gumma, M.K.; Thenkabail, P.S.; Hideto, F.; Nelson, A.; Dheeravath, V.; Busia, D.; Rala, A.

    2011-01-01

    Maps of irrigated areas are essential for Ghana's agricultural development. The goal of this research was to map irrigated agricultural areas and explain methods and protocols using remote sensing. Landsat Enhanced Thematic Mapper (ETM+) data and time-series Moderate Resolution Imaging Spectroradiometer (MODIS) data were used to map irrigated agricultural areas as well as other land use/land cover (LULC) classes, for Ghana. Temporal variations in the normalized difference vegetation index (NDVI) pattern obtained in the LULC class were used to identify irrigated and non-irrigated areas. First, the temporal variations in NDVI pattern were found to be more consistent in long-duration irrigated crops than with short-duration rainfed crops due to more assured water supply for irrigated areas. Second, surface water availability for irrigated areas is dependent on shallow dug-wells (on river banks) and dug-outs (in river bottoms) that affect the timing of crop sowing and growth stages, which was in turn reflected in the seasonal NDVI pattern. A decision tree approach using Landsat 30 m one time data fusion with MODIS 250 m time-series data was adopted to classify, group, and label classes. Finally, classes were tested and verified using ground truth data and national statistics. Fuzzy classification accuracy assessment for the irrigated classes varied between 67 and 93%. An irrigated area derived from remote sensing (32,421 ha) was 20-57% higher than irrigated areas reported by Ghana's Irrigation Development Authority (GIDA). This was because of the uncertainties involved in factors such as: (a) absence of shallow irrigated area statistics in GIDA statistics, (b) non-clarity in the irrigated areas in its use, under-development, and potential for development in GIDA statistics, (c) errors of omissions and commissions in the remote sensing approach, and (d) comparison involving widely varying data types, methods, and approaches used in determining irrigated area statistics
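    An illustrative sketch (not the authors' decision tree) of classifying irrigated versus rainfed pixels from an NDVI time series; the synthetic seasonal curves only mimic the idea that irrigated crops sustain higher NDVI for longer.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(42)
    n_pixels, n_dates = 200, 23          # e.g. 16-day composites over one year

    # Synthetic seasonal NDVI: irrigated pixels keep higher NDVI into the dry season
    t = np.linspace(0, 2 * np.pi, n_dates)
    irrigated = 0.55 + 0.25 * np.sin(t) + rng.normal(0, 0.03, (n_pixels // 2, n_dates))
    rainfed   = 0.35 + 0.25 * np.sin(t) * (t < np.pi) + rng.normal(0, 0.03, (n_pixels // 2, n_dates))

    X = np.vstack([irrigated, rainfed])
    y = np.array([1] * (n_pixels // 2) + [0] * (n_pixels // 2))   # 1 = irrigated

    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))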

  2. On the Averaging of Cardiac Diffusion Tensor MRI Data: The Effect of Distance Function Selection

    Science.gov (United States)

    Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.

    2016-01-01

    Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) Metrics were judged by quantitative –rather than qualitative– criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the “swelling effect” occurrence following Euclidean averaging was found to be too unimportant to be worth consideration. PMID:27754986

  3. On the averaging of cardiac diffusion tensor MRI data: the effect of distance function selection

    Science.gov (United States)

    Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.

    2016-11-01

    Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) metrics were judged by quantitative—rather than qualitative—criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the ‘swelling effect’ occurrence following Euclidean averaging was found to be too unimportant to be worth consideration.
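    The three distance functions compared in the paper can be written compactly for symmetric positive-definite tensors; below is a sketch with hypothetical diffusion tensors.

    import numpy as np
    from scipy.linalg import logm, expm, fractional_matrix_power

    def euclidean_distance(A, B):
        return np.linalg.norm(A - B, ord="fro")

    def log_euclidean_distance(A, B):
        return np.linalg.norm(logm(A) - logm(B), ord="fro")

    def affine_invariant_distance(A, B):
        A_inv_sqrt = fractional_matrix_power(A, -0.5)
        return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), ord="fro")

    def log_euclidean_mean(tensors):
        """Average several diffusion tensors in the log-Euclidean sense."""
        return expm(np.mean([logm(T) for T in tensors], axis=0))

    # Hypothetical diffusion tensors (e.g. units of 1e-3 mm^2/s)
    D1 = np.diag([1.7, 0.3, 0.2])
    D2 = np.diag([1.5, 0.4, 0.3])
    print(euclidean_distance(D1, D2),
          log_euclidean_distance(D1, D2),
          affine_invariant_distance(D1, D2))
    print(log_euclidean_mean([D1, D2]))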

  4. Linear transforms for Fourier data on the sphere: application to high angular resolution diffusion MRI of the brain.

    Science.gov (United States)

    Haldar, Justin P; Leahy, Richard M

    2013-05-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.
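    As background, the classical FRT acts diagonally on spherical harmonics, scaling each degree-l coefficient by 2*pi*P_l(0) (zero for odd l); the sketch below shows only this FRT baseline, not the FRACT generalization, and the coefficients are hypothetical.

    import numpy as np
    from scipy.special import eval_legendre

    def frt_in_sh_basis(coeffs_by_degree):
        """Apply the Funk-Radon Transform to spherical-harmonic coefficients.

        coeffs_by_degree: dict {l: array of 2l+1 coefficients}."""
        out = {}
        for l, c in coeffs_by_degree.items():
            out[l] = 2.0 * np.pi * eval_legendre(l, 0.0) * np.asarray(c, float)
        return out

    # Hypothetical even-degree coefficients (typical for antipodally symmetric diffusion signals)
    coeffs = {0: np.array([1.0]),
              2: 0.1 * np.ones(5),
              4: 0.02 * np.ones(9)}
    print(frt_in_sh_basis(coeffs))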

  5. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying Cartesian computational grid. Here we present a novel very memory efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are...

  6. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    Science.gov (United States)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high resolution instruments for remote sensing missions will lead to higher data-rates because of the increase in resolution and dynamic range. For example, the ground resolution improvement has induced a data-rate multiplied by 8 from SPOT4 to SPOT5 [1] and by 28 to PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene, in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows important compression gains by detecting and then differently compressing the regions of interest (ROI) and regions of non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most of CNES high resolution images are cloudy [1], significant mass-memory and transmission gain could be reached by just detecting and suppressing (or compressing significantly) the areas covered by clouds. Since 2007, CNES has been working on a cloud detection module [3] as a simplification for on-board implementation of an already existing module used on-ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation study: reflectance computation, characteristic vector computation (based on multispectral criteria) and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS [5] tools is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high level description language such as C or Matlab/Simulink [6].
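    A hedged sketch of the classification idea only (not the CNES module): an SVM separating cloudy from clear pixels using multispectral reflectance features, with entirely synthetic training data.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n = 500
    # Hypothetical features: [blue, green, red, NIR] top-of-atmosphere reflectance
    clear  = rng.normal([0.08, 0.10, 0.12, 0.30], 0.03, (n, 4))
    cloudy = rng.normal([0.45, 0.47, 0.48, 0.50], 0.05, (n, 4))

    X = np.vstack([clear, cloudy])
    y = np.array([0] * n + [1] * n)          # 1 = cloud

    svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
    print("cloud fraction on a synthetic tile:",
          svm.predict(rng.normal(0.3, 0.15, (1000, 4)).clip(0, 1)).mean())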

  7. Selective area growth of GaN rod structures by MOVPE: Dependence on growth conditions

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shunfeng; Fuendling, Soenke; Wang, Xue; Erenburg, Milena; Al-Suleiman, Mohamed Aid Mansur; Wei, Jiandong; Wehmann, Hergo-Heinrich; Waag, Andreas [Institut fuer Halbleitertechnik, TU Braunschweig, Hans-Sommer-Strasse 66, 38106 Braunschweig (Germany); Bergbauer, Werner [Institut fuer Halbleitertechnik, TU Braunschweig, Hans-Sommer-Strasse 66, 38106 Braunschweig (Germany); Osram Opto Semiconductors GmbH, Leibnizstr. 4, 93055 Regensburg (Germany); Strassburg, Martin [Osram Opto Semiconductors GmbH, Leibnizstr. 4, 93055 Regensburg (Germany)

    2011-07-15

    Selective area growth of GaN nanorods by metalorganic vapor phase epitaxy is highly demanded for novel applications in nano-optoelectronics and nanophotonics. Recently, we reported the successful selective area growth of GaN nanorods in a continuous-flow mode. In this work, as examples, we show the morphology dependence of GaN rods with µm or sub-µm diameters on growth conditions. Firstly, we found that the nitridation time is critical for the growth, with an optimum from 90 to 180 seconds. This leads to more homogeneous N-polar GaN rod growth. A higher temperature during GaN rod growth tends to increase the aspect ratio of the GaN rods. This is due to the enhanced surface diffusion of growth species. The V/III ratio is also an important parameter for the GaN rod growth. Its increase causes a reduction of the aspect ratio of the GaN rods, which could be explained by the relatively lower growth rate on the (000-1) N-polar top surface than on the {1-100} m-planes when supplying more NH₃ (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  8. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    Science.gov (United States)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak
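    A minimal sketch of the sub-grid idea behind UST: fine DEM cells are aggregated into a coarse cell while the full-resolution elevations are kept to build the volume versus free-surface-elevation relationship; the block size and elevations are hypothetical.

    import numpy as np

    def volume_elevation_curve(fine_dem_block, cell_area, levels):
        """Stored water volume in one coarse cell as a function of free-surface elevation."""
        z = fine_dem_block.ravel()
        return np.array([np.sum(np.maximum(0.0, h - z)) * cell_area for h in levels])

    # Hypothetical 16x16 block of 30 m DEM cells aggregated into one coarse cell
    rng = np.random.default_rng(7)
    block = 100.0 + rng.normal(0.0, 1.5, (16, 16))           # elevations [m]
    levels = np.linspace(block.min(), block.min() + 5.0, 6)  # candidate water levels [m]
    print(volume_elevation_curve(block, cell_area=30.0 * 30.0, levels=levels))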

  9. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    Science.gov (United States)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.

  10. Negative frequency-dependent selection between Pasteuria penetrans and its host Meloidogyne arenaria

    Science.gov (United States)

    In negative frequency-dependent selection (NFDS), parasite genotypes capable of infecting the numerically dominant host genotype are favored, while host genotypes resistant to the dominant parasite genotype are favored, creating a cyclical pattern of resistant genotypes in the host population and, a...

  11. Genomic scans for selective sweeps using SNP data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Williamson, Scott; Kim, Yuseob

    2005-01-01

    of the selection coefficient. To illustrate the method, we apply our approach to data from the Seattle SNP project and to Chromosome 2 data from the HapMap project. In Chromosome 2, the most extreme signal is found in the lactase gene, which previously has been shown to be undergoing positive selection. Evidence...

  12. Higher spins tunneling from a time dependent and spherically symmetric black hole

    International Nuclear Information System (INIS)

    Siahaan, Haryanto M.

    2016-01-01

    The discussions of Hawking radiation via the tunneling method have been performed extensively in the case of scalar particles. Moreover, there are also several works discussing the tunneling method for Hawking radiation using higher spins, e.g. neutrino, photon, and gravitino, in the background of static black holes. Interestingly, it is found that the Hawking temperature for static black holes obtained using higher spin particles is no different from the one computed using scalars. In this paper, we study the Hawking radiation for a spherically symmetric and time-dependent black hole using the tunneling of Dirac particles, photon, and gravitino. We find that the obtained Hawking temperature is similar to the one derived in the tunneling method by using scalars. (orig.)

  13. Higher spins tunneling from a time dependent and spherically symmetric black hole

    Energy Technology Data Exchange (ETDEWEB)

    Siahaan, Haryanto M. [Parahyangan Catholic University, Physics Department, Bandung (Indonesia)

    2016-03-15

    The discussions of Hawking radiation via the tunneling method have been performed extensively in the case of scalar particles. Moreover, there are also several works discussing the tunneling method for Hawking radiation using higher spins, e.g. neutrino, photon, and gravitino, in the background of static black holes. Interestingly, it is found that the Hawking temperature for static black holes obtained using higher spin particles is no different from the one computed using scalars. In this paper, we study the Hawking radiation for a spherically symmetric and time-dependent black hole using the tunneling of Dirac particles, photon, and gravitino. We find that the obtained Hawking temperature is similar to the one derived in the tunneling method by using scalars. (orig.)

  14. Combined interpretation of SkyTEM and high-resolution seismic data

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie; Lykke-Andersen, Holger; Jørgensen, Flemming Voldum

    2011-01-01

    made based on AEM (SkyTEM) and high-resolution seismic data from an area covering 10 km2 in the western part of Denmark. As support for the interpretations, an exploration well was drilled to provide lithological and logging information in the form of resistivity and vertical seismic profiling. Based on the resistivity log, synthetic SkyTEM responses were calculated with a varying number of gate-times in order to illustrate the effect of the noise-level. At the exploration well geophysical data were compared to the lithological log; in general there is good agreement. The same tendency was recognised when Sky...

  15. A high-resolution regional reanalysis for Europe

    Science.gov (United States)

    Ohlwein, C.

    2015-12-01

    Reanalyses are gaining more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JRA) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70 km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme is able to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6 km) instead of 0.11° (12 km), and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis, with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations, with special emphasis on precipitation and high-impact weather situations, indicating a better representation of small scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  16. Dynamics of habitat selection in birds: adaptive response to nest predation depends on multiple factors.

    Science.gov (United States)

    Devries, J H; Clark, R G; Armstrong, L M

    2018-05-01

    According to theory, habitat selection by organisms should reflect underlying habitat-specific fitness consequences and, in birds, reproductive success has a strong impact on population growth in many species. Understanding processes affecting habitat selection also is critically important for guiding conservation initiatives. Northern pintails (Anas acuta) are migratory, temperate-nesting birds that breed in greatest concentrations in the prairies of North America and their population remains below conservation goals. Habitat loss and changing land use practices may have decoupled formerly reliable fitness cues with respect to nest habitat choices. We used data from 62 waterfowl nesting study sites across prairie Canada (1997-2009) to examine nest survival, a primary fitness metric, at multiple scales, in combination with estimates of habitat selection (i.e., nests versus random points), to test for evidence of adaptive habitat choices. We used the same habitat covariates in both analyses. Pintail nest survival varied with nest initiation date, nest habitat, pintail breeding pair density, landscape composition and annual moisture. Selection of nesting habitat reflected patterns in nest survival in some cases, indicating adaptive selection, but strength of habitat selection varied seasonally and depended on population density and landscape composition. Adaptive selection was most evident late in the breeding season, at low breeding densities and in cropland-dominated landscapes. Strikingly, at high breeding density, habitat choice appears to become maladaptive relative to nest predation. At larger spatial scales, the relative availability of habitats with low versus high nest survival, and changing land use practices, may limit the reproductive potential of pintails.

  17. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    Directory of Open Access Journals (Sweden)

    Peter Husen

    Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.

  18. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    Science.gov (United States)

    Husen, Peter; Tarasov, Kirill; Katafiasz, Maciej; Sokol, Elena; Vogt, Johannes; Baumgart, Jan; Nitsch, Robert; Ekroos, Kim; Ejsing, Christer S

    2013-01-01

    Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.

  19. High-resolution downscaling for hydrological management

    Science.gov (United States)

    Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos

    2017-04-01

    Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management) aims to bridge the gap between the needs of hydrological modellers and planners, and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expenses are minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations and then only downscaling to the kilometre-scale (convection permitting) those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.

  20. Parotid gland shrinkage during IMRT predicts the time to Xerostomia resolution.

    Science.gov (United States)

    Sanguineti, Giuseppe; Ricchetti, Francesco; Wu, Binbin; McNutt, Todd; Fiorino, Claudio

    2015-01-17

    To assess the impact of mid-treatment parotid gland shrinkage on long-term xerostomia during IMRT for oropharyngeal SCC. All patients treated with IMRT at a single Institution from November 2007 to June 2010 and undergoing weekly CT scans were selected. Parotid glands were contoured retrospectively on the mid-treatment CT scan. For each parotid gland, the percent change relative to the planning volume was calculated and combined as a weighted average. Patients were considered to be xerostomic if they developed grade 2 or higher (GR2+) dry mouth according to CTCAE v3.0. Predictors of the time to xerostomia resolution or downgrade to grade 1 were investigated in both uni- and multivariate analyses. 85 patients were selected. With a median follow up of 35.8 months (range: 2.4-62.6 months), the actuarial rate of xerostomia is 26.2% (SD: 5.3%) and 15.9% (SD: 5.3%) at 2 and 3 yrs, respectively. At multivariate analysis, mid-treatment shrink along with weighted average mean parotid dose at planning and body mass index are independent predictors of the time to xerostomia resolution. Patients were pooled in 4 groups based on median values of both mid-treatment shrink (cut-off: 19.6%) and mean WA parotid pl-D (cut-off: 35.7 Gy). Patients with a higher than median parotid dose at planning and who showed poor shrinkage at mid treatment are the ones with a significantly worse outcome (3-yr rate of xerostomia ≈ 50%) than the other three subgroups (3-yr rate of xerostomia ≈ 10%). For a given planned dose, patients whose parotids significantly shrink during IMRT are less likely to be long-term supplemental fluids dependent.

  1. Mid-Season High-Resolution Satellite Imagery for Forecasting Site-Specific Corn Yield

    Directory of Open Access Journals (Sweden)

    Nahuel R. Peralta

    2016-10-01

    A timely and accurate crop yield forecast is crucial to make better decisions on crop management, marketing, and storage by assessing ahead and implementing based on expected crop performance. The objective of this study was to investigate the potential of high-resolution satellite imagery data collected at mid-growing season for identification of within-field variability and to forecast corn yield at different sites within a field. A test was conducted on yield monitor data and RapidEye satellite imagery obtained for 22 cornfields located in five different counties (Clay, Dickinson, Rice, Saline, and Washington) of Kansas (total of 457 ha). Three basic tests were conducted on the data: (1) spatial dependence of each of the yield and vegetation indices (VIs) using Moran's I test; (2) model selection for the relationship between imagery data and actual yield using ordinary least square regression (OLS) and spatial econometric (SPL) models; and (3) model validation for yield forecasting purposes. Spatial autocorrelation analysis (Moran's I test) for both yield and VIs (red edge NDVI = NDVIre, normalized difference vegetation index = NDVIr, SRre = red-edge simple ratio, near infrared = NIR and green-NDVI = NDVIG) was positive and statistically significant for most of the fields (p < 0.05), except for one. Inclusion of spatial adjustment to the model improved the model fit on most fields as compared to OLS models, with the spatial adjustment coefficient significant for half of the fields studied. When the selected models were used for prediction on the validation dataset, a striking similarity (RMSE = 0.02) was obtained between predicted and observed yield within a field. Yield maps could assist implementing more effective site-specific management tools and could be utilized as a proxy of yield monitor data. In summary, high-resolution satellite imagery data can be reasonably used to forecast yield via utilization of models that include spatial adjustment to
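    For illustration, a global Moran's I can be computed directly from point observations and a spatial weights matrix; the inverse-distance weights and synthetic yield data below are not from the study.

    import numpy as np

    def morans_i(values, coords, max_dist=200.0):
        x = np.asarray(values, float)
        z = x - x.mean()
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        W = np.where((d > 0) & (d <= max_dist), 1.0 / d, 0.0)   # inverse-distance weights
        n, s0 = len(x), W.sum()
        return (n / s0) * (z @ W @ z) / (z @ z)

    rng = np.random.default_rng(1)
    coords = rng.uniform(0, 1000, (150, 2))                          # field positions [m]
    yield_t_ha = 8 + 0.004 * coords[:, 0] + rng.normal(0, 0.5, 150)  # spatial trend -> positive I
    print(morans_i(yield_t_ha, coords))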

  2. Serial data acquisition for the X-ray plasma diagnostics with selected GEM detector structures

    Science.gov (United States)

    Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Zabolotny, W.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zienkiewicz, P.

    2015-10-01

    The measurement system based on a GEM (Gas Electron Multiplier) detector is developed for X-ray diagnostics of magnetic confinement tokamak plasmas. The paper is focused on the measurement subject and describes the fundamental data processing to obtain reliable characteristics (histograms) useful for physicists. The required data processing has two steps: (1) processing in the time domain, i.e. event selection for bunches of coinciding clusters; (2) processing in the planar space domain, i.e. cluster identification for the given detector structure. So, it is the software part of the project between the electronic hardware and physics applications. The whole project is original and it was developed by the paper authors. The previous version based on a 1-D GEM detector was applied for the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures for the new data acquisition system. The fast and accurate mode of data acquisition implemented in the hardware in real time can be applied for dynamic plasma diagnostics. Several detector structures with single-pixel sensors and multi-pixel (directional) sensors are considered for two-dimensional X-ray imaging. Final data processing is presented by histograms for selected ranges of position, time interval and cluster charge values. Exemplary radiation source properties are measured by the basic cumulative characteristics: the cluster position distribution and cluster charge value distribution corresponding to the energy spectra. A shorter version of this contribution is due to be published in PoS at: 1st EPS conference on Plasma Diagnostics

  3. Serial data acquisition for the X-ray plasma diagnostics with selected GEM detector structures

    International Nuclear Information System (INIS)

    Czarski, T.; Chernyshova, M.; Pozniak, K.T.; Kasprowicz, G.; Zabolotny, W.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zienkiewicz, P.

    2015-01-01

    The measurement system based on a GEM (Gas Electron Multiplier) detector is developed for X-ray diagnostics of magnetic confinement tokamak plasmas. The paper is focused on the measurement subject and describes the fundamental data processing to obtain reliable characteristics (histograms) useful for physicists. The required data processing has two steps: (1) processing in the time domain, i.e. event selection for bunches of coinciding clusters; (2) processing in the planar space domain, i.e. cluster identification for the given detector structure. So, it is the software part of the project between the electronic hardware and physics applications. The whole project is original and it was developed by the paper authors. The previous version based on a 1-D GEM detector was applied for the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures for the new data acquisition system. The fast and accurate mode of data acquisition implemented in the hardware in real time can be applied for dynamic plasma diagnostics. Several detector structures with single-pixel sensors and multi-pixel (directional) sensors are considered for two-dimensional X-ray imaging. Final data processing is presented by histograms for selected ranges of position, time interval and cluster charge values. Exemplary radiation source properties are measured by the basic cumulative characteristics: the cluster position distribution and cluster charge value distribution corresponding to the energy spectra. A shorter version of this contribution is due to be published in PoS at: 1st EPS conference on Plasma Diagnostics
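    An illustrative sketch of the two processing steps described above (time-domain event grouping, then histogramming); the coincidence window, timestamps and charges are hypothetical.

    import numpy as np

    def group_coincident(timestamps, window):
        """Return event labels for clusters whose time gaps are within `window`."""
        order = np.argsort(timestamps)
        t = np.asarray(timestamps)[order]
        labels = np.zeros(len(t), dtype=int)
        labels[1:] = np.cumsum(np.diff(t) > window)
        out = np.empty_like(labels)
        out[order] = labels
        return out

    rng = np.random.default_rng(0)
    t_ns = np.sort(rng.uniform(0, 1e6, 2000))          # cluster arrival times [ns]
    charge = rng.gamma(3.0, 50.0, 2000)                # cluster charge [ADC counts]
    events = group_coincident(t_ns, window=100.0)      # 100 ns coincidence window

    charge_hist, edges = np.histogram(charge, bins=64)
    print("events:", events.max() + 1, "charge histogram peak bin:", charge_hist.argmax())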

  4. A Method Based on Intuitionistic Fuzzy Dependent Aggregation Operators for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Fen Wang

    2013-01-01

    Recently, resolving the decision-making problem of evaluating and ranking potential suppliers has become a key strategic factor for business firms. In this paper, two new intuitionistic fuzzy aggregation operators are developed: the dependent intuitionistic fuzzy ordered weighted averaging (DIFOWA) operator and the dependent intuitionistic fuzzy hybrid weighted aggregation (DIFHWA) operator. Some of their main properties are studied. A method based on the DIFHWA operator for intuitionistic fuzzy multiple attribute decision making is presented. Finally, an illustrative example concerning supplier selection is given.
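    As a hedged sketch of the dependent-aggregation idea, the standard intuitionistic fuzzy weighted averaging (IFWA) operator can be combined with data-dependent weights derived from each argument's similarity to the mean; the exact DIFOWA/DIFHWA definitions in the paper may differ from this illustration.

    import numpy as np

    def ifwa(mu, nu, w):
        """Aggregate intuitionistic fuzzy numbers (mu_i, nu_i) with weights w (summing to 1)."""
        mu, nu, w = map(np.asarray, (mu, nu, w))
        return 1.0 - np.prod((1.0 - mu) ** w), np.prod(nu ** w)

    def dependent_weights(mu, nu):
        """Weights proportional to similarity (1 - distance) to the plain average IFN."""
        mu, nu = np.asarray(mu, float), np.asarray(nu, float)
        m_bar, n_bar = mu.mean(), nu.mean()
        dist = 0.5 * (np.abs(mu - m_bar) + np.abs(nu - n_bar))
        sim = 1.0 - dist
        return sim / sim.sum()

    # Hypothetical supplier ratings from three experts as IFNs (membership, non-membership)
    mu = [0.7, 0.6, 0.4]
    nu = [0.2, 0.3, 0.5]
    w = dependent_weights(mu, nu)
    print("weights:", w, "aggregated IFN:", ifwa(mu, nu, w))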

  5. Spin-dependent optics with metasurfaces

    Directory of Open Access Journals (Sweden)

    Xiao Shiyi

    2016-11-01

    The optical spin-Hall effect (OSHE) is a spin-dependent transport phenomenon of light, an analogy to its counterpart in condensed matter physics. Although predicted and observed for decades, this effect has recently attracted enormous interest due to the development of metamaterials and metasurfaces, which can provide tailor-made control of the light-matter interaction and spin-orbit interaction. In parallel to the developments of the OSHE, metasurfaces give us opportunities to manipulate the OSHE to achieve a stronger response, a higher efficiency, a higher resolution, or more degrees of freedom in controlling the wave front. Here, we give an overview of the OSHE based on metasurface-enabled geometric phases in different kinds of configurational spaces and their applications in spin-dependent beam steering, focusing, holograms, structured light generation, and detection. These developments mark the beginning of a new era of spin-enabled optics for future optical components.

  6. High-resolution noise substitution to measure overfitting and validate resolution in 3D structure determination by single particle electron cryomicroscopy.

    Science.gov (United States)

    Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R; Murshudov, Garib N; Short, Judith M; Scheres, Sjors H W; Henderson, Richard

    2013-12-01

    Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. A simple formula can be used to calculate an
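    A simplified 2-D sketch of the procedure (the cryoEM case is 3-D): phases are randomized beyond a chosen resolution while amplitudes are kept, and the FSC of the result is compared with the unmodified FSC; the maps and cutoff are synthetic, and the real part is taken for simplicity rather than preserving Hermitian symmetry.

    import numpy as np

    def radial_frequency(shape):
        f = [np.fft.fftfreq(n) for n in shape]
        fy, fx = np.meshgrid(*f, indexing="ij")
        return np.sqrt(fx ** 2 + fy ** 2)

    def randomize_phases_beyond(img, cutoff, rng):
        """Keep amplitudes, randomize phases for spatial frequencies above `cutoff`."""
        F = np.fft.fft2(img)
        r = radial_frequency(img.shape)
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, img.shape))
        F_rand = np.where(r > cutoff, np.abs(F) * phase, F)
        return np.fft.ifft2(F_rand).real

    def fsc(img1, img2, n_shells=32):
        """Fourier Shell (Ring) Correlation between two maps, by frequency shell."""
        F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
        r = radial_frequency(img1.shape)
        shells = np.linspace(0, 0.5, n_shells + 1)
        out = []
        for lo, hi in zip(shells[:-1], shells[1:]):
            m = (r >= lo) & (r < hi)
            num = np.real(np.sum(F1[m] * np.conj(F2[m])))
            den = np.sqrt(np.sum(np.abs(F1[m]) ** 2) * np.sum(np.abs(F2[m]) ** 2))
            out.append(num / den if den > 0 else 0.0)
        return np.array(out)

    rng = np.random.default_rng(0)
    half1 = rng.normal(size=(128, 128))
    half2 = half1 + rng.normal(scale=0.5, size=(128, 128))
    half2_rand = randomize_phases_beyond(half2, cutoff=0.2, rng=rng)
    print("FSC (original)  :", fsc(half1, half2)[10:20].round(2))
    print("FSC (phase-rand):", fsc(half1, half2_rand)[10:20].round(2))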

  7. Millisecond resolution electron fluxes from the Cluster satellites: Calibrated EDI ambient electron data

    Science.gov (United States)

    Förster, Matthias; Rashev, Mikhail; Haaland, Stein

    2017-04-01

    The Electron Drift Instrument (EDI) onboard Cluster can measure 500 eV and 1 keV electron fluxes with high time resolution during passive operation phases in its Ambient Electron (AE) mode. Data from this mode is available in the Cluster Science Archive since October 2004 with a cadence of 16 Hz in the normal mode or 128 Hz for burst mode telemetry intervals. The fluxes are recorded at pitch angles of 0, 90, and 180 degrees. This paper describes the calibration and validation of these measurements. The high resolution AE data allow precise temporal and spatial diagnostics of magnetospheric boundaries and will be used for case studies and statistical studies of low energy electron fluxes in the near-Earth space. We show examples of applications.

  8. Comparison of frequency-distance relationship and Gaussian-diffusion-based methods of compensation for distance-dependent spatial resolution in SPECT imaging

    International Nuclear Information System (INIS)

    Kohli, Vandana; King, Micgael A.; Glick, Stephen J.; Pan, Tin-Su

    1998-01-01

    The goal of this investigation was to compare resolution recovery versus noise level of two methods for compensation of distance-dependent resolution (DDR) in SPECT imaging. The two methods of compensation were restoration filtering based on the frequency-distance relationship (FDR) prior to iterative reconstruction, and modelling DDR in the projector/backprojector pair employed in iterative reconstruction. FDR restoration filtering was computationally faster than modelling the detector response in iterative reconstruction. Using Gaussian diffusion to model the detector response in iterative reconstruction sped up the process by a factor of 2.5 over frequency domain filtering in the projector/backprojector pair. Gaussian diffusion modelling resulted in a better resolution versus noise tradeoff than either FDR restoration filtering or solely modelling attenuation in the projector/backprojector pair of iterative reconstruction. For the pixel size investigated herein (0.317 cm), accounting for DDR in the projector/backprojector pair by Gaussian diffusion, or by applying a blurring function based on the distance from the face of the collimator at each distance, resulted in very similar resolution recovery and slice noise level. (author)
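    A minimal sketch of the Gaussian-diffusion idea for the projector: a small incremental blur is applied plane by plane so the cumulative blur grows with distance from the collimator; the geometry and sigma increment are hypothetical, not the parameters used in the study.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def project_with_gaussian_diffusion(activity, sigma_step=0.3):
        """activity: 2D array (distance_from_collimator, detector_bin) -> 1D projection."""
        n_planes, n_bins = activity.shape
        acc = np.zeros(n_bins)
        # Start from the plane farthest from the collimator so it receives the most blurring.
        for plane in activity[::-1]:
            acc = gaussian_filter1d(acc + plane, sigma=sigma_step)
        return acc

    phantom = np.zeros((64, 64))
    phantom[40, 32] = 1.0                    # point source far from the collimator
    phantom[5, 16] = 1.0                     # point source near the collimator
    projection = project_with_gaussian_diffusion(phantom)
    print(projection.round(3).nonzero()[0])  # the far source spreads over more bins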

  9. Genetic Particle Swarm Optimization–Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-01-01

    In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect it. The comparison experiments reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm. PMID:27483285

  10. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect it. The comparison experiments reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm.
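
    The mean-to-variance idea behind the RMV fitness can be sketched as follows. The exact definition of RMV and the search loop are assumptions, not the authors' code: the random-mask loop merely stands in for the genetic/particle-swarm hybrid.

```python
import numpy as np

def rmv_fitness(change_features, mask):
    """Ratio of Mean to Variance (RMV) of the selected change features.

    change_features : (n_objects, n_features) per-object change magnitudes
    mask            : boolean vector marking the selected features
    A large ratio rewards subsets whose change values are consistently large
    (high mean, low spread).  This reading of RMV is an assumption.
    """
    sel = change_features[:, mask]
    if sel.size == 0:
        return 0.0
    v = sel.var()
    return sel.mean() / v if v > 0 else 0.0

# stand-in for the GPSO search: score random masks and keep the best one
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(200, 12)))        # hypothetical change features
best_mask = max((rng.random(12) < 0.5 for _ in range(500)),
                key=lambda m: rmv_fitness(X, m))
```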

  11. A new processing scheme for ultra-high resolution direct infusion mass spectrometry data

    Science.gov (United States)

    Zielinski, Arthur T.; Kourtchev, Ivan; Bortolini, Claudio; Fuller, Stephen J.; Giorio, Chiara; Popoola, Olalekan A. M.; Bogialli, Sara; Tapparo, Andrea; Jones, Roderic L.; Kalberer, Markus

    2018-04-01

    High resolution, high accuracy mass spectrometry is widely used to characterise environmental or biological samples with highly complex composition enabling the identification of chemical composition of often unknown compounds. Despite instrumental advancements, the accurate molecular assignment of compounds acquired in high resolution mass spectra remains time consuming and requires automated algorithms, especially for samples covering a wide mass range and large numbers of compounds. A new processing scheme is introduced implementing filtering methods based on element assignment, instrumental error, and blank subtraction. Optional post-processing incorporates common ion selection across replicate measurements and shoulder ion removal. The scheme allows both positive and negative direct infusion electrospray ionisation (ESI) and atmospheric pressure photoionisation (APPI) acquisition with the same programs. An example application to atmospheric organic aerosol samples using an Orbitrap mass spectrometer is reported for both ionisation techniques resulting in final spectra with 0.8% and 8.4% of the peaks retained from the raw spectra for APPI positive and ESI negative acquisition, respectively.
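
    Two of the filtering steps named in the abstract, blank subtraction and common-ion selection across replicates, can be sketched in a few lines. The ppm tolerance, intensity ratio, and function names below are assumptions, and the element-assignment and shoulder-ion filters are not shown.

```python
import numpy as np

def blank_subtract(mz, intensity, blank_mz, blank_intensity,
                   ppm_tol=3.0, blank_ratio=10.0):
    """Drop peaks explained by the blank: a peak is removed if a blank peak
    lies within ppm_tol of its m/z and the sample/blank intensity ratio is
    below blank_ratio (both thresholds are assumptions)."""
    keep = np.ones(mz.size, dtype=bool)
    if blank_mz.size == 0:
        return keep
    for i, (m, s) in enumerate(zip(mz, intensity)):
        d_ppm = np.abs(blank_mz - m) / m * 1e6
        j = np.argmin(d_ppm)
        if d_ppm[j] <= ppm_tol and s < blank_ratio * blank_intensity[j]:
            keep[i] = False
    return keep

def common_ions(mz_lists, ppm_tol=3.0):
    """Keep only m/z values found (within ppm_tol) in every replicate."""
    ref = np.asarray(mz_lists[0])
    ok = [m for m in ref
          if all(np.min(np.abs(np.asarray(other) - m) / m * 1e6) <= ppm_tol
                 for other in mz_lists[1:])]
    return np.array(ok)
```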

  12. Comparing of the Reaction Time in Substance-Dependent and Non-Dependent Individuals

    Directory of Open Access Journals (Sweden)

    Mohammad Narimani

    2012-11-01

    Full Text Available Aim: The aim of this study was to compare simple, selective, and discrimination reaction times in substance-dependent and non-dependent individuals. Method: In this causal-comparative study, the population consisted of 425 males (opium and crystal dependents) who were referred to addiction rehabilitation centers in Tabriz. By random sampling, 16 opium dependents, 16 crystal dependents, and 16 non-dependent individuals with no history of dependency (as the comparison group) were selected. All groups were matched in age and marital status. For data gathering, the “Addicts Admit Questionnaire” and a laboratory device known as the "Reaction Time Assay" were used. Results: The results showed significant differences among the groups in simple reaction time, choice reaction time, and reaction time to auditory stimuli, but no significant difference in discrimination reaction time or reaction time to visual stimuli was observed. Conclusion: The reaction time of the substance-dependent groups is slower than that of the non-dependent group.

  13. Optimized Database of Higher Education Management Using Data Warehouse

    Directory of Open Access Journals (Sweden)

    Spits Warnars

    2010-04-01

    Full Text Available The emergence of new higher education institutions has created competition in the higher education market, and a data warehouse can be used as an effective technology tool for increasing competitiveness in this market. A data warehouse produces reliable reports for the institution's high-level management in a short time, for faster and better decision making, not only on increasing the number of admitted students but also on the possibility of finding extraordinary, unconventional funds for the institution. The efficiency comparison was based on the length and number of processed records, total processed bytes, number of processed tables, time to run a query, and records produced on the OLTP database and the data warehouse. Efficiency percentages were measured with the formula for percentage increase, and the average efficiency percentage of 461,801.04% shows that using the data warehouse is more powerful and efficient than using the OLTP database. The data warehouse was modeled as a hypercube built from the limited set of high-demand reports usually used by high-level management. Fields representing the constructive-merge loading are inserted into every fact and dimension table, and the ETL (Extraction, Transformation and Loading) process is run on the old and new files.
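
    The "formula for percentage increase" cited above can be illustrated with a one-line example. The query times below are hypothetical and serve only to show how averages of several hundred thousand percent arise when a warehouse query is orders of magnitude faster than its OLTP counterpart; the paper's actual metric also mixes record counts, bytes, and table counts.

```python
def percentage_increase(oltp_value, dw_value):
    # efficiency gain of the data warehouse relative to OLTP, in percent
    return (oltp_value - dw_value) / dw_value * 100.0

# hypothetical seconds per report: OLTP vs. data warehouse
print(percentage_increase(1200.0, 0.26))   # ~461438%
```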

  14. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The resolution in the fluoroscopic image, which is visually determined, depends on the dose rate and the inertia of the television pick-up tube. This connection is discussed. In the last few years, video recorders have been increasingly used in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of functional prototypes of high-resolution magnetic video recorders shows that this quality drop may be largely overcome. The influence of electrical bandwidth and number of lines on the resolution in the stored X-ray television image is explained in more detail. (orig.) [de

  15. Heat Transport upon River-Water Infiltration investigated by Fiber-Optic High-Resolution Temperature Profiling

    Science.gov (United States)

    Vogt, T.; Schirmer, M.; Cirpka, O. A.

    2010-12-01

    Infiltrating river water is of high relevance for drinking water supply by river bank filtration as well as for riparian groundwater ecology. Quantifying flow patterns and velocities, however, is hampered by temporal and spatial variations of exchange fluxes. In recent years, heat has become a popular natural tracer to estimate exchange rates between rivers and groundwater. Nevertheless, field investigations are often limited by insufficient sensor spacing or simplifying assumptions such as one-dimensional flow. Our interest lies in a detailed local survey of river water infiltration at a restored section of the losing river Thur in northeast Switzerland. Here, we measured three high-resolution temperature profiles along an assumed flow path by means of distributed temperature sensing (DTS), using fiber-optic cables wrapped around poles. Moreover, piezometers were equipped with standard temperature sensors for comparison with the DTS data. Diurnal temperature oscillations were tracked in the river bed and the riparian groundwater and analyzed by means of dynamic harmonic regression and subsequent modeling of heat transport with sinusoidal boundary conditions to quantify seepage velocities and thermal diffusivities. Compared to the standard temperature sensors, the DTS data give a higher vertical resolution, facilitating the detection of process- and structure-dependent patterns in the spatiotemporal temperature field. This advantage more than compensates for the scatter in the data due to instrument noise. In particular, the high-resolution temperature profiles allowed us to demonstrate the impact of heat conduction through the unsaturated zone on the riparian groundwater.
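
    The first analysis step, extracting the diurnal component at each depth, can be sketched with an ordinary least-squares sinusoid fit; this stands in for the dynamic harmonic regression used in the study, and the synthetic sensor records are made up. Converting the resulting amplitude ratio and phase lag into seepage velocities and thermal diffusivities follows the standard conductive-advective heat-transport solutions, which are not reproduced here.

```python
import numpy as np

def diurnal_component(t_hours, temperature):
    """Fit T(t) = mean + A*cos(w t) + B*sin(w t), w = 2*pi/24 h^-1, and
    return the amplitude and phase of the diurnal oscillation."""
    w = 2.0 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    coef, *_ = np.linalg.lstsq(X, temperature, rcond=None)
    amplitude = np.hypot(coef[1], coef[2])
    phase = np.arctan2(coef[2], coef[1])          # radians
    return amplitude, phase

# synthetic shallow and deep records: amplitude damping and 3 h phase lag
t = np.arange(0, 96, 0.25)                        # four days, 15-min cadence
shallow = 12 + 3.0 * np.cos(2 * np.pi / 24 * (t - 14))
deep = 12 + 1.2 * np.cos(2 * np.pi / 24 * (t - 17))
(a1, p1), (a2, p2) = diurnal_component(t, shallow), diurnal_component(t, deep)
amp_ratio = a2 / a1
lag_hours = ((p2 - p1) % (2 * np.pi)) / (2 * np.pi / 24)
```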

  16. Correcting Biases in a lower resolution global circulation model with data assimilation

    Science.gov (United States)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is added directly inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global and low resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean, and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. We also constrain the stream function along the coasts in order not to have
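
    The stream-function construction described at the end of the abstract can be sketched with a smoothed random field: taking velocities as derivatives of a stream function guarantees a non-divergent perturbation. Here a Gaussian filter stands in for the Diva-based smoothing and topographic decoupling, and the coastal constraint is omitted; grid sizes and smoothing length are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_velocity_perturbation(ny, nx, dx=1.0, smooth=8.0, seed=0):
    """Non-divergent random velocity perturbation from a random stream
    function psi:  u = -dpsi/dy,  v = dpsi/dx."""
    rng = np.random.default_rng(seed)
    psi = gaussian_filter(rng.standard_normal((ny, nx)), smooth)
    dpsi_dy, dpsi_dx = np.gradient(psi, dx)       # axis 0 = y, axis 1 = x
    return -dpsi_dy, dpsi_dx

u, v = random_velocity_perturbation(180, 360)
# the divergence du/dx + dv/dy vanishes up to finite-difference error
divergence = np.gradient(u, axis=1) + np.gradient(v, axis=0)
```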

  17. Ecological genetics of the Bromus tectorum (Poaceae) - Ustilago Bullata (Ustilaginaceae): A role for frequency dependent selection?

    Science.gov (United States)

    Susan E. Meyer; David L. Nelson; Suzette Clement; Alisa Ramakrishnan

    2010-01-01

    Evolutionary processes that maintain genetic diversity in plants are likely to include selection imposed by pathogens. Negative frequency-dependent selection is a mechanism for maintenance of resistance polymorphism in plant - pathogen interactions. We explored whether such selection operates in the Bromus tectorum - Ustilago bullata pathosystem. Gene-for-gene...

  18. Modeling fire behavior on tropical islands with high-resolution weather data

    Science.gov (United States)

    John W. Benoit; Francis M. Fujioka; David R. Weise

    2009-01-01

    In this study, we consider fire behavior simulation in tropical island scenarios such as Hawaii and Puerto Rico. The development of a system to provide real-time fire behavior prediction in Hawaii is discussed. This involves obtaining fuels and topography information at a fine scale, as well as supplying daily high-resolution weather forecast data for the area of...

  19. A higher level language data acquisition system (III) - the user data acquisition program

    International Nuclear Information System (INIS)

    Finn, J.M.; Gulbranson, R.L.; Huang, T.L.

    1983-01-01

    The nuclear physics group at the University of Illinois has implemented a data acquisition system using modified versions of the Concurrent Pascal and Sequential Pascal languages. The user, a physicist, develops a data acquisition ''operating system'', written in these higher level languages, which is tailored to the planned experiment. The user must include only those system functions which are essential to the task, thus improving efficiency. The user program is constructed from simple modules, mainly consisting of Concurrent Pascal PROCESSes, MONITORs, and CLASSes together with appropriate data type definitions. Entire programs can be put together using ''cut and paste'' techniques. Planned enhancements include the automating of this process. Systems written for the Perkin-Elmer 3220 using this approach can easily exceed 2 kHz data rates for event by event handling; 20 kHz data rates have been achieved by the addition of buffers in the interrupt handling software. These rates have been achieved without the use of special-purpose hardware such as micro-programmed branch drivers. With the addition of such devices even higher data rates should be possible

  20. Impact and Implementation of Higher-Order Ionospheric Effects on Precise GNSS Applications

    Science.gov (United States)

    Hadas, T.; Krypiak-Gregorczyk, A.; Hernández-Pajares, M.; Kaplon, J.; Paziewski, J.; Wielgosz, P.; Garcia-Rigo, A.; Kazmierski, K.; Sosnica, K.; Kwasniak, D.; Sierny, J.; Bosy, J.; Pucilowski, M.; Szyszko, R.; Portasiak, K.; Olivares-Pulido, G.; Gulyaeva, T.; Orus-Perez, R.

    2017-11-01

    High precision Global Navigation Satellite Systems (GNSS) positioning and time transfer require correcting signal delays, in particular higher-order ionospheric (I2+) terms. We present a consolidated model to correct second- and third-order terms, geometric bending and differential STEC bending effects in GNSS data. The model has been implemented in an online service correcting observations from submitted RINEX files for I2+ effects. We performed GNSS data processing with and without including I2+ corrections, in order to investigate the impact of I2+ corrections on GNSS products. We selected three time periods representing different ionospheric conditions. We used GPS and GLONASS observations from a global network and two regional networks in Poland and Brazil. We estimated satellite orbits, satellite clock corrections, Earth rotation parameters, troposphere delays, horizontal gradients, and receiver positions using global GNSS solution, Real-Time Kinematic (RTK), and Precise Point Positioning (PPP) techniques. The satellite-related products captured most of the impact of I2+ corrections, with the magnitude up to 2 cm for clock corrections, 1 cm for the along- and cross-track orbit components, and below 5 mm for the radial component. The impact of I2+ on troposphere products turned out to be insignificant in general. I2+ corrections had limited influence on the performance of ambiguity resolution and the reliability of RTK positioning. Finally, we found that I2+ corrections caused a systematic shift in the coordinate domain that was time- and region-dependent and reached up to -11 mm for the north component of the Brazilian stations during the most active ionospheric conditions.

  1. Quality of data used in site selection

    International Nuclear Information System (INIS)

    Delvin, W.L.

    1986-01-01

    The selection of sites for nuclear waste repositories requires an investigative effort to characterize potential sites with regard to geologic properties and environmental considerations. Such investigations generate scientific and engineering data through the experimental testing and evaluation of geologic and environmental materials and through sampling and analysis of those materials. Data generated for site selection must be correct, defendable, and suitable for their intended use; they must have quality. Five quality characteristics are defined and practices followed by scientists and engineers producing data have been grouped into seven categories called quality guides. These are presented in the paper and the relationship between the guides (practices) and the five quality characteristics is shown

  2. Mars, High-Resolution Digital Terrain Model Quadrangles on the Basis of Mars-Express HRSC Data

    Science.gov (United States)

    Dumke, A.; Spiegel, M.; van Gasselt, S.; Neu, D.; Neukum, G.

    2010-05-01

    Introduction: Since December 2003, the European Space Agency's (ESA) Mars Express (MEX) orbiter has been investigating Mars. The High Resolution Stereo Camera (HRSC), one of the scientific experiments onboard MEX, is a pushbroom stereo color scanning instrument with nine line detectors, each equipped with 5176 CCD sensor elements [1,2]. One of the goals for MEX HRSC is to cover Mars globally in color and stereoscopically at high resolution. So far, HRSC has covered half of the surface of Mars at a resolution better than 20 meters per pixel. HRSC data allow the derivation of high-resolution digital terrain models (DTMs), color-orthoimage mosaics and, additionally, higher-level 3D data products. Past work concentrated on producing regional data mosaics for areas of scientific interest in a single strip and/or bundle block adjustment and deriving DTMs [3]. The next logical step, based on substantially the same procedure, is to systematically expand the derivation of DTMs and orthoimage data to the 140 map quadrangle scheme (Q-DTM). Methods: The division of the Mars surface into 140 quadrangles is briefly described in Greeley and Batson [4] and based upon the standard MC 30 (Mars Chart) system. The quadrangles are named by alpha-numerical labels. The workflow for the determination of new orientation data for the derivation of digital terrain models takes place in two steps. First, for each HRSC orbit covering a quadrangle, new exterior orientation parameters are determined [5,6]. The successfully classified exterior orientation parameters become the input for the next step, in which the exterior orientation parameters are determined together in a bundle block adjustment. Only those orbit strips which have a sufficient overlap area and a certain number of tie points can be used in a common bundle block adjustment. For the automated determination of tie points, software provided by the Leibniz Universität Hannover [7] is used. Results: For the derivation of Q-DTMs and ortho

  3. A redundancy-removing feature selection algorithm for nominal data

    Directory of Open Access Journals (Sweden)

    Zhihua Li

    2015-10-01

    Full Text Available No order correlation or similarity metric exists in nominal data, and there will always be more redundancy in a nominal dataset, which means that an efficient mutual information-based nominal-data feature selection method is relatively difficult to find. In this paper, a nominal-data feature selection method based on mutual information without data transformation, called the redundancy-removing more relevance less redundancy algorithm, is proposed. By forming several new information-related definitions and the corresponding computational methods, the proposed method can compute the information-related amount of nominal data directly. Furthermore, by creating a new evaluation function that considers both the relevance and the redundancy globally, the new feature selection method can evaluate the importance of each nominal-data feature. Although the presented feature selection method takes commonly used MIFS-like forms, it is capable of handling high-dimensional datasets without expensive computations. We perform extensive experimental comparisons of the proposed algorithm and other methods using three benchmarking nominal datasets with two different classifiers. The experimental results demonstrate the average advantage of the presented algorithm over the well-known NMIFS algorithm in terms of the feature selection and classification accuracy, which indicates that the proposed method has a promising performance.
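
    The two ingredients described above, computing information-related quantities directly from nominal category counts and scoring each candidate feature by relevance minus redundancy, can be sketched as follows. This is a generic MIFS/mRMR-style stand-in: the function names and the greedy scheme are assumptions, not the paper's exact evaluation function.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Mutual information (in nats) between two nominal variables, estimated
    from joint category counts; no data transformation is needed."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        mi += (c / n) * np.log(c * n / (px[a] * py[b]))
    return mi

def greedy_select(features, labels, k):
    """Greedy selection: maximise relevance to the class minus the mean
    redundancy with already selected features.

    features : list of nominal sequences (one per feature), labels : sequence
    """
    remaining = list(range(len(features)))
    chosen = []
    while remaining and len(chosen) < k:
        def score(j):
            rel = mutual_information(features[j], labels)
            red = (np.mean([mutual_information(features[j], features[s])
                            for s in chosen]) if chosen else 0.0)
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```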

  4. Size dependent magnetism of mass selected deposited transition metal clusters

    International Nuclear Information System (INIS)

    Lau, T.

    2002-05-01

    The size dependent magnetic properties of small iron clusters deposited on ultrathin Ni/Cu(100) films have been studied with circularly polarised synchrotron radiation. For X-ray magnetic circular dichroism studies, the magnetic moments of size selected clusters were aligned perpendicular to the sample surface. Exchange coupling of the clusters to the ultrathin Ni/Cu(100) film determines the orientation of their magnetic moments. All clusters are coupled ferromagnetically to the underlayer. With the use of sum rules, orbital and spin magnetic moments as well as their ratios have been extracted from X-ray magnetic circular dichroism spectra. The ratio of orbital to spin magnetic moments varies considerably as a function of cluster size, reflecting the dependence of magnetic properties on cluster size and geometry. These variations can be explained in terms of a strongly size dependent orbital moment. Both orbital and spin magnetic moments are significantly enhanced in small clusters as compared to bulk iron, although this effect is more pronounced for the spin moment. Magnetic properties of deposited clusters are governed by the interplay of cluster specific properties on the one hand and cluster-substrate interactions on the other hand. Size dependent variations of magnetic moments are modified upon contact with the substrate. (orig.)

  5. [Dependent relative: Effects on family health].

    Science.gov (United States)

    Estrada Fernández, M Eugenia; Gil Lacruz, Ana I; Gil Lacruz, Marta; Viñas López, Antonio

    2018-01-01

    The purpose of this work is to analyse the effects on informal caregivers' health and lifestyle of living with a dependent person at home. A comparison is made between this situation and other situations involving a commitment of time and energy, taking into account gender and age differences in each stage of the life cycle. Cross-sectional study analysing secondary data. The method used for collecting information was the computer-assisted personal interview carried out in selected homes by the Ministry of Health, Social Services and Equality. The study included 19,351 participants aged over 25 years who completed the 2011-2012 Spanish National Health Survey. This research is based on demographic information obtained from the Spanish National Health Survey (2011/12). Using an empirical framework, the Logit model was selected and the data are reported as odds ratios. The estimations were repeated independently for sub-groups of age and gender. The study showed that the health of people who share their lives with a dependent person is worse than that of those who do not have a dependent person at home (they are at 5 times higher risk of developing health problems). Being a woman, advanced age, low educational level and not working also have an influence. Being a caregiver reduces the likelihood of maintaining a healthy lifestyle through physical exercise, relaxation, or eating a balanced diet. Living with a dependent person reduces the likelihood of maintaining healthy lifestyles and worsens the state of health of family members. Significant differences in gender and age were found. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  6. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    Directory of Open Access Journals (Sweden)

    Chiao-Ling Lo

    2016-08-01

    Full Text Available Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169) with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  7. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    Science.gov (United States)

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

    Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169) with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  8. Analysis of a high-resolution regional climate simulation for Alpine temperature. Validation and influence of the NAO

    Energy Technology Data Exchange (ETDEWEB)

    Proemmel, K. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Kuestenforschung]

    2008-11-06

    elevations reaches -3.5 K, whereas by applying a monthly varying lapse rate based on the station data it reaches only about -1 K. The comparison of the REMO simulation and ERA40 reanalysis shows that the added value of the former varies between seasons and regions. In some regions it also depends on the selection of stations used for the validation. Robust features include a better performance of REMO in the inner Alpine subregions, where the orography is most complex. The lack of consistent value added by REMO in this hindcast setup may be partly explicable by the fact that meteorological measurements are assimilated in the ERA40 reanalysis but not in the REMO simulation. As the higher resolution leads to an added value in the simulation of temperature, at least in the most complex areas, the question is addressed whether it also leads to more detailed structures in the temperature response to circulation variability. In this study the temperature response to the North Atlantic Oscillation (NAO) with its strong influence on European winter climate is analysed over the GAR by using a very dense homogenised station dataset (HISTALP and stations from Austrian and Swiss weather services), the high-resolution simulation (for information in areas, where no station data are available) and the reanalysis. In earlier studies only a few individual stations or gridded data not higher resolved than 1 were used. The temperature signals based on the station data and based on the model data have very similar patterns and are in agreement with the European-wide pattern. The highly resolved model data show an additional clear small-scale pattern with a strong signal south of the main Alpine ridge potentially caused by the foehn effect. This small-scale structure is not visible in the reanalysis due to the coarser resolution and was also not found in previous studies based on both station and model data for the same reason. (orig.)

  9. A selective overview of feature screening for ultrahigh-dimensional data.

    Science.gov (United States)

    Liu, JingYuan; Zhong, Wei; Li, RunZe

    2015-10-01

    High-dimensional data have frequently been collected in many scientific areas including genomewide association study, biomedical imaging, tomography, tumor classifications, and finance. Analysis of high-dimensional data poses many challenges for statisticians. Feature selection and variable selection are fundamental for high-dimensional data analysis. The sparsity principle, which assumes that only a small number of predictors contribute to the response, is frequently adopted and deemed useful in the analysis of high-dimensional data. Following this general principle, a large number of variable selection approaches via penalized least squares or likelihood have been developed in the recent literature to estimate a sparse model and select significant variables simultaneously. While the penalized variable selection methods have been successfully applied in many high-dimensional analyses, modern applications in areas such as genomics and proteomics push the dimensionality of data to an even larger scale, where the dimension of data may grow exponentially with the sample size. This has been called ultrahigh-dimensional data in the literature. This work aims to present a selective overview of feature screening procedures for ultrahigh-dimensional data. We focus on insights into how to construct marginal utilities for feature screening on specific models and motivation for the need of model-free feature screening procedures.
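
    The "marginal utility" idea behind feature screening is easiest to see in its simplest instance, Sure Independence Screening for the linear model: rank predictors by an utility computed one predictor at a time and keep the top few. The sketch below uses the absolute marginal Pearson correlation and the conventional cutoff d = n/log(n); the model-free screeners reviewed in the paper replace this utility with others (e.g. distance correlation) in the same ranking role. The data and function name are illustrative.

```python
import numpy as np

def sis_screen(X, y, d=None):
    """Sure Independence Screening: rank predictors by absolute marginal
    correlation with the response and keep the top d (default n/log n)."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    utility = np.abs(Xs.T @ ys) / n           # |marginal Pearson correlation|
    keep = np.argsort(utility)[::-1][:d]
    return keep, utility

# ultrahigh-dimensional toy problem with p >> n and two active predictors
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5000))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
kept, _ = sis_screen(X, y)
```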

  10. Unwrapped phase inversion for near surface seismic data

    KAUST Repository

    Choi, Yun Seok

    2012-11-04

    Phase wrapping is one of the main obstacles in waveform inversion. We use an inversion algorithm based on the instantaneous traveltime that overcomes the phase-wrapping problem. With a high damping factor, the frequency-dependent instantaneous-traveltime inversion provides the stability of refraction tomography, with higher-resolution results and no arrival picking involved. We apply the instantaneous-traveltime inversion to synthetic data generated by elastic time-domain modeling. The synthetic data are representative of near-surface seismic data. Although the inversion algorithm is based on the acoustic wave equation, the numerical examples show that the instantaneous-traveltime inversion generates a convergent velocity model, very similar to what we see from traveltime tomography.

  11. Design and performance of a spin-polarized electron energy loss spectrometer with high momentum resolution

    Energy Technology Data Exchange (ETDEWEB)

    Vasilyev, D.; Kirschner, J. [Max-Planck-Institut für Mikrostrukturphysik, Weinberg 2, 06120 Halle (Germany)]

    2016-08-15

    We describe a new “complete” spin-polarized electron energy loss spectrometer comprising a spin-polarized primary electron source, an imaging electron analyzer, and a spin analyzer of the “spin-polarizing mirror” type. Unlike previous instruments, we have a high momentum resolution of less than 0.04 Å⁻¹, at an energy resolution of 90-130 meV. Unlike all previous studies, which reported rather broad featureless data in both energy and angle dependence, we find richly structured spectra depending sensitively on small changes of the primary energy, the kinetic energy after scattering, and the angle of incidence. The key factor is the momentum resolution.

  12. High-resolution noise substitution to measure overfitting and validate resolution in 3D structure determination by single particle electron cryomicroscopy

    International Nuclear Information System (INIS)

    Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R.; Murshudov, Garib N.; Short, Judith M.; Scheres, Sjors H.W.; Henderson, Richard

    2013-01-01

    Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. A simple formula can be used to calculate an

  13. High-resolution noise substitution to measure overfitting and validate resolution in 3D structure determination by single particle electron cryomicroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R.; Murshudov, Garib N.; Short, Judith M.; Scheres, Sjors H.W.; Henderson, Richard, E-mail: rh15@mrc-lmb.cam.ac.uk

    2013-12-15

    Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. A simple formula can be used to calculate an
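
    The HR-noise substitution on a 2-D particle stack, together with the correction formula usually quoted for this procedure, can be sketched as follows. Array shapes (square particle images), the helper names and the random seed are assumptions; a production implementation would work on the half-maps inside the chosen reconstruction package rather than on a standalone stack.

```python
import numpy as np

def randomise_phases_beyond(images, pixel_size_A, resolution_A, seed=0):
    """Replace the phases of each particle image beyond resolution_A (in
    Angstrom) with random phases while keeping the amplitudes, so the
    substituted noise has the same spectral power distribution as the data.
    Strictly, the random phases should respect Hermitian symmetry; taking the
    real part below is a shortcut acceptable for a sketch."""
    rng = np.random.default_rng(seed)
    n = images.shape[-1]                        # square images assumed
    freq = np.fft.fftfreq(n, d=pixel_size_A)    # cycles per Angstrom
    fy, fx = np.meshgrid(freq, freq, indexing="ij")
    beyond = np.hypot(fx, fy) > 1.0 / resolution_A
    out = np.empty_like(images, dtype=float)
    for k, img in enumerate(images):
        F = np.fft.fft2(img)
        phase = np.angle(F)
        phase[beyond] = rng.uniform(-np.pi, np.pi, int(beyond.sum()))
        out[k] = np.fft.ifft2(np.abs(F) * np.exp(1j * phase)).real
    return out

def fsc_true(fsc_t, fsc_n):
    """Overfitting-corrected FSC beyond the substitution resolution:
    FSC_true = (FSC_t - FSC_n) / (1 - FSC_n)."""
    return (fsc_t - fsc_n) / (1.0 - fsc_n)
```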

  14. An Efficient Approach for Pixel Decomposition to Increase the Spatial Resolution of Land Surface Temperature Images from MODIS Thermal Infrared Band Data

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2014-12-01

    Full Text Available Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m under nadir) has limited their applicability to many studies requiring high spatial resolution, in comparison with the MODIS VNIR band data at pixel scales of 250–500 m. In this paper we intend to develop an efficient approach for pixel decomposition to increase the spatial resolution of the MODIS LST image using the VNIR band data as assistance. The unique feature of this approach is to maintain the thermal radiance of parent pixels in the MODIS LST image unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. Therefore the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of LST change with the normalized difference vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS data, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was done in the study to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference to compare with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error

  15. An efficient approach for pixel decomposition to increase the spatial resolution of land surface temperature images from MODIS thermal infrared band data.

    Science.gov (United States)

    Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe

    2014-12-25

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m under nadir) has limited their applicability to many studies requiring high spatial resolution, in comparison with the MODIS VNIR band data at pixel scales of 250-500 m. In this paper we intend to develop an efficient approach for pixel decomposition to increase the spatial resolution of the MODIS LST image using the VNIR band data as assistance. The unique feature of this approach is to maintain the thermal radiance of parent pixels in the MODIS LST image unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. Therefore the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of LST change with the normalized difference vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS data, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was done in the study to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference to compare with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error (RMSE) of 2
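
    The defining constraint of the approach, keeping the parent pixel's thermal radiance unchanged after decomposition, can be shown with a grey-body simplification. The Stefan-Boltzmann relation below replaces the per-band Planck equation of the paper, and the sub-pixel weights are hypothetical stand-ins for the NDVI/NDBI-derived temperature estimates.

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m-2 K-4

def decompose_lst(parent_lst_K, weights):
    """Split one coarse LST pixel into sub-pixel temperatures whose mean
    thermal radiance equals that of the parent pixel (grey-body, L ~ T^4)."""
    parent_radiance = SIGMA * parent_lst_K ** 4
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                 # mean sub-pixel radiance = parent radiance
    sub_radiance = parent_radiance * w
    return (sub_radiance / SIGMA) ** 0.25

# one 1000 m LST pixel split into 4x4 sub-pixels with hypothetical weights
sub_lst = decompose_lst(300.0, np.linspace(0.9, 1.1, 16)).reshape(4, 4)
```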

  16. Higher order net-proton number cumulants dependence on the centrality definition and other spurious effects

    Science.gov (United States)

    Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.

    2018-02-01

    We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of \sqrt{s_{NN}} = 7.7 GeV. Using the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency of the centrality determination introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher-order cumulants in most central collisions.
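
    For reference, the cumulants and cumulant ratios compared in such analyses can be computed directly from the event-by-event net-proton numbers, as in the sketch below. This is a plain sample estimate; the centrality binning, efficiency corrections and statistical uncertainties discussed in the paper are not included.

```python
import numpy as np

def net_proton_cumulants(net_p):
    """First four cumulants of an event-by-event net-proton distribution and
    the ratios commonly quoted (C2/C1, C3/C2, C4/C2)."""
    x = np.asarray(net_p, dtype=float)
    mu = x.mean()
    d = x - mu
    c1 = mu
    c2 = np.mean(d ** 2)
    c3 = np.mean(d ** 3)
    c4 = np.mean(d ** 4) - 3.0 * c2 ** 2
    return c1, c2, c3, c4, (c2 / c1, c3 / c2, c4 / c2)

# toy event sample: protons minus antiprotons per event
rng = np.random.default_rng(0)
events = rng.poisson(15, size=100000) - rng.poisson(1, size=100000)
print(net_proton_cumulants(events))
```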

  17. A Framework for Six Sigma Project Selection in Higher Educational Institutions, Using a Weighted Scorecard Approach

    Science.gov (United States)

    Holmes, Monica C.; Jenicke, Lawrence O.; Hempel, Jessica L.

    2015-01-01

    Purpose: This paper discusses the importance of the Six Sigma selection process, describes a Six Sigma project in a higher educational institution and presents a weighted scorecard approach for project selection. Design/Methodology/Approach: A case study of the Six Sigma approach being used to improve student support at a university computer help…

  18. Super-resolution reconstruction of 4D-CT lung data via patch-based low-rank matrix reconstruction

    Science.gov (United States)

    Fang, Shiting; Wang, Huafeng; Liu, Yueliang; Zhang, Minghui; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2017-10-01

    Lung 4D computed tomography (4D-CT), which is a time-resolved CT data acquisition, plays an important role in explicitly including respiratory motion in treatment planning and delivery. However, the radiation dose is usually reduced at the expense of inter-slice spatial resolution to minimize radiation-related health risk. Therefore, resolution enhancement along the superior-inferior direction is necessary. In this paper, a super-resolution (SR) reconstruction method based on patch low-rank matrix reconstruction is proposed to improve the resolution of lung 4D-CT images. Specifically, a low-rank matrix related to every patch is constructed by using a patch searching strategy. Thereafter, singular value shrinkage is employed to recover the high-resolution patch under the constraints of the image degradation model. The output high-resolution patches are finally assembled to form the entire image. This method is extensively evaluated using two public data sets. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 9.7%-33.4% and the edge width by 11.4%-24.3%, relative to linear interpolation, back projection (BP) and Zhang et al's algorithm. A new algorithm has been developed to improve the resolution of 4D-CT. In all experiments, the proposed method outperforms various interpolation methods, as well as BP and Zhang et al's method, thus indicating the effectiveness and competitiveness of the proposed algorithm.
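
    The core operation, recovering a low-rank approximation of a patch-group matrix by singular value shrinkage, takes only a few lines. In the sketch below the threshold is a free parameter, whereas the paper derives it from the image degradation model and the noise level; the function name is illustrative.

```python
import numpy as np

def svd_shrink(patch_matrix, tau):
    """Singular-value soft-thresholding of a matrix whose columns are similar
    patches: the low-rank surrogate used in patch-based reconstruction."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# toy patch group: 64-pixel patches stacked as columns, rank reduced by shrinkage
rng = np.random.default_rng(0)
group = rng.normal(size=(64, 30))
low_rank = svd_shrink(group, tau=5.0)
```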

  19. OPAL: selection and acquisition of LEP data

    International Nuclear Information System (INIS)

    Le Du, P.

    1985-01-01

    The OPAL project (Omni Purpose aparatus for LEP) is presented. It will be a frame and an example to explain the main problems and limitations concerning the mode of event selection, acquisition and information transfer to the final registering system. A quick review of the different problems related to data selection and acquisition is made [fr

  20. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kotasidis, Fotis A. [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ, Manchester (United Kingdom); Angelis, Georgios I. [Faculty of Health Sciences, Brain and Mind Research Institute, University of Sydney, NSW 2006, Sydney (Australia); Anton-Rodriguez, Jose; Matthews, Julian C. [Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Reader, Andrew J. [Montreal Neurological Institute, McGill University, Montreal QC H3A 2B4, Canada and Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King' s College London, St. Thomas’ Hospital, London SE1 7EH (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30 001, Groningen 9700 RB (Netherlands)

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  1. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    International Nuclear Information System (INIS)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  2. Isotope specific resolution recovery image reconstruction in high resolution PET imaging.

    Science.gov (United States)

    Kotasidis, Fotis A; Angelis, Georgios I; Anton-Rodriguez, Jose; Matthews, Julian C; Reader, Andrew J; Zaidi, Habib

    2014-05-01

    Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction. The

  3. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    Science.gov (United States)

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
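
    The coarse-to-fine idea can be sketched schematically: solve a heavily coarsened factorization first, then reuse its spectral factor to initialise the next, finer level. A plain multiplicative-update NMF stands in for the constrained pure-component factorisation of the paper, and the coarsening factors, iteration counts and function names are assumptions.

```python
import numpy as np

def coarsen(D, factor):
    """Average every `factor` consecutive spectra (rows) of the data matrix."""
    n = (D.shape[0] // factor) * factor
    return D[:n].reshape(-1, factor, D.shape[1]).mean(axis=1)

def nmf_updates(D, C, S, iters=200):
    """Multiplicative-update NMF for nonnegative D ~ C @ S (stand-in for the
    constrained factorisation solver)."""
    eps = 1e-12
    for _ in range(iters):
        C *= (D @ S.T) / (C @ S @ S.T + eps)
        S *= (C.T @ D) / (C.T @ C @ S + eps)
    return C, S

def multiresolution_mcr(D, n_components, factors=(8, 4, 2, 1)):
    """Coarse-to-fine factorisation: the spectral factor S from one level
    initialises the next, so most iterations run on small problems."""
    rng = np.random.default_rng(0)
    S = rng.random((n_components, D.shape[1]))
    for f in factors:
        Dc = coarsen(D, f)
        C = rng.random((Dc.shape[0], n_components))
        C, S = nmf_updates(Dc, C, S)
    return C, S

# toy nonnegative data: 800 spectra of 2 components over 500 channels
rng = np.random.default_rng(1)
D = rng.random((800, 2)) @ rng.random((2, 500))
C, S = multiresolution_mcr(D, n_components=2)
```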

  4. Event dependent sampling of recurrent events

    DEFF Research Database (Denmark)

    Kvist, Tine Kajsa; Andersen, Per Kragh; Angst, Jules

    2010-01-01

    The effect of event-dependent sampling of processes consisting of recurrent events is investigated when analyzing whether the risk of recurrence increases with event count. We study the situation where processes are selected for study if an event occurs in a certain selection interval. Motivation...... retrospective and prospective disease course histories are used. We examine two methods to correct for the selection depending on which data are used in the analysis. In the first case, the conditional distribution of the process given the pre-selection history is determined. In the second case, an inverse...

  5. Coastal habitat mapping in the Aegean Sea using high resolution orthophoto maps

    Science.gov (United States)

    Topouzelis, Konstantinos; Papakonstantinou, Apostolos; Doukari, Michaela; Stamatis, Panagiotis; Makri, Despina; Katsanevakis, Stelios

    2017-09-01

    The significance of coastal habitat mapping lies in the need to protect coastal habitats from anthropogenic interventions and other factors. Until 2015, Landsat-8 (30 m) imagery was the medium spatial resolution satellite imagery in use. Sentinel-2 satellite imagery is now very useful for more detailed regional scale mapping. However, the use of high resolution orthophoto maps derived from UAV data is expected to improve the mapping accuracy further, owing to the small spatial resolution of the orthophoto maps (30 cm). This paper outlines the integration of UAS for data acquisition and a Structure from Motion (SfM) pipeline for the visualization of selected coastal areas in the Aegean Sea. Additionally, the produced orthophoto maps were analyzed through object-based image analysis (OBIA) and nearest-neighbor classification for mapping the coastal habitats. Classification classes included the main general habitat types, i.e. seagrass, soft bottom, and hard bottom. The developed methodology was applied at Koumbara beach (Ios Island, Greece). Results showed that the UAS data revealed the sub-bottom complexity in large shallow areas, since they provide such information at a spatial resolution that permits the mapping of seagrass meadows in extreme detail. The produced habitat vectors are ideal as reference data for studies with satellite data of lower spatial resolution.

  6. A High Resolution, Light-Weight, Synthetic Aperture Radar for UAV Application

    International Nuclear Information System (INIS)

    Doerry, A.W.; Hensley, W.H.; Stence, J.; Tsunoda, S.I.; Pace, F.; Walker, B.C.; Woodring, M.

    1999-06-01

    (U) Sandia National Laboratories in collaboration with General Atomics (GA) has designed and built a high resolution, light-weight, Ku-band Synthetic Aperture Radar (SAR) known as ''Lynx''. Although Lynx can be operated on a wide variety of manned and unmanned platforms, its design is optimized for use on medium altitude Unmanned Aerial Vehicles (UAVs). In particular, it can be operated on the Predator, I-GNAT, and Prowler II platforms manufactured by GA. (U) The radar production weight is less than 120 lb, and the radar operates within a 3 GHz band from 15.2 GHz to 18.2 GHz with a peak output power of 320 W. Operating range is resolution- and mode-dependent but can exceed 45 km in adverse weather (4 mm/hr rain). Lynx has operator-selectable resolution and is capable of 0.1 m resolution in spotlight mode and 0.3 m resolution in strip map mode, over substantial depression angles (5 to 60 deg) and squint angles (broadside and ±45 deg). Real-time Motion Compensation is implemented to allow high-quality image formation even during vehicle turns and other maneuvers.

  7. Clickstream data yields high-resolution maps of science.

    Science.gov (United States)

    Bollen, Johan; Van de Sompel, Herbert; Hagberg, Aric; Bettencourt, Luis; Chute, Ryan; Rodriguez, Marko A; Balakireva, Lyudmila

    2009-01-01

    Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science. Over the course of 2007 and 2008, we collected nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia. The resulting reference data set covers a significant part of world-wide use of scholarly web portals in 2006, and provides a balanced coverage of the humanities, social sciences, and natural sciences. A journal clickstream model, i.e. a first-order Markov chain, was extracted from the sequences of user interactions in the logs. The clickstream model was validated by comparing it to the Getty Research Institute's Architecture and Art Thesaurus. The resulting model was visualized as a journal network that outlines the relationships between various scientific domains and clarifies the connection of the social sciences and humanities to the natural sciences. Maps of science resulting from large-scale clickstream data provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data.
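
    A toy version of the first-order Markov chain extraction: journal-to-journal transition probabilities are estimated from click sequences. The sessions below are invented placeholders for the portal logs.

    # Build journal-to-journal transition probabilities from clickstream sessions.
    from collections import defaultdict

    sessions = [
        ["J Neurosci", "Neuron", "Nature"],
        ["Nature", "Science", "Nature"],
        ["Neuron", "J Neurosci", "Neuron"],
    ]

    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            counts[src][dst] += 1                      # one observed transition per click pair

    transition_probs = {
        src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
        for src, dsts in counts.items()
    }
    # e.g. transition_probs["Neuron"] -> {"Nature": 0.5, "J Neurosci": 0.5} for this toy data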

  8. Precision Viticulture from Multitemporal, Multispectral Very High Resolution Satellite Data

    Science.gov (United States)

    Kandylakis, Z.; Karantzalos, K.

    2016-06-01

    In order to efficiently exploit very high resolution satellite multispectral data for precision agriculture applications, validated methodologies should be established which link the observed reflectance spectra with certain crop/plant/fruit biophysical and biochemical quality parameters. To this end, based on concurrent satellite and field campaigns during the veraison period, satellite and in-situ data were collected, along with several grape samples, at specific locations during the harvesting period. These data were collected for a period of three years in two viticultural areas in Northern Greece. After the required data pre-processing, canopy reflectance observations, through the combination of several vegetation indices, were correlated with the quantitative results from the grape/must analysis of the grape sampling. Results appear quite promising, indicating that certain key quality parameters (like brix levels, total phenolic content, brix to total acidity, and anthocyanin levels) which describe the oenological potential, phenolic composition and chromatic characteristics can be efficiently estimated from the satellite data.
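
    As a rough illustration of the index-to-quality correlation step, the snippet below computes NDVI from red/NIR canopy reflectance at a handful of sampling locations and correlates it with brix; all values are synthetic placeholders, not the campaign data.

    import numpy as np

    red = np.array([0.08, 0.10, 0.07, 0.12, 0.09])    # canopy reflectance per sample site
    nir = np.array([0.52, 0.44, 0.58, 0.40, 0.50])
    brix = np.array([23.1, 21.4, 24.0, 20.2, 22.5])   # must sugar content from grape sampling

    ndvi = (nir - red) / (nir + red)                  # normalized difference vegetation index
    r = np.corrcoef(ndvi, brix)[0, 1]                 # Pearson correlation with the quality parameter
    print(f"NDVI vs brix: r = {r:.2f}")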

  9. A multi-step strategy to obtain crystals of the dengue virus RNA-dependent RNA polymerase that diffract to high resolution

    International Nuclear Information System (INIS)

    Yap, Thai Leong; Chen, Yen Liang; Xu, Ting; Wen, Daying; Vasudevan, Subhash G.; Lescar, Julien

    2007-01-01

    Crystals of the RNA-dependent RNA polymerase catalytic domain from the dengue virus NS5 protein have been obtained using a strategy that included expression screening of naturally occurring serotype variants of the protein, the addition of divalent metal ions and crystal dehydration. These crystals diffract to 1.85 Å resolution and are thus suitable for a structure-based drug-design program. Dengue virus, a member of the Flaviviridae genus, causes dengue fever, an important emerging disease with several million infections occurring annually for which no effective therapy exists. The viral RNA-dependent RNA polymerase NS5 plays an important role in virus replication and represents an interesting target for the development of specific antiviral compounds. Crystals that diffract to 1.85 Å resolution that are suitable for three-dimensional structure determination and thus for a structure-based drug-design program have been obtained using a strategy that included expression screening of naturally occurring serotype variants of the protein, the addition of divalent metal ions and crystal dehydration

  10. A multi-step strategy to obtain crystals of the dengue virus RNA-dependent RNA polymerase that diffract to high resolution

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Thai Leong [Novartis Institute for Tropical Diseases, 10 Biopolis Road, Chromos Building, Singapore 138670 (Singapore); School of Biological Sciences, Nanyang Technological University, 60 Nanyang Drive, Singapore 637551 (Singapore); Chen, Yen Liang; Xu, Ting; Wen, Daying; Vasudevan, Subhash G. [Novartis Institute for Tropical Diseases, 10 Biopolis Road, Chromos Building, Singapore 138670 (Singapore); Lescar, Julien, E-mail: julien@ntu.edu.sg [Novartis Institute for Tropical Diseases, 10 Biopolis Road, Chromos Building, Singapore 138670 (Singapore); School of Biological Sciences, Nanyang Technological University, 60 Nanyang Drive, Singapore 637551 (Singapore)

    2007-02-01

    Crystals of the RNA-dependent RNA polymerase catalytic domain from the dengue virus NS5 protein have been obtained using a strategy that included expression screening of naturally occurring serotype variants of the protein, the addition of divalent metal ions and crystal dehydration. These crystals diffract to 1.85 Å resolution and are thus suitable for a structure-based drug-design program. Dengue virus, a member of the Flaviviridae genus, causes dengue fever, an important emerging disease with several million infections occurring annually for which no effective therapy exists. The viral RNA-dependent RNA polymerase NS5 plays an important role in virus replication and represents an interesting target for the development of specific antiviral compounds. Crystals that diffract to 1.85 Å resolution that are suitable for three-dimensional structure determination and thus for a structure-based drug-design program have been obtained using a strategy that included expression screening of naturally occurring serotype variants of the protein, the addition of divalent metal ions and crystal dehydration.

  11. Is a 4-bit synaptic weight resolution enough? - constraints on enabling spike-timing dependent plasticity in neuromorphic hardware.

    Science.gov (United States)

    Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2012-01-01

    Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists.
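
    The core of the weight-discretization analysis can be mimicked with a simple quantizer; the sketch below maps continuous weights to 16 levels (4 bits) and reports the quantization error. The weight range and rounding rule are assumptions, not the FACETS hardware mapping.

    import numpy as np

    def discretize(weights, n_bits=4, w_max=1.0):
        # Map continuous weights in [0, w_max] onto 2**n_bits evenly spaced levels.
        levels = 2 ** n_bits - 1
        step = w_max / levels
        return np.clip(np.round(weights / step), 0, levels) * step

    w = np.random.uniform(0.0, 1.0, size=1000)         # floating-point reference weights
    w4 = discretize(w)                                  # 4-bit hardware representation
    print("mean quantisation error:", np.abs(w - w4).mean())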

  12. The scale-dependent impact of wolf predation risk on resource selection by three sympatric ungulates.

    Science.gov (United States)

    Kittle, Andrew M; Fryxell, John M; Desy, Glenn E; Hamr, Joe

    2008-08-01

    Resource selection is a fundamental ecological process impacting population dynamics and ecosystem structure. Understanding which factors drive selection is vital for effective species- and landscape-level management. We used resource selection probability functions (RSPFs) to study the influence of two forms of wolf (Canis lupus) predation risk, snow conditions and habitat variables on white-tailed deer (Odocoileus virginianus), elk (Cervus elaphus) and moose (Alces alces) resource selection in central Ontario's mixed forest French River-Burwash ecosystem. Direct predation risk was defined as the frequency of a predator's occurrence across the landscape and indirect predation risk as landscape features associated with a higher risk of predation. Models were developed for two winters, each at two spatial scales, using a combination of GIS-derived and ground-measured data. Ungulate presence was determined from snow track transects in 64 16-km² and 128 1-km² resource units, and direct predation risk from GPS radio collar locations of four adjacent wolf packs. Ungulates did not select resources based on the avoidance of areas of direct predation risk at any scale, and instead exhibited selection patterns that trade off predation risk minimization against forage and/or mobility requirements. Elk did not avoid indirect predation risk, while both deer and moose exhibited inconsistent responses to this risk. Direct predation risk was more important to models than indirect predation risk but overall, abiotic topographical factors were most influential. These results indicate that wolf predation risk does not limit ungulate habitat use at the scales investigated and that responses to spatial sources of predation risk are complex, incorporating a variety of anti-predator behaviours. Moose resource selection was influenced less by snow conditions than cover type, particularly selection for dense forest, whereas deer showed the opposite pattern. Temporal and spatial scale

  13. Joint Variable Selection and Classification with Immunohistochemical Data

    Directory of Open Access Journals (Sweden)

    Debashis Ghosh

    2009-01-01

    To determine if candidate cancer biomarkers have utility in a clinical setting, validation using immunohistochemical methods is typically done. Most analyses of such data have not incorporated the multivariate nature of the staining profiles. In this article, we consider modelling such data using recently developed ideas from the machine learning community. In particular, we consider the joint goals of feature selection and classification. We develop estimation procedures for the analysis of immunohistochemical profiles using the least absolute shrinkage and selection operator (LASSO). These lead to novel and flexible models and algorithms for the analysis of compositional data. The techniques are illustrated using data from a cancer biomarker study.
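
    A minimal stand-in for the joint feature-selection-and-classification idea: an L1-penalized logistic regression drives uninformative staining features to zero coefficients. The simulated staining matrix, labels, and regularization strength are assumptions, not the study data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.random((120, 25))                          # 120 tissue cores x 25 staining features
    y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(120) > 0.8).astype(int)

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    selected = np.flatnonzero(clf.coef_[0])            # markers retained by the L1 penalty
    print("selected feature indices:", selected)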

  14. Analysis of Interactive Conflict Resolution Tool Usage in a Mixed Equipage Environment

    Science.gov (United States)

    Homola, Jeffrey; Morey, Susan; Cabrall, Christopher; Martin, Lynne; Mercer, Joey; Prevot, Thomas

    2013-01-01

    A human-in-the-loop simulation was conducted that examined separation assurance concepts in varying levels of traffic density with mixtures of aircraft equipage and automation. This paper's analysis focuses on one of the experimental conditions in which traffic levels were approximately fifty percent higher than today, and approximately fifty percent of the traffic within the test area was equipped with data communications (data comm) capabilities. The other fifty percent of the aircraft required control by voice much like today. Within this environment, the air traffic controller participants were provided access to tools and automation designed to support the primary task of separation assurance that are currently unavailable. Two tools were selected for analysis in this paper: 1) a pre-probed altitude fly-out menu that provided instant feedback of conflict probe results for a range of altitudes, and 2) an interactive auto resolver that provided on-demand access to an automation-generated conflict resolution trajectory. Although encouraged, use of the support tools was not required; the participants were free to use the tools as they saw fit, and they were also free to accept, reject, or modify the resolutions offered by the automation. This mode of interaction provided a unique opportunity to examine exactly when and how these tools were used, as well as how acceptable the resolutions were. Results showed that the participants used the pre-probed altitude fly-out menu in 14% of conflict cases and preferred to use it in a strategic timeframe on data comm equipped and level flight aircraft. The interactive auto resolver was also used in a primarily strategic timeframe, on 22% of conflicts, and the participants likewise preferred to use it on conflicts involving data comm equipped aircraft. Of the 258 resolutions displayed, 46% were implemented and 54% were not. The auto resolver was rated highly by participants in terms of confidence and preference. Factors such as

  15. A multi-resolution envelope-power based model for speech intelligibility

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Ewert, Stephan D.; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope power signal-to-noise ratio (SNRenv) after modulation-frequency selective processing. Changes in this metric were shown to account well...... to conditions with stationary interferers, due to the long-term integration of the envelope power, and cannot account for increased intelligibility typically obtained with fluctuating maskers. Here, a multi-resolution version of the sEPSM is presented where the SNRenv is estimated in temporal segments...... with a modulation-filter dependent duration. The multi-resolution sEPSM is demonstrated to account for intelligibility obtained in conditions with stationary and fluctuating interferers, and noisy speech distorted by reverberation or spectral subtraction. The results support the hypothesis that the SNRenv...

  16. Feature Selection of Network Intrusion Data using Genetic Algorithm and Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Iwan Syarif

    2016-12-01

    This paper describes the advantages of using Evolutionary Algorithms (EA) for feature selection on network intrusion datasets. Most current Network Intrusion Detection Systems (NIDS) are unable to detect intrusions in real time because of the high-dimensional data produced during daily operation. Extracting knowledge from huge data such as intrusion data requires new approaches. The more complex the datasets, the higher the computation time and the harder they are to interpret and analyze. This paper investigates the performance of feature selection algorithms on network intrusion data. We used Genetic Algorithms (GA) and Particle Swarm Optimization (PSO) as feature selection algorithms. When applied to network intrusion datasets, both GA and PSO significantly reduced the number of features. Our experiments show that GA successfully reduces the number of attributes from 41 to 15, while PSO reduces the number of attributes from 41 to 9. Using k-Nearest Neighbour (k-NN) as a classifier, the GA-reduced dataset, which consists of 37% of the original attributes, improves accuracy from 99.28% to 99.70% and its execution time is 4.8 times faster than that of the original dataset. Using the same classifier, the PSO-reduced dataset, which consists of 22% of the original attributes, has the fastest execution time (7.2 times faster than that of the original dataset). However, its accuracy is slightly reduced by 0.02%, from 99.28% to 99.26%. Overall, both GA and PSO are good solutions for feature selection because they have shown very good performance in reducing the number of features significantly while maintaining, and sometimes improving, classification accuracy as well as reducing computation time.
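
    A compact sketch of the GA-plus-k-NN pipeline: binary chromosomes encode feature subsets and cross-validated k-NN accuracy serves as the fitness. The synthetic 41-feature dataset and all GA settings are placeholders, not the intrusion data or the authors' parameters.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(42)
    X, y = make_classification(n_samples=300, n_features=41, n_informative=10, random_state=0)

    def fitness(mask):
        # Cross-validated k-NN accuracy on the selected feature subset.
        if not mask.any():
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(knn, X[:, mask], y, cv=3).mean()

    pop = rng.random((20, 41)) < 0.5                   # initial population of feature masks
    for generation in range(15):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]        # keep the fittest half
        children = []
        for _ in range(10):
            a, b = parents[rng.integers(10)], parents[rng.integers(10)]
            cut = rng.integers(1, 41)
            child = np.concatenate([a[:cut], b[cut:]])                      # one-point crossover
            children.append(np.logical_xor(child, rng.random(41) < 0.02))  # bit-flip mutation
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("features kept:", int(best.sum()), "CV accuracy:", round(fitness(best), 4))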

  17. Genomic prediction in early selection stages using multi-year data in a hybrid rye breeding program.

    Science.gov (United States)

    Bernal-Vasquez, Angela-Maria; Gordillo, Andres; Schmidt, Malthe; Piepho, Hans-Peter

    2017-05-31

    The use of multiple genetic backgrounds across years is appealing for genomic prediction (GP) because past years' data provide valuable information on marker effects. Nonetheless, single-year GP models are less complex and computationally less demanding than multi-year GP models. In devising a suitable analysis strategy for multi-year data, we may exploit the fact that even if there is no replication of genotypes across years, there is plenty of replication at the level of marker loci. Our principal aim was to evaluate different GP approaches to simultaneously model genotype-by-year (GY) effects and breeding values using multi-year data in terms of predictive ability. The models were evaluated under different scenarios reflecting common practice in plant breeding programs, such as different degrees of relatedness between training and validation sets, and using a selected fraction of genotypes in the training set. We used empirical grain yield data of a rye hybrid breeding program. A detailed description of the prediction approaches highlighting the use of kinship for modeling GY is presented. Using the kinship to model GY was advantageous in particular for datasets disconnected across years. On average, predictive abilities were 5% higher for models using kinship to model GY than for models without kinship. We confirmed that using data from multiple selection stages provides valuable GY information and helps increase predictive ability. This increase is on average 30% higher when the predicted genotypes are closely related to the genotypes in the training set. A selection of top-yielding genotypes together with the use of kinship to model GY improves the predictive ability in datasets composed of single years of several selection cycles. Our results clearly demonstrate that the use of multi-year data and appropriate modeling is beneficial for GP because it allows dissecting GY effects from genomic estimated breeding values. The model choice, as well as ensuring

  18. Detection and identification of drugs and toxicants in human body fluids by liquid chromatography-tandem mass spectrometry under data-dependent acquisition control and automated database search.

    Science.gov (United States)

    Oberacher, Herbert; Schubert, Birthe; Libiseller, Kathrin; Schweissgut, Anna

    2013-04-03

    Systematic toxicological analysis (STA) is aimed at detecting and identifying all substances of toxicological relevance (i.e. drugs, drugs of abuse, poisons and/or their metabolites) in biological material. Particularly, gas chromatography-mass spectrometry (GC/MS) represents a competent and commonly applied screening and confirmation tool. Herein, we present an untargeted liquid chromatography-tandem mass spectrometry (LC/MS/MS) assay aimed to complement existing GC/MS screening for the detection and identification of drugs in blood, plasma and urine samples. Solid-phase extraction was accomplished on mixed-mode cartridges. LC was based on gradient elution in a miniaturized C18 column. High resolution electrospray ionization-MS/MS in positive ion mode with data-dependent acquisition control was used to generate tandem mass spectral information that enabled compound identification via automated library search in the "Wiley Registry of Tandem Mass Spectral Data, MSforID". Fitness of the developed LC/MS/MS method for application in STA in terms of selectivity, detection capability and reliability of identification (sensitivity/specificity) was demonstrated with blank samples, certified reference materials, proficiency test samples, and authentic casework samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Global land cover mapping at 30 m resolution: A POK-based operational approach

    Science.gov (United States)

    Chen, Jun; Chen, Jin; Liao, Anping; Cao, Xin; Chen, Lijun; Chen, Xuehong; He, Chaoying; Han, Gang; Peng, Shu; Lu, Miao; Zhang, Weiwei; Tong, Xiaohua; Mills, Jon

    2015-05-01

    Global Land Cover (GLC) information is fundamental for environmental change studies, land resource management, sustainable development, and many other societal benefits. Although GLC data exists at spatial resolutions of 300 m and 1000 m, a 30 m resolution mapping approach is now a feasible option for the next generation of GLC products. Since most significant human impacts on the land system can be captured at this scale, a number of researchers are focusing on such products. This paper reports the operational approach used in such a project, which aims to deliver reliable data products. Over 10,000 Landsat-like satellite images are required to cover the entire Earth at 30 m resolution. To derive a GLC map from such a large volume of data necessitates the development of effective, efficient, economic and operational approaches. Automated approaches usually provide higher efficiency and thus more economic solutions, yet existing automated classification has been deemed ineffective because of the low classification accuracy achievable (typically below 65%) at global scale at 30 m resolution. As a result, an approach based on the integration of pixel- and object-based methods with knowledge (POK-based) has been developed. To handle the classification process of 10 land cover types, a split-and-merge strategy was employed, i.e. each class is first identified in a prioritized sequence and the results are then merged together. For the identification of each class, a robust integration of pixel- and object-based classification was developed. To improve the quality of the classification results, a knowledge-based interactive verification procedure was developed with the support of web service technology. The performance of the POK-based approach was tested using eight selected areas with differing landscapes from five different continents. An overall classification accuracy of over 80% was achieved. This indicates that the developed POK-based approach is effective and feasible.

  20. Dose-dependent EEG effects of zolpidem provide evidence for GABA(A) receptor subtype selectivity in vivo.

    Science.gov (United States)

    Visser, S A G; Wolters, F L C; van der Graaf, P H; Peletier, L A; Danhof, M

    2003-03-01

    Zolpidem is a nonbenzodiazepine GABA(A) receptor modulator that binds in vitro with high affinity to GABA(A) receptors expressing alpha(1) subunits but with relatively low affinity to receptors expressing alpha(2), alpha(3), and alpha(5) subunits. In the present study, it was investigated whether this subtype selectivity could be detected and quantified in vivo. Three doses (1.25, 5, and 25 mg) of zolpidem were administered to rats in an intravenous infusion over 5 min. The time course of the plasma concentrations was determined in conjunction with the change in the beta-frequency range of the EEG as the pharmacodynamic endpoint. The concentration-effect relationship of the three doses showed a dose-dependent maximum effect and a dose-dependent potency. The data were analyzed for one- or two-site binding using two pharmacodynamic models based on 1) the descriptive model and 2) a novel mechanism-based pharmacokinetic/pharmacodynamic (PK/PD) model for GABA(A) receptor modulators that aims to separate drug- and system-specific properties, thereby allowing the estimation of in vivo affinity and efficacy. The application of two-site models significantly improved the fits compared with one-site models. Furthermore, in contrast to the descriptive model, the mechanism-based PK/PD model yielded dose-independent estimates for affinity (97 +/- 40 and 33,100 +/- 14,800 ng/ml). In conclusion, the mechanism-based PK/PD model is able to describe and explain the observed dose-dependent EEG effects of zolpidem and suggests the subtype selectivity of zolpidem in vivo.
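
    For readers unfamiliar with the descriptive approach, a basic sigmoid Emax concentration-effect fit looks like the sketch below; the concentrations, EEG effect values, and starting estimates are invented, and the mechanism-based receptor model used in the study is considerably more elaborate.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid_emax(conc, e0, emax, ec50, hill):
        # Baseline effect plus a Hill-type saturable drug effect.
        return e0 + emax * conc**hill / (ec50**hill + conc**hill)

    conc = np.array([10, 30, 100, 300, 1000, 3000, 10000], dtype=float)   # ng/ml (made up)
    effect = np.array([2.1, 4.0, 8.5, 14.2, 18.9, 21.0, 21.8])            # EEG beta-range change

    params, _ = curve_fit(sigmoid_emax, conc, effect, p0=[2.0, 20.0, 300.0, 1.0])
    e0, emax, ec50, hill = params
    print(f"EC50 ~ {ec50:.0f} ng/ml, Emax ~ {emax:.1f}")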

  1. The high-resolution regional reanalysis COSMO-REA6

    Science.gov (United States)

    Ohlwein, C.

    2016-12-01

    Reanalyses are gaining more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JRA) produce and verify these data sets to provide time series as long as possible combined with a high data quality. Due to a spatial resolution down to 50-70 km and 3-hourly temporal output, they are not suitable for small scale problems (e.g., regional climate assessment, meso-scale NWP verification, input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited area model along with a data assimilation scheme makes it possible to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6 km) instead of 0.11° (12 km), and comprises the assimilation of observational data using the existing nudging scheme of COSMO complemented by a special soil moisture analysis with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations with special emphasis on precipitation and high-impact weather situations, indicating a better representation of small scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  2. SU-E-T-96: Energy Dependence of the New GafChromic- EBT3 Film's Dose Response-Curve.

    Science.gov (United States)

    Chiu-Tsao, S; Massillon-Jl, G; Domingo-Muñoz, I; Chan, M

    2012-06-01

    To study and compare the dose response curves of the new GafChromic EBT3 film for megavoltage and kilovoltage x-ray beams, with different spatial resolutions. Two sets of EBT3 films (lot#A101711-02) were exposed to each x-ray beam (6MV, 15MV and 50kV) at 8 dose values (50-3200cGy). The megavoltage beams were calibrated per the AAPM TG-51 protocol while the kilovoltage beam was calibrated following TG-61 using an ionization chamber calibrated at NIST. Each film piece was scanned three consecutive times in the center of an Epson 10000XL flatbed scanner in transmission mode, landscape orientation, 48-bit color at two separate spatial resolutions of 75 and 300 dpi. The data were analyzed using ImageJ and, for each scanned image, a region of interest (ROI) of 2 × 2 cm² at the field center was selected to obtain the mean pixel value with its standard deviation in the ROI. For each energy, dose value and spatial resolution, the average netOD and its associated uncertainty were determined. The Student's t-test was performed to evaluate the statistical differences between the netOD/dose values of the three energy modalities, with different color channels and spatial resolutions. The dose response curves for the three energy modalities were compared in three color channels at 75 and 300 dpi. Weak energy dependence was found. For doses above 100 cGy, no statistical differences were observed between 6 and 15 MV beams, regardless of spatial resolution. However, statistical differences were observed between 50 kV and the megavoltage beams. The degree of energy dependence (from MV to 50 kV) was found to be a function of color channel, dose level and spatial resolution. The dose response curves for GafChromic EBT3 films were found to be weakly dependent on the energy of the photon beams from 6 MV to 50 kV. The degree of energy dependence varies with color channel, dose and spatial resolution. GafChromic EBT3 films were supplied by Ashland Corp. This work was partially supported by DGAPA

  3. Data acquisition system for high resolution chopper spectrometer (HRC) at J-PARC

    International Nuclear Information System (INIS)

    Yano, Shin-ichiro; Itoh, Shinichi; Satoh, Setsuo; Yokoo, Tetsuya; Kawana, Daichi; Sato, Taku J.

    2011-01-01

    We installed the data acquisition (DAQ) system on the High Resolution Chopper Spectrometer (HRC) at beamline BL12 at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC). In inelastic neutron scattering experiments with the HRC, the event data of the detected neutrons are processed in the DAQ system and visualized in the form of the dynamic structure factor. We confirmed that the data analysis process works well by visualizing excitations in single-crystal magnetic systems probed by inelastic neutron scattering.

  4. Experimental determination of spin-dependent electron density by joint refinement of X-ray and polarized neutron diffraction data.

    Science.gov (United States)

    Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed

    2012-11-01

    New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well-known Hansen-Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H₂O)₃·2H₂O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than the refinement from PND data alone.

  5. Fast resolution of the neutron diffusion equation through public domain Ode codes

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, V.M.; Vidal, V.; Garayoa, J. [Universidad Politecnica de Valencia, Departamento de Sistemas Informaticos, Valencia (Spain); Verdu, G. [Universidad Politecnica de Valencia, Departamento de Ingenieria Quimica y Nuclear, Valencia (Spain); Gomez, R. [I.E.S. de Tavernes Blanques, Valencia (Spain)

    2003-07-01

    The time-dependent neutron diffusion equation is a partial differential equation with source terms. The resolution method usually includes discretizing the spatial domain, obtaining a large system of linear, stiff ordinary differential equations (ODEs), whose resolution is computationally very expensive. Some standard techniques use a fixed time step to solve the ODE system. This can result in errors (if the time step is too large) or in long computing times (if the time step is too small). To speed up the resolution method, two well-known public domain codes have been selected: DASPK and FCVODE, which are powerful codes for the resolution of large systems of stiff ODEs. These codes can estimate the error after each time step, and, depending on this estimate, can decide which is the new time step and, possibly, which is the integration method to be used in the next step. With these mechanisms, it is possible to keep the overall error below the chosen tolerances, and, when the system behaves smoothly, to take large time steps, increasing the execution speed. In this paper we address the use of the public domain codes DASPK and FCVODE for the resolution of the time-dependent neutron diffusion equation. The efficiency of these codes depends largely on the preconditioning of the big systems of linear equations that must be solved. Several pre-conditioners have been programmed and tested; it was found that the multigrid method is the best of the pre-conditioners tested. Also, it has been found that DASPK has performed better than FCVODE, being more robust for our problem. We can conclude that the use of specialized codes for solving large systems of ODEs can drastically reduce the computational work needed for the solution; and combining them with appropriate pre-conditioners, the reduction can be even greater. It has other crucial advantages, since it allows the user to specify the allowed error, which cannot be done in fixed-step implementations; this, of course
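
    The gist of using an adaptive, error-controlled stiff integrator (as DASPK and FCVODE do) can be shown with SciPy's BDF method on a small semi-discretized 1-D diffusion problem; the grid, cross sections, source, and tolerances below are placeholders, not the reactor model used in the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    N = 200                                            # spatial cells after discretization
    dx = 1.0 / (N + 1)
    D, sigma_a, source = 1.0, 0.5, 1.0                 # diffusion coeff., absorption, source

    def rhs(t, phi):
        # Finite-difference Laplacian with zero boundary values (vacuum-like condition).
        lap = np.empty_like(phi)
        lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
        lap[0] = (phi[1] - 2 * phi[0]) / dx**2
        lap[-1] = (phi[-2] - 2 * phi[-1]) / dx**2
        return D * lap - sigma_a * phi + source

    phi0 = np.zeros(N)
    # BDF adapts the step size to keep the local error below rtol/atol,
    # taking large steps once the stiff transient has decayed.
    sol = solve_ivp(rhs, (0.0, 5.0), phi0, method="BDF", rtol=1e-6, atol=1e-9)
    print("accepted time points:", sol.t.size, "final peak flux:", sol.y[:, -1].max())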

  6. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and recently the competitions hosted on the online platform...... performers from Kaggle and use previous personal experiences from competing in Kaggle competitions. The stated hypotheses about feature engineering, ensembling, overfitting, model complexity and evaluation metrics give indications and guidelines on how to select a proper model for performing well...... Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, use qualitative interviews with top...

  7. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
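
    The central operation, cross-correlating noise records between station pairs, can be sketched with a frequency-domain correlation; the sampling rate, synthetic noise, and 4-second delay below are invented for illustration only.

    import numpy as np

    fs = 20.0                                          # sampling rate (Hz)
    n = int(3600 * fs)                                 # one hour of ambient noise
    noise = np.random.randn(n)
    delay = int(4.0 * fs)                              # assumed 4 s propagation between stations
    sta_a = noise
    sta_b = np.roll(noise, delay) + 0.3 * np.random.randn(n)

    # Frequency-domain cross-correlation (the computationally intensive kernel).
    xcorr = np.fft.irfft(np.fft.rfft(sta_a) * np.conj(np.fft.rfft(sta_b)), n)
    lag = int(np.argmax(xcorr))
    if lag > n // 2:
        lag -= n                                       # wrap circular lags to negative values
    print("estimated delay between stations:", -lag / fs, "s")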

  8. Resolution Enhancement of Scanning Laser Acoustic Microscope Using Transverse Wave

    International Nuclear Information System (INIS)

    Ko, D. S.; Park, J. S.; Kim, Y. H.

    1997-01-01

    We studied the resolution enhancement of a novel scanning laser acoustic microscope (SLAM) using transverse waves. Mode conversion of the ultrasonic wave takes place at the liquid-solid interface, and some energy of the insonifying longitudinal waves in the water will convert to transverse wave energy within the solid specimen. The resolution of SLAM depends on the size of the detecting laser spot and the wavelength of the insonifying ultrasonic waves. Since the wavelength of the transverse wave is shorter than that of the longitudinal wave, we are able to achieve higher resolution by using transverse waves. In order to operate SLAM in the transverse wave mode, we made a wedge to change the incident angle. Our experimental results with a model 2140 SLAM and an aluminum specimen showed higher contrast of the SLAM image in the transverse wave mode than in the longitudinal wave mode.

  9. The Soil Moisture Dependence of TRMM Microwave Imager Rainfall Estimates

    Science.gov (United States)

    Seyyedi, H.; Anagnostou, E. N.

    2011-12-01

    This study presents an in-depth analysis of the dependence of overland rainfall estimates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) on the soil moisture conditions at the land surface. TMI retrievals are verified against rainfall fields derived from a high resolution rain-gauge network (MESONET) covering Oklahoma. Soil moisture (SOM) patterns are extracted based on recorded data from 2000-2007 with 30-minute temporal resolution. The area is divided into wet and dry regions based on normalized SOM (Nsom) values. Statistical comparison between the two groups is conducted based on recorded ground station measurements and the corresponding passive microwave retrievals from TMI overpasses at the respective MESONET station location and time. The zero order error statistics show that the Probability of Detection (POD) for the wet regions (higher Nsom values) is higher than for the dry regions. The False Alarm Ratio (FAR) and volumetric FAR are lower for the wet regions. The volumetric missed rain for the wet region is lower than for the dry region. Analysis of the MESONET-to-TMI ratio values shows that TMI tends to overestimate for surface rainfall intensities less than 12 mm/h; however, the magnitude of the overestimation over the wet regions is lower than over the dry regions.

  10. Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments

    Science.gov (United States)

    Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.

    2017-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar Projections. GMRT consists of four independently curated elevation components: (1) quality controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and the GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100 m resolution. The curated swath bathymetry data from GMRT is routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. Available for both

  11. Automatic cortical surface reconstruction of high-resolution T1 echo planar imaging data

    OpenAIRE

    Renvall, Ville; Witzel, Thomas; Wald, Lawrence L.; Polimeni, Jonathan R.

    2016-01-01

    Echo planar imaging (EPI) is the method of choice for the majority of functional magnetic resonance imaging (fMRI), yet EPI is prone to geometric distortions and thus misaligns with conventional anatomical reference data. The poor geometric correspondence between functional and anatomical data can lead to severe misplacements and corruption of detected activation patterns. However, recent advances in imaging technology have provided EPI data with increasing quality and resolution. Here we pre...

  12. A Standardized Reference Data Set for Vertebrate Taxon Name Resolution.

    Science.gov (United States)

    Zermoglio, Paula F; Guralnick, Robert P; Wieczorek, John R

    2016-01-01

    Taxonomic names associated with digitized biocollections labels have flooded into repositories such as GBIF, iDigBio and VertNet. The names on these labels are often misspelled, out of date, or present other problems, as they were often captured only once during accessioning of specimens, or have a history of label changes without clear provenance. Before records are reliably usable in research, it is critical that these issues be addressed. However, still missing is an assessment of the scope of the problem, the effort needed to solve it, and a way to improve effectiveness of tools developed to aid the process. We present a carefully human-vetted analysis of 1000 verbatim scientific names taken at random from those published via the data aggregator VertNet, providing the first rigorously reviewed, reference validation data set. In addition to characterizing formatting problems, human vetting focused on detecting misspelling, synonymy, and the incorrect use of Darwin Core. Our results reveal a sobering view of the challenge ahead, as less than 47% of name strings were found to be currently valid. More optimistically, nearly 97% of name combinations could be resolved to a currently valid name, suggesting that computer-aided approaches may provide feasible means to improve digitized content. Finally, we associated names back to biocollections records and fit logistic models to test potential drivers of issues. A set of candidate variables (geographic region, year collected, higher-level clade, and the institutional digitally accessible data volume) and their 2-way interactions all predict the probability of records having taxon name issues, based on model selection approaches. We strongly encourage further experiments to use this reference data set as a means to compare automated or computer-aided taxon name tools for their ability to resolve and improve the existing wealth of legacy data.

  13. A Standardized Reference Data Set for Vertebrate Taxon Name Resolution.

    Directory of Open Access Journals (Sweden)

    Paula F Zermoglio

    Taxonomic names associated with digitized biocollections labels have flooded into repositories such as GBIF, iDigBio and VertNet. The names on these labels are often misspelled, out of date, or present other problems, as they were often captured only once during accessioning of specimens, or have a history of label changes without clear provenance. Before records are reliably usable in research, it is critical that these issues be addressed. However, still missing is an assessment of the scope of the problem, the effort needed to solve it, and a way to improve effectiveness of tools developed to aid the process. We present a carefully human-vetted analysis of 1000 verbatim scientific names taken at random from those published via the data aggregator VertNet, providing the first rigorously reviewed, reference validation data set. In addition to characterizing formatting problems, human vetting focused on detecting misspelling, synonymy, and the incorrect use of Darwin Core. Our results reveal a sobering view of the challenge ahead, as less than 47% of name strings were found to be currently valid. More optimistically, nearly 97% of name combinations could be resolved to a currently valid name, suggesting that computer-aided approaches may provide feasible means to improve digitized content. Finally, we associated names back to biocollections records and fit logistic models to test potential drivers of issues. A set of candidate variables (geographic region, year collected, higher-level clade, and the institutional digitally accessible data volume) and their 2-way interactions all predict the probability of records having taxon name issues, based on model selection approaches. We strongly encourage further experiments to use this reference data set as a means to compare automated or computer-aided taxon name tools for their ability to resolve and improve the existing wealth of legacy data.

  14. Selected Private Higher Educational Institutions in Metro Manila: A DEA Efficiency Measurement

    Science.gov (United States)

    de Guzman, Maria Corazon Gwendolyn N.; Cabana, Emilyn

    2009-01-01

    This paper measures the technical efficiency of 16 selected colleges and universities in Metro Manila, Philippines, using academic data for SY 2001-2005. Using data envelopment analysis (DEA), schools posted an average index score of 0.807 and would need an additional 19.3% efficiency gain to be efficient. Overall, there are top four efficient…
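
    An input-oriented CCR efficiency score of the kind DEA produces can be computed with a small linear program per school; the input/output matrix below (e.g., faculty and budget as inputs, graduates as output) is entirely made up and only illustrates the envelopment formulation, not the study's data or model specification.

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[50., 20.], [60., 25.], [45., 30.], [70., 15.]])   # inputs per school (made up)
    Y = np.array([[500.], [540.], [520.], [430.]])                   # outputs per school (made up)

    def ccr_efficiency(k):
        # Input-oriented CCR envelopment model: minimise theta subject to
        #   sum_j lambda_j * x_j <= theta * x_k   and   sum_j lambda_j * y_j >= y_k.
        n, m = X.shape
        s = Y.shape[1]
        c = np.zeros(1 + n)
        c[0] = 1.0                                       # objective: minimise theta
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[k]                              # input constraints
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                              # output constraints, flipped to <=
        b_ub[m:] = -Y[k]
        bounds = [(None, None)] + [(0, None)] * n        # theta free, lambdas non-negative
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[0]

    scores = [round(ccr_efficiency(k), 3) for k in range(len(X))]
    print("CCR efficiency scores:", scores)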

  15. Neutron powder diffraction at a pulsed neutron source: a study of resolution effects

    International Nuclear Information System (INIS)

    Faber, J. Jr.; Hitterman, R.L.

    1985-11-01

    The General Purpose Powder Diffractometer (GPPD), a high resolution (Δd/d = 0.002) time-of-flight instrument, exhibits a resolution function that is almost independent of d-spacing. Some of the special properties of time-of-flight scattering data obtained at a pulsed neutron source will be discussed. A method is described that transforms wavelength dependent data, obtained at a pulsed neutron source, so that standard structural least-squares analyses can be applied. Several criteria are given to show when these techniques are useful in time-of-flight data analysis. 14 refs., 6 figs., 1 tab

  16. ASTER-Derived 30-Meter-Resolution Digital Elevation Models of Afghanistan

    Science.gov (United States)

    Chirico, Peter G.; Warner, Michael B.

    2007-01-01

    INTRODUCTION The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is an imaging instrument aboard the Terra satellite, launched on December 19, 1999, as part of the National Aeronautics and Space Administration's (NASA) Earth Observing System (EOS). The ASTER sensor consists of three subsystems: the visible and near infrared (VNIR), the shortwave infrared (SWIR), and the thermal infrared (TIR), each with a different spatial resolution (VNIR, 15 meters; SWIR, 30 meters; TIR, 90 meters). The VNIR system has the capability to generate along-track stereo images that can be used to create digital elevation models (DEMs) at 30-meter resolution. Currently, the only available DEM dataset for Afghanistan is the 90-meter-resolution Shuttle Radar Topography Mission (SRTM) data. This dataset is appropriate for macroscale DEM analysis and mapping. However, ASTER provides a low cost opportunity to generate higher resolution data. For this publication, study areas were identified around populated areas and areas where higher resolution elevation data were desired to assist in natural resource assessments. The higher resolution fidelity of these DEMs can also be used for other terrain analysis including landform classification and geologic structure analysis. For this publication, ASTER scenes were processed and mosaicked to generate 36 DEMs, which were created and extracted using PCI Geomatics' OrthoEngine 3D Stereo software. The ASTER images were geographically registered to Landsat data with at least 15 accurate and well distributed ground control points with a root mean square error (RMSE) of less than one pixel (15 meters). An elevation value was then assigned to each ground control point by extracting the elevation from the 90-meter SRTM data. The 36 derived DEMs demonstrate that the software correlated accurately on nearly flat surfaces and smooth slopes. Larger errors occur in cloudy and snow-covered areas, lakes, areas with steep slopes, and

  17. Detection of proximal caries using digital radiographic systems with different resolutions.

    Science.gov (United States)

    Nikneshan, Sima; Abbas, Fatemeh Mashhadi; Sabbagh, Sedigheh

    2015-01-01

    Dental radiography is an important tool for detection of caries, and digital radiography is the latest advancement in this regard. Spatial resolution is a characteristic of digital receptors used for describing the quality of images. This study aimed to compare the diagnostic accuracy of two digital radiographic systems with three different resolutions for detection of noncavitated proximal caries. Diagnostic accuracy. Seventy premolar teeth were mounted in 14 gypsum blocks. Digora Optime and RVG Access were used for obtaining digital radiographs. Six observers evaluated the proximal surfaces in radiographs for each resolution in order to determine the depth of caries based on a 4-point scale. The teeth were then histologically sectioned, and the results of histologic analysis were considered as the gold standard. Data were entered using SPSS version 18 software and the Kruskal-Wallis test was used for data analysis. No statistically significant difference was found between the systems in detection of proximal caries (P > 0.05). The RVG Access system had the highest specificity (87.7%) and Digora Optime at high resolution had the lowest specificity (84.2%). Furthermore, Digora Optime had higher sensitivity for detection of caries exceeding the outer half of enamel. Judgment of oral radiologists for detection of the depth of caries had higher reliability than that of restorative dentistry specialists. The three resolutions of Digora Optime and RVG Access had similar accuracy in detection of noncavitated proximal caries.

  18. Generation of High-Resolution Geo-referenced Photo-Mosaics From Navigation Data

    Science.gov (United States)

    Delaunoy, O.; Elibol, A.; Garcia, R.; Escartin, J.; Fornari, D.; Humphris, S.

    2006-12-01

    Optical images of the ocean floor are a rich source of data to understand biological and geological processes. However, due to the attenuation of light in sea water, the area covered by optical systems is very limited, and a large number of images are then needed in order to cover an area of interest, as individually they do not provide a global view of the surveyed area. Therefore, generating a composite view (or photo-mosaic) from multiple overlapping images is usually the most practical and flexible solution to visually cover a wide area, allowing the analysis of the site in one single representation of the ocean floor. In most of the camera surveys which are carried out nowadays, some sort of positioning information is available (e.g., USBL, DVL, INS, gyros, etc). If it is a towed camera, an estimate of the tether length and the mother ship's GPS reading can also serve as navigation data. In any case, a photo-mosaic can be built just by taking into account the position and orientation of the camera. On the other hand, most of the regions of interest for the scientific community are quite large (>1 km²) and, since better resolution is always required, the final photo-mosaic can be very large (>1,000,000 × 1,000,000 pixels) and cannot be handled by commonly available software. For this reason, we have developed a software package able to load a navigation file and the sequence of acquired images to automatically build a geo-referenced mosaic. This navigated mosaic provides a global view of the interest site, at the maximum available resolution. The developed package includes a viewer, allowing the user to load, view and annotate these geo-referenced photo-mosaics on a personal computer. A software library has been developed to allow the viewer to manage such very big images. Therefore, the size of the resulting mosaic is now only limited by the size of the hard drive. Work is being carried out to apply image processing techniques to the navigated

  19. Age-dependent effects on social interaction of NMDA GluN2A receptor subtype-selective antagonism.

    Science.gov (United States)

    Green, Torrian L; Burket, Jessica A; Deutsch, Stephen I

    2016-07-01

    NMDA receptor-mediated neurotransmission is implicated in the regulation of normal sociability in mice. The heterotetrameric NMDA receptor is composed of two obligatory GluN1 and either two "modulatory" GluN2A or GluN2B receptor subunits. GluN2A- and GluN2B-containing receptors differ in terms of their developmental expression, distribution between synaptic and extrasynaptic locations, and channel kinetic properties, among other differences. Because age-dependent differences in the disruptive effects of GluN2A and GluN2B subtype-selective antagonists on sociability and locomotor activity have been reported in rats, the current investigation explored age-dependent effects of PEAQX, a GluN2A subtype-selective antagonist, on sociability, stereotypic behaviors emerging during social interaction, and spatial working memory in 4- and 8-week-old male Swiss Webster mice. The data implicate an age-dependent contribution of GluN2A-containing NMDA receptors to the regulation of normal social interaction in mice. Specifically, at a dose of PEAQX devoid of any effect on locomotor activity and mouse rotarod performance, the social interaction of 8-week-old mice was disrupted without any effect on the social salience of a stimulus mouse. Moreover, PEAQX attenuated stereotypic behavior emerging during social interaction in 4- and 8-week-old mice. However, PEAQX had no effect on spontaneous alternations, a measure of spatial working memory, suggesting that neural circuits mediating sociability and spatial working memory may be discrete and dissociable from each other. Also, the data suggest that the regulation of stereotypic behaviors and sociability may occur independently of each other. Because expression of GluN2A-containing NMDA receptors occurs at a later developmental stage, they may be more involved than GluN2B receptors in mediating the pathogenesis of ASDs in patients with histories of "regression" after a period of normal development. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Data Matching Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection

    CERN Document Server

    Christen, Peter

    2012-01-01

    Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially on how to improve the accuracy of da

  1. Clickstream Data Yields High-Resolution Maps of Science

    Science.gov (United States)

    Bollen, Johan; Van de Sompel, Herbert; Rodriguez, Marko A.; Balakireva, Lyudmila

    2009-01-01

    Background Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science. Methodology Over the course of 2007 and 2008, we collected nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia. The resulting reference data set covers a significant part of world-wide use of scholarly web portals in 2006, and provides a balanced coverage of the humanities, social sciences, and natural sciences. A journal clickstream model, i.e. a first-order Markov chain, was extracted from the sequences of user interactions in the logs. The clickstream model was validated by comparing it to the Getty Research Institute's Architecture and Art Thesaurus. The resulting model was visualized as a journal network that outlines the relationships between various scientific domains and clarifies the connection of the social sciences and humanities to the natural sciences. Conclusions Maps of science resulting from large-scale clickstream data provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data. PMID:19277205
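
    The journal clickstream model described above is a first-order Markov chain over journals. The Python sketch below shows how such a transition model can be estimated from logged request sequences; journal names and sequences are invented placeholders, and the real pipeline involves far more preprocessing.

        from collections import defaultdict

        # Sketch: estimate a first-order Markov transition model from clickstreams.
        # Each clickstream is the sequence of journals requested in one session.
        clickstreams = [
            ["J.Neurosci", "Neuron", "Nat.Neurosci"],
            ["Neuron", "Nat.Neurosci", "Science"],
            ["J.Neurosci", "Neuron", "Science"],
        ]

        counts = defaultdict(lambda: defaultdict(int))
        for stream in clickstreams:
            for src, dst in zip(stream, stream[1:]):
                counts[src][dst] += 1

        # Normalise each row so a journal's outgoing transition probabilities sum to 1.
        transition = {
            src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
            for src, dsts in counts.items()
        }
        print(transition["Neuron"])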

  2. Clickstream data yields high-resolution maps of science.

    Directory of Open Access Journals (Sweden)

    Johan Bollen

    Full Text Available BACKGROUND: Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science. METHODOLOGY: Over the course of 2007 and 2008, we collected nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia. The resulting reference data set covers a significant part of world-wide use of scholarly web portals in 2006, and provides a balanced coverage of the humanities, social sciences, and natural sciences. A journal clickstream model, i.e. a first-order Markov chain, was extracted from the sequences of user interactions in the logs. The clickstream model was validated by comparing it to the Getty Research Institute's Architecture and Art Thesaurus. The resulting model was visualized as a journal network that outlines the relationships between various scientific domains and clarifies the connection of the social sciences and humanities to the natural sciences. CONCLUSIONS: Maps of science resulting from large-scale clickstream data provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data.

  3. Sensitivity of point scale surface runoff predictions to rainfall resolution

    Directory of Open Access Journals (Sweden)

    A. J. Hearman

    2007-01-01

    Full Text Available This paper investigates the effects of using non-linear, high resolution rainfall, compared to time-averaged rainfall, on the triggering of hydrologic thresholds and therefore on model predictions of infiltration excess and saturation excess runoff at the point scale. The bounded random cascade model, parameterized to three locations in Western Australia, was used to scale rainfall intensities at various time resolutions ranging from 1.875 min to 2 h. A one-dimensional, conceptual rainfall partitioning model was used that instantaneously partitioned water into infiltration excess, infiltration, storage, deep drainage, saturation excess and surface runoff, where the fluxes into and out of the soil store were controlled by thresholds. The results of the numerical modelling were scaled by relating soil infiltration properties to soil draining properties and, in turn, relating these to average storm intensities. For all soil types, we related maximum infiltration capacities to average storm intensities (k*) and were able to show where model predictions of infiltration excess were most sensitive to rainfall resolution (ln k* = 0.4) and where using time-averaged rainfall data can lead to an under-prediction of infiltration excess and an over-prediction of the amount of water entering the soil (ln k* > 2) for all three rainfall locations tested. For soils susceptible to both infiltration excess and saturation excess, total runoff sensitivity was scaled by relating drainage coefficients to average storm intensities (g*), and parameter ranges where predicted runoff was dominated by infiltration excess or saturation excess, depending on the resolution of rainfall data, were determined (ln g* < 2). Infiltration excess predicted from high resolution rainfall was short and intense, whereas saturation excess produced from low resolution rainfall was more constant and less intense. This has important implications for the accuracy of current hydrological models that use time
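
    The partitioning model described above routes water at each time step according to two thresholds: an infiltration capacity that triggers infiltration-excess runoff and a storage capacity that triggers saturation-excess runoff. The Python sketch below illustrates that threshold logic only; parameter values and the drainage treatment are invented for the example and differ from the paper's model.

        # Sketch: one-bucket rainfall partitioning controlled by two thresholds.
        def partition_step(rain_mm, store_mm, k_max_mm, store_cap_mm, drain_mm):
            infiltration_excess = max(0.0, rain_mm - k_max_mm)     # intensity threshold
            infiltration = rain_mm - infiltration_excess
            store_mm += infiltration
            saturation_excess = max(0.0, store_mm - store_cap_mm)  # storage threshold
            store_mm -= saturation_excess
            store_mm = max(0.0, store_mm - drain_mm)               # deep drainage
            return infiltration_excess, saturation_excess, store_mm

        store = 0.0
        for rain in [2.0, 15.0, 40.0, 5.0]:   # mm per time step
            ie, se, store = partition_step(rain, store, k_max_mm=20.0,
                                           store_cap_mm=60.0, drain_mm=1.0)
            print(ie, se, store)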

  4. Facilitating Innovations in Higher Education in Transition Economies

    Science.gov (United States)

    Saginova, Olga; Belyansky, Vladimir

    2008-01-01

    Purpose: The purpose of this paper is to analyse innovations in education from the point of view of product content and markets selected. Emerging market economies face a number of problems many of which are closely linked to and dependent upon the effectiveness of higher professional education. External environment changes, such as the formation…

  5. Multi-component transport in polymers: hydrocarbon / hydrogen separation by reverse selectivity membrane; Transport multi-composants dans les polymeres: separation hydrocarbures / hydrogene par membrane a selectivite inverse

    Energy Technology Data Exchange (ETDEWEB)

    Mauviel, G.

    2003-12-15

    Hydrogen separation by reverse selectivity membranes is investigated. The first goal is to develop materials showing an increased selectivity. Silicone membranes loaded with inorganic fillers were prepared, but the expected enhancement was not observed. The second goal is to model multi-component transport through rubbers. Indeed, the permeability model cannot correctly predict permeation when a vapour is present. Many phenomena therefore have to be considered: diffusional inter-dependency, sorption synergy, membrane swelling and drag effect. The dependence of diffusivities on the local composition is modelled according to free-volume theory. Solving the model makes it possible to predict the permeation flow rates of mixed species from their pure-component sorption and diffusion data. For the systems under consideration, diffusional inter-dependency is shown to be preponderant. Besides, the importance of sorption synergy is pointed out, whereas it is most often neglected. (author)

  6. Automated Verification of Spatial Resolution in Remotely Sensed Imagery

    Science.gov (United States)

    Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald

    2011-01-01

    Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data
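
    One of the estimators mentioned, the relative edge response (RER), is commonly computed from a normalized edge profile as the difference in response half a pixel on either side of the edge. The Python sketch below assumes an edge profile that has already been extracted and normalized, which is only a small part of what the SRVT automates, and uses a synthetic profile rather than real imagery.

        import numpy as np

        # Sketch: relative edge response from a normalized edge-spread function.
        # positions are in pixels relative to the edge; values rise from ~0 to ~1.
        def relative_edge_response(positions, normalized_response):
            er_plus = np.interp(0.5, positions, normalized_response)
            er_minus = np.interp(-0.5, positions, normalized_response)
            return er_plus - er_minus

        x = np.linspace(-3.0, 3.0, 61)
        esf = 1.0 / (1.0 + np.exp(-3.0 * x))   # synthetic edge profile
        print(relative_edge_response(x, esf))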

  7. Constraints on cosmological birefringence energy dependence from CMB polarization data

    International Nuclear Information System (INIS)

    Gubitosi, G.; Paci, F.

    2013-01-01

    We study the possibility of constraining the energy dependence of cosmological birefringence by using CMB polarization data. We consider four possible behaviors, characteristic of different theoretical scenarios: energy-independent birefringence motivated by Chern-Simons interactions of the electromagnetic field, linear energy dependence motivated by a 'Weyl' interaction of the electromagnetic field, quadratic energy dependence, motivated by quantum gravity modifications of low-energy electrodynamics, and inverse quadratic dependence, motivated by Faraday rotation generated by primordial magnetic fields. We constrain the parameters associated to each kind of dependence and use our results to give constraints on the models mentioned. We forecast the sensitivity that Planck data will be able to achieve in this respect
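
    The four behaviors considered can all be viewed as power laws in photon energy. As an illustration of what "energy dependence" means here (generic notation, not necessarily the authors' parameterization), the rotation angle can be written as

        \alpha(E) = \bar{\alpha}\,\left(\frac{E}{E_0}\right)^{n},
        \qquad
        n = \begin{cases}
          0  & \text{energy independent (Chern-Simons coupling)}\\
          1  & \text{linear (Weyl interaction)}\\
          2  & \text{quadratic (quantum-gravity corrections)}\\
          -2 & \text{inverse quadratic (Faraday rotation by primordial fields)}
        \end{cases}

    where E_0 is an arbitrary reference energy and \bar{\alpha} the rotation angle at that energy.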

  8. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    Science.gov (United States)

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of precursor ions, has been one of the most common approaches used during nontarget screening. However, data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and the limitations of both algorithms are further discussed.
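
    The central step of a library search of this kind is scoring a deconvoluted spectrum against every library spectrum and ranking the candidates. The Python sketch below illustrates that step with a plain cosine similarity on a shared m/z grid; the ULSA's actual scoring, m/z-tolerance handling and weighting are not reproduced here, and the spectra and names are invented.

        import numpy as np

        # Sketch: rank library spectra against a query spectrum by cosine similarity.
        def cosine_score(query, candidate):
            den = float(np.linalg.norm(query) * np.linalg.norm(candidate))
            return float(np.dot(query, candidate)) / den if den else 0.0

        def top_matches(query, library, k=3):
            scores = [(name, cosine_score(query, spec)) for name, spec in library.items()]
            return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

        # Binned intensities on a shared m/z grid (toy values).
        query = np.array([0.0, 5.0, 80.0, 10.0, 0.0, 40.0])
        library = {
            "compound_A": np.array([0.0, 4.0, 75.0, 12.0, 0.0, 35.0]),
            "compound_B": np.array([60.0, 0.0, 5.0, 0.0, 30.0, 0.0]),
        }
        print(top_matches(query, library))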

  9. Productivity effects of higher education human capital in selected countries of Sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Koye Gerry Bokana

    2017-06-01

    Full Text Available This study aimed to analyse the productivity effects of higher education enrolment (HEE), higher education output (HEO) and the associated productivity gap (GP) on selected countries in Sub-Saharan Africa (SSA) over the period between 1981 and 2014. It was hypothesized in the study that HEE and HEO had a statistically significant positive impact on productivity in the selected Sub-Saharan African countries over the stated period. Fixed-effect Least Squares Dummy Variable (LSDV) and a robust version of System Generalized Method of Moments (SYSGMM) were adopted as model estimating techniques. Results from the LSDV model indicated that HEE had no statistically significant positive impact on productivity growth in the twenty-one SSA countries. This non-significance was corrected in the dynamic model, but with negative effects on the growth rate of total factor productivity (TFP). The study further compared the worldwide technological frontier with those of the SSA countries under investigation and discovered that countries like Gabon, Mauritius and Swaziland ranked high, while Burundi needs to improve on its productivity determinants. The major conclusion of this study is therefore that higher education human capital should be supported with strong policy implementation, as this can have a positive impact on productivity growth.

  10. MERGING AIRBORNE LIDAR DATA AND SATELLITE SAR DATA FOR BUILDING CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    T. Yamamoto

    2015-05-01

    Full Text Available Frequent map revision is required in GIS applications such as disaster prevention and urban planning. In general, airborne photogrammetry and LiDAR measurements are applied to geometrical data acquisition for automated map generation and revision. However, these approaches capture only geometrical attributes, so attribute data acquisition and classification ultimately depend on manual editing work, including ground surveys. On the other hand, although geometrical data extraction is difficult, SAR data have the potential to automate attribute data acquisition and classification. SAR data represent microwave reflections from various surfaces of the ground and buildings, and much research has addressed monitoring of disasters, vegetation, and urban areas. Moreover, new sensors such as ALOS-2 PALSAR-2 now provide an opportunity to acquire higher resolution data over urban areas. Therefore, in this study, we focus on the integration of airborne LiDAR data and satellite SAR data for building extraction and classification.

  11. Dependent data in social sciences research forms, issues, and methods of analysis

    CERN Document Server

    Eye, Alexander; Wiedermann, Wolfgang

    2015-01-01

    This volume presents contributions on handling data in which the postulate of independence in the data matrix is violated. When this postulate is violated and when the methods assuming independence are still applied, the estimated parameters are likely to be biased, and statistical decisions are very likely to be incorrect. Problems associated with dependence in data have been known for a long time, and led to the development of tailored methods for the analysis of dependent data in various areas of statistical analysis. These methods include, for example, methods for the analysis of longitudinal data, corrections for dependency, and corrections for degrees of freedom. This volume contains the following five sections: growth curve modeling, directional dependence, dyadic data modeling, item response modeling (IRT), and other methods for the analysis of dependent data (e.g., approaches for modeling cross-section dependence, multidimensional scaling techniques, and mixed models). Researchers and graduate stud...

  12. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between the domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.

  13. Multistate and phase change selection in constitutional multivalent systems.

    Science.gov (United States)

    Barboiu, Mihail

    2012-01-01

    Molecular architectures and materials can be constitutionally self-sorted in the presence of different biomolecular targets, external physical stimuli or chemical effectors, thus responding to an external selection pressure. The high selectivity and specificity of different bioreceptors or self-correlated internal interactions may be used to describe the complex constitutional behaviors through multistate component selection from a dynamic library. The self-selection may result in the dynamic amplification of self-optimized architectures during the phase change process. The sol-gel resolution of dynamic molecular/supramolecular libraries leads to higher self-organized constitutional hybrid materials, in which organic (supramolecular)/inorganic domains are reversibly connected.

  14. National Hydrography Dataset Plus High Resolution (NHDPlus HR) - USGS National Map Downloadable Data Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The High Resolution National Hydrography Dataset Plus (NHDPlus HR) is an integrated set of geospatial data layers, including the best available National Hydrography...

  15. Selective Redundancy Removal: A Framework for Data Hiding

    Directory of Open Access Journals (Sweden)

    Ugo Fiore

    2010-02-01

    Full Text Available Data hiding techniques have so far concentrated on adding or modifying irrelevant information in order to hide a message. However, files in widespread use, such as HTML documents, usually exhibit high redundancy levels, caused by code-generation programs. Such redundancy may be removed by means of optimization software. Redundancy removal, if applied selectively, enables information hiding. This work introduces Selective Redundancy Removal (SRR) as a framework for hiding data. An example application of the framework is given in terms of hiding information in HTML documents. Non-uniformity across documents may raise alarms. Nevertheless, selective application of optimization techniques might be due to the legitimate use of optimization software that does not support all the optimization methods, or is configured not to use all of them.
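
    The core idea is that each removable redundancy becomes a one-bit channel: removing it encodes a 1, keeping it encodes a 0. The Python sketch below illustrates this with one particular redundancy, quotes around simple HTML attribute values; this is only an illustration of the SRR principle, not the paper's specific scheme, and it assumes the cover document initially quotes all such values.

        import re

        # Sketch: hide bits by selectively removing quotes around simple
        # HTML attribute values (remove = 1, keep = 0). Illustrative only.
        SITE = re.compile(r'(\w+)=("?)([A-Za-z0-9_-]+)\2')

        def embed(html, bits):
            bit_iter = iter(bits)
            def repl(m):
                if next(bit_iter, 0):
                    return f"{m.group(1)}={m.group(3)}"   # drop the quotes
                return m.group(0)                          # keep the site as-is
            return SITE.sub(repl, html)

        def extract(html):
            return [0 if m.group(2) else 1 for m in SITE.finditer(html)]

        cover = '<div class="note" id="x1"><a href="page-1">go</a></div>'
        stego = embed(cover, [1, 0, 1])
        print(stego)            # <div class=note id="x1"><a href=page-1>go</a></div>
        print(extract(stego))   # [1, 0, 1]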

  16. Dynamic Service Selection in Workflows Using Performance Data

    Directory of Open Access Journals (Sweden)

    David W. Walker

    2007-01-01

    Full Text Available An approach to dynamic workflow management and optimisation using near-real-time performance data is presented. Strategies are discussed for choosing an optimal service (based on user-specified criteria) from several semantically equivalent Web services. Such an approach may involve finding "similar" services by first pruning the set of discovered services based on service metadata, and subsequently selecting an optimal service based on performance data. The current implementation of the prototype workflow framework is described, and demonstrated with a simple workflow. Performance results are presented that show the performance benefits of dynamic service selection. A statistical analysis based on the first order statistic is used to investigate the likely improvement in service response time arising from dynamic service selection.
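
    A minimal sketch of the selection step itself: after pruning to semantically equivalent candidates, pick the service whose recent performance best meets a user-specified criterion, here the lowest mean response time over a sliding window. Service names and timings are invented for the example.

        from statistics import mean

        # Sketch: choose among semantically equivalent services using recent
        # performance data; criterion = lowest mean response time (seconds).
        recent_response_times = {
            "svc-blast-eu":  [0.82, 0.91, 0.78, 0.85],
            "svc-blast-us":  [1.40, 1.22, 1.35, 1.31],
            "svc-blast-lab": [0.95, 0.70, 2.10, 0.88],
        }

        def select_service(perf, criterion=mean):
            return min(perf, key=lambda name: criterion(perf[name]))

        print(select_service(recent_response_times))   # svc-blast-eu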

  17. CROSS CULTURAL CONFLICT RESOLUTION STYLES: DATA REVISITED

    Directory of Open Access Journals (Sweden)

    Nuray ALAGÖZLÜ

    2017-07-01

    Full Text Available The way conflicts are solved is thought to be culturally learned (Hammer, 2005); therefore, it is reflected through language use. Conflicts, as inevitable parts of communication, naturally mirror cultural differences. Intercultural conflict styles have been studied so far by various researchers. How conflicts are initiated, maintained, escalated or terminated is culture bound (Leung, 2002), and all the related stages vary from one culture to another. In the related literature, there have been attempts to describe different conflict-handling classifications. Using Hammer's (2005) categorization, which was found to be more refined and summative, the conflict resolution styles of Turkish and American college students were explored using Discourse Completion Tests (DCTs) with eight conflict situations in which the respondents were required to write verbal solutions to overcome the conflicts described in the test. Those utterances were categorized according to a Directness/Indirectness Scale modified from Hammer's (2005) "International Conflict Style Inventory" (ICSI), which classifies intercultural conflict resolution styles by high/low level of directness and high/low level of emotional expressiveness. It is believed that the study provides insight into intercultural communication, as there are culturally generalizable (etic) and learned patterns of conflict resolution styles pertinent to different cultures (Hammer, 2009, p. 223; Ting-Toomey, 1994).

  18. Data center thermal management

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei

    2016-02-09

    Historical high-spatial-resolution temperature data and dynamic temperature sensor measurement data may be used to predict temperature. A first formulation may be derived based on the historical high-spatial-resolution temperature data for determining a temperature at any point in 3-dimensional space. The dynamic temperature sensor measurement data may be calibrated based on the historical high-spatial-resolution temperature data at a corresponding historical time. Sensor temperature data at a plurality of sensor locations may be predicted for a future time based on the calibrated dynamic temperature sensor measurement data. A three-dimensional temperature spatial distribution associated with the future time may be generated based on the forecasted sensor temperature data and the first formulation. The three-dimensional temperature spatial distribution associated with the future time may be projected to a two-dimensional temperature distribution, and temperature in the future time for a selected space location may be forecasted dynamically based on said two-dimensional temperature distribution.
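
    Two of the steps described, calibrating a dynamic sensor against co-located historical high-resolution readings and interpolating a field from the calibrated sensor points, are illustrated in the Python sketch below. The linear calibration, the inverse-distance interpolation, and all numbers are simplifying assumptions for illustration, not the patented method.

        import numpy as np

        # Sketch step 1: calibrate dynamic sensor readings (four sensors at one
        # time) against historical high-resolution reference values at the same
        # points, using a least-squares linear fit.
        sensor_raw = np.array([21.1, 22.4, 23.0, 24.2])   # deg C, sensor readings
        historical = np.array([21.8, 23.1, 23.8, 25.0])   # deg C, reference values
        gain, offset = np.polyfit(sensor_raw, historical, 1)
        calibrated = gain * sensor_raw + offset

        # Sketch step 2: temperature at a 3-D point by inverse-distance weighting
        # of the calibrated sensor values.
        def idw(point, sensor_xyz, sensor_temps, power=2.0):
            d = np.linalg.norm(sensor_xyz - point, axis=1)
            if np.any(d < 1e-9):
                return float(sensor_temps[np.argmin(d)])
            w = 1.0 / d**power
            return float(np.sum(w * sensor_temps) / np.sum(w))

        xyz = np.array([[0.0, 0.0, 1.0], [4.0, 0.0, 1.0],
                        [0.0, 6.0, 2.0], [4.0, 6.0, 2.0]])
        print(idw(np.array([2.0, 3.0, 1.5]), xyz, calibrated))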

  19. Next-to-next-to-leading order QCD analysis of combined data for xF3 structure function and higher-twist contribution

    International Nuclear Information System (INIS)

    Sidorov, A.V.

    1996-01-01

    The simultaneous QCD analysis of the xF3 structure function measured in deep-inelastic scattering by several collaborations is carried out up to the 3-loop order of QCD. The x dependence of the higher-twist contribution is evaluated and turns out to be in qualitative agreement with the results of the 'old' CCFR data analysis and with renormalon-approach predictions. The Gross-Llewellyn Smith sum rule and its higher-twist corrections are evaluated. 32 refs., 1 fig., 1 tab.
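
    In analyses of this kind the higher-twist term is commonly included as a power-suppressed correction added to the leading-twist perturbative prediction; a generic form of such a parameterization (illustrative notation, not necessarily the author's) is

        xF_3(x, Q^2) = xF_3^{\mathrm{pQCD}}(x, Q^2) + \frac{h(x)}{Q^2},

    where xF_3^{pQCD} is the perturbative (here, 3-loop) prediction and h(x) is the x-dependent higher-twist contribution extracted from the fit.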

  20. Early detection of tuberculosis outbreaks among the San Francisco homeless: trade-offs between spatial resolution and temporal scale.

    Directory of Open Access Journals (Sweden)

    Brandon W Higgs

    Full Text Available BACKGROUND: San Francisco has the highest rate of tuberculosis (TB) in the U.S., with recurrent outbreaks among the homeless and marginally housed. It has been shown for syndromic data that when exact geographic coordinates of individual patients are used as the spatial base for outbreak detection, higher detection rates and accuracy are achieved compared to when data are aggregated into administrative regions such as zip codes and census tracts. We examine the effect of varying the spatial resolution in the TB data within the San Francisco homeless population on detection sensitivity, timeliness, and the amount of historical data needed to achieve better performance measures. METHODS AND FINDINGS: We apply a variation of the space-time permutation scan statistic to the TB data in which a patient's location is represented either by its exact coordinates or by the centroid of its census tract. We show that the detection sensitivity and timeliness of the method generally improve when exact locations are used to identify real TB outbreaks. When outbreaks are simulated, the detection timeliness is consistently improved when exact coordinates are used, while the detection sensitivity varies depending on the size of the spatial scanning window and the number of tracts in which cases are simulated. Finally, we show that when exact locations are used, a smaller amount of historical data is required for training the model. CONCLUSION: Systematic characterization of the spatio-temporal distribution of TB cases can widely benefit real-time surveillance and guide public health investigations of TB outbreaks as to what level of spatial resolution results in improved detection sensitivity and timeliness. Trading higher spatial resolution for better performance is ultimately a tradeoff between maintaining patient confidentiality and improving public health when sharing data. Understanding such tradeoffs is critical to managing the complex interplay between public