WorldWideScience

Sample records for point-to-point spatial resolution

  1. Residual analysis for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Turner, R.; Møller, Jesper

    We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Our … or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases…

  2. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Directory of Open Access Journals (Sweden)

    Taisuke Murata

    2016-10-01

    Full Text Available Abstract Background The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on the PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with a different detector design. Methods We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D) positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROI) for each reconstruction were quantified. Results The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50% lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV in all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces dependence of the spatial resolution on the …
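
    A minimal sketch of the FWHM step described above: the NEMA-style approach determines the full width at half maximum of a 1-D profile through the point source by linear interpolation between the samples that bracket the half-maximum level. The profile here is synthetic, and `fwhm_1d` and the 1 mm sampling are illustrative assumptions, not the scanners' vendor software.

```python
import numpy as np

def fwhm_1d(profile, pixel_mm):
    """FWHM of a 1-D point-source profile, NEMA-style: linear interpolation
    between the samples bracketing the half-maximum on each side of the peak."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    x_left = left - (profile[left] - half) / (profile[left] - profile[left - 1]) if left > 0 else float(left)
    x_right = right + (profile[right] - half) / (profile[right] - profile[right + 1]) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * pixel_mm

# toy check: Gaussian profile on a 1 mm grid, true FWHM = 2.355 * sigma ~ 4.71 mm
x = np.arange(-20, 21)
print(fwhm_1d(np.exp(-x**2 / (2 * 2.0**2)), pixel_mm=1.0))
```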

  3. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    This paper describes methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where one simulates backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and thus can be used as a diagnostic for assessing the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.

  4. Modeling fixation locations using spatial point processes.

    Science.gov (United States)

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
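
    As a sketch of the core modelling idea above (fixations as an inhomogeneous Poisson process whose intensity is driven by image properties), the snippet below simulates such a process on the unit square with the standard thinning construction. The Gaussian "saliency" intensity and the rate ceiling `lam_max` are made-up illustrative values, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inhomogeneous_poisson(intensity, lam_max, rng):
    """Inhomogeneous Poisson process on the unit square by thinning:
    simulate a homogeneous Poisson process with rate lam_max, then keep
    each candidate with probability intensity(x, y) / lam_max."""
    cand = rng.random((rng.poisson(lam_max), 2))
    keep = rng.random(len(cand)) < intensity(cand[:, 0], cand[:, 1]) / lam_max
    return cand[keep]

# toy "saliency" intensity: fixations concentrate near the image centre
intensity = lambda x, y: 200.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
fixations = simulate_inhomogeneous_poisson(intensity, lam_max=200.0, rng=rng)
print(len(fixations), "simulated fixations")
```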

  5. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    2010-01-01

    In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
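
    A rough sketch of the Cox-process case described above: conditional on the driving intensity Λ, retaining each point independently with probability β/Λ(u) leaves a homogeneous Poisson process with intensity β. Here Λ is known because the clustered (Thomas-type) pattern is simulated; in the diagnostic use the authors describe, Λ would come from the fitted model's conditional intensity. All parameter values are illustrative assumptions, and boundary effects are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate a Thomas-type cluster (Cox) process on the unit square
kappa, mu, sigma = 20.0, 10.0, 0.03           # parent rate, mean offspring, cluster spread (illustrative)
parents = rng.random((rng.poisson(kappa), 2))
pts = np.concatenate([p + sigma * rng.standard_normal((rng.poisson(mu), 2)) for p in parents])
pts = pts[(pts >= 0).all(1) & (pts <= 1).all(1)]

# driving intensity of the Cox process -- known here because we simulated it
def driving_intensity(xy):
    d2 = ((xy[:, None, :] - parents[None, :, :]) ** 2).sum(-1)
    return mu * np.exp(-d2 / (2.0 * sigma**2)).sum(1) / (2.0 * np.pi * sigma**2)

beta = 5.0                                    # target Poisson intensity; should satisfy beta <= Lambda(u)
lam = driving_intensity(pts)
retain = rng.random(len(pts)) < np.minimum(1.0, beta / lam)
thinned = pts[retain]
print(f"{len(pts)} clustered points -> {len(thinned)} approximately Poisson({beta:g}) points")
```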

  6. Downscaling remotely sensed imagery using area-to-point cokriging and multiple-point geostatistical simulation

    Science.gov (United States)

    Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong

    2015-03-01

    A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the process of downscaling, a rich spatial correlation model in the form of a training image was incorporated to facilitate reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as a locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiment shows that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that the ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.
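
    The change-of-support constraint that area-to-point methods enforce can be illustrated without the full ATPCK/MPG machinery: a fine-resolution field is adjusted so that each coarse-pixel (block) average reproduces the coarse observation. The simple additive correction below is only a stand-in for cokriging; array sizes and values are arbitrary.

```python
import numpy as np

def coherence_adjust(fine_guess, coarse, block):
    """Additively correct a fine-resolution field so that every block x block
    average reproduces the corresponding coarse pixel (the change-of-support
    constraint that area-to-point methods honour)."""
    out = fine_guess.astype(float).copy()
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            out[sl] += coarse[i, j] - out[sl].mean()
    return out

coarse = np.array([[10.0, 20.0], [30.0, 40.0]])          # a 2 x 2 coarse image
fine_guess = np.random.default_rng(2).random((8, 8))     # any fine-resolution prior/simulation
fine = coherence_adjust(fine_guess, coarse, block=4)
print(fine[:4, :4].mean(), fine[:4, 4:].mean())          # 10.0 and 20.0, as required
```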

  7. Characterizing the Motion of Solar Magnetic Bright Points at High Resolution

    Science.gov (United States)

    Van Kooten, Samuel J.; Cranmer, Steven R.

    2017-11-01

    Magnetic bright points in the solar photosphere, visible in both continuum and G-band images, indicate footpoints of kilogauss magnetic flux tubes extending to the corona. The power spectrum of bright-point motion is thus also the power spectrum of Alfvén wave excitation, transporting energy up flux tubes into the corona. This spectrum is a key input in coronal and heliospheric models. We produce a power spectrum of bright-point motion using radiative magnetohydrodynamic simulations, exploiting spatial resolution higher than can be obtained in present-day observations, while using automated tracking to produce large data quantities. We find slightly higher amounts of power at all frequencies compared to observation-based spectra, while confirming the spectrum shape of recent observations. This also provides a prediction for observations of bright points with DKIST, which will achieve similar resolution and high sensitivity. We also find a granule size distribution in support of an observed two-population distribution, and we present results from tracking passive tracers, which show a similar power spectrum to that of bright points. Finally, we introduce a simplified, laminar model of granulation, with which we explore the roles of turbulence and of the properties of the granulation pattern in determining bright-point motion.
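
    A minimal sketch of how a tracked bright-point position series could be turned into a velocity power spectrum (here with Welch's method from SciPy). The random-walk track, the 10 s cadence, and all other numbers are placeholders, not output from the radiative MHD simulations described above.

```python
import numpy as np
from scipy.signal import welch

dt = 10.0                                         # assumed frame cadence, seconds
rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(4096)) * 0.5    # synthetic bright-point x-position track, km

vx = np.gradient(x, dt)                           # horizontal velocity, km/s
freq, pxx = welch(vx, fs=1.0 / dt, nperseg=512)   # power spectrum of bright-point motion
print(freq[:3], pxx[:3])
```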

  8. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting …

  9. Parametric methods for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text is submitted for the volume ‘A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title ‘Parametric methods'.) 1 Introduction This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has over the last two decades been supplemented by likelihood-based methods for parametric spatial point process models. … is studied in Section 4, and Bayesian inference in Section 5. On one hand, as the development in computer technology and computational statistics continues, computationally intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial …

  10. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel D.

    2015-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements are rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is …

  11. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel

    2016-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness); however, the horizontal scale (wavelength) and spacing of roughness elements are rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is described.
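
    A sketch of the basic operation underlying this kind of spatially explicit spectral analysis (not PySESA's implementation): detrend and window a gridded elevation patch, take its 2-D FFT, and average the power into radial wavenumber bins. The random test surface and the 0.25 m grid spacing are assumptions for illustration.

```python
import numpy as np

def radial_power_spectrum(z, dx):
    """Detrend and window a gridded patch, FFT it, and average the 2-D power
    spectrum into radial wavenumber bins (empty bins come back as NaN)."""
    z = z - z.mean()
    win = np.hanning(z.shape[0])[:, None] * np.hanning(z.shape[1])[None, :]
    power = np.abs(np.fft.fftshift(np.fft.fft2(z * win))) ** 2
    ky = np.fft.fftshift(np.fft.fftfreq(z.shape[0], d=dx))
    kx = np.fft.fftshift(np.fft.fftfreq(z.shape[1], d=dx))
    kr = np.hypot(*np.meshgrid(kx, ky))
    edges = np.linspace(0.0, kr.max(), 30)
    idx = np.digitize(kr.ravel(), edges)
    spec = np.array([power.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(edges))])
    return 0.5 * (edges[:-1] + edges[1:]), spec

# stand-in for a detrended, gridded point-cloud patch (e.g. 0.25 m grid spacing)
z = np.random.default_rng(4).standard_normal((128, 128))
k, spec = radial_power_spectrum(z, dx=0.25)
print(k[:3], spec[:3])
```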

  12. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p…
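
    A small, self-contained illustration of the logistic-regression-with-dummy-points idea (implemented, e.g., in spatstat's ppm with method="logi"), reduced here to a Poisson process so that ordinary logistic regression suffices: data points get response 1, dummy points from a homogeneous Poisson process with intensity rho get response 0, and because rho is constant the −log(rho) offset folds into the intercept. The parameter values, the dummy-intensity rule of thumb, and the use of scikit-learn are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# synthetic Poisson point pattern on the unit square with log lambda(u) = theta0 + theta1 * x
theta0, theta1 = 4.0, 2.0
lam_max = np.exp(theta0 + theta1)
cand = rng.random((rng.poisson(lam_max), 2))
data = cand[rng.random(len(cand)) < np.exp(theta0 + theta1 * cand[:, 0]) / lam_max]

# dummy points: homogeneous Poisson process with (constant) intensity rho
rho = 4.0 * len(data)                           # "4 dummies per data point" -- an assumed rule of thumb
dummy = rng.random((rng.poisson(rho), 2))

# logistic regression: response 1 for data points, 0 for dummy points.
# P(data | point at u) = lambda(u) / (lambda(u) + rho), so the logit is
# log lambda(u) - log rho; the constant -log(rho) is absorbed by the intercept.
u = np.vstack([data, dummy])
y = np.r_[np.ones(len(data)), np.zeros(len(dummy))]
z = u[:, [0]]                                   # covariate z(u): the x-coordinate
fit = LogisticRegression(C=1e6).fit(z, y)       # large C ~ no regularisation
theta1_hat = fit.coef_[0, 0]
theta0_hat = fit.intercept_[0] + np.log(rho)
print(theta0_hat, theta1_hat)                   # roughly 4.0 and 2.0
```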

  13. Pointing Hand Stimuli Induce Spatial Compatibility Effects and Effector Priming

    Directory of Open Access Journals (Sweden)

    Akio Nishimura

    2013-04-01

    Full Text Available The present study investigated the automatic influence of perceiving a picture that indicates another's action on one's own task performance, in terms of spatial compatibility and effector priming. Participants pressed left and right buttons with their left and right hands, respectively, depending on the color of a central dot target. Preceding the target, a left or right hand stimulus (pointing either to the left or right with the index or little finger) was presented. In Experiment 1, with brief presentation of the pointing hand, a spatial compatibility effect was observed: Responses were faster when the direction of the pointed finger and the response position were spatially congruent than when incongruent. The spatial compatibility effect was larger for the pointing index finger stimulus compared to the pointing little finger stimulus. Experiment 2 employed a longer duration of the pointing hand stimuli. In addition to the spatial compatibility effect for the pointing index finger, an effector priming effect was observed: Responses were faster when the anatomical left/right identity of the pointing and response hands matched than when the pointing and response hands differed in left/right identity. The results indicate that with sufficient processing time, both spatial/symbolic and anatomical features of a static body part implying another's action simultaneously influence different aspects of the perceiver's own action. Hierarchical coding, according to which an anatomical code is used only when a spatial code is unavailable, may not be applicable if stimuli as well as responses contain anatomical features.

  14. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    Science.gov (United States)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high precision and large-scale coordinate measurement, one commonly used approach to determine the coordinate of a target point is utilizing the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report one design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized with LightTools software, which enables the reflection of a wide-angle incident light beam onto the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.

  15. Limits in point to point resolution of MOS based pixels detector arrays

    Science.gov (United States)

    Fourches, N.; Desforge, D.; Kebbiri, M.; Kumar, V.; Serruys, Y.; Gutierrez, G.; Leprêtre, F.; Jomard, F.

    2018-01-01

    In high energy physics, point-to-point resolution is a key prerequisite for particle detector pixel arrays. Current and future experiments require the development of inner detectors able to resolve the tracks of particles down to the micron range. Present-day technologies, although not fully implemented in actual detectors, can reach a 5-μm limit, this limit being based on statistical measurements, with a pixel pitch in the 10 μm range. This paper is devoted to the evaluation of the building blocks for use in pixel arrays enabling accurate tracking of charged particles. Based on simulations, we make a quantitative evaluation of the physical and technological limits on pixel size. Attempts to design small pixels based on SOI technology are briefly recalled. A design based on CMOS-compatible technologies that allows a reduction of the pixel size below one micrometer is introduced here. Its physical principle relies on a buried carrier-localizing collecting gate. The fabrication process needed by this pixel design can be based on existing process steps used in silicon microelectronics. The pixel characteristics are discussed as well as the design of pixel arrays. The existing bottlenecks and how to overcome them are discussed in the light of recent ion implantation and material characterization experiments.

  16. Modern Statistics for Spatial Point Processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus

    2007-01-01

    We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...

  17. Modern statistics for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus

    We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...

  18. A tutorial on Palm distributions for spatial point processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we...

  19. Optimal configuration of spatial points in the reactor cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1968-01-01

    The optimal configuration of spatial points was chosen with respect to the total number needed for integration of reactions in the reactor cell. The previously developed code VESTERN was used for numerical verification of the method on a standard reactor cell. The code applies the collision probability method for calculating the neutron flux distribution. It is shown that the total number of spatial points is half the number of spatial zones needed for determination of the number of reactions in the cell with the preset precision. This result shows the direction for further condensing the procedure for calculating the space-energy distribution of the neutron flux in a reactor cell.

  20. A multi-resolution HEALPix data structure for spherically mapped point data

    Directory of Open Access Journals (Sweden)

    Robert W. Youngren

    2017-06-01

    Full Text Available Data describing entities with locations that are points on a sphere are described as spherically mapped. Several data structures designed for spherically mapped data have been developed. One of them, known as Hierarchical Equal Area iso-Latitude Pixelization (HEALPix), partitions the sphere into twelve diamond-shaped equal-area base cells and then recursively subdivides each cell into four diamond-shaped subcells, continuing to the desired level of resolution. Twelve quadtrees, one associated with each base cell, store the data records associated with that cell and its subcells. HEALPix has been used successfully for numerous applications, notably including cosmic microwave background data analysis. However, for applications involving sparse point data HEALPix has possible drawbacks, including inefficient memory utilization, overwriting of proximate points, and return of spurious points for certain queries. A multi-resolution variant of HEALPix specifically optimized for sparse point data was developed. The new data structure allows different areas of the sphere to be subdivided at different levels of resolution. It combines HEALPix's positive features with the advantages of multi-resolution, including reduced memory requirements and improved query performance. An implementation of the new Multi-Resolution HEALPix (MRH) data structure was tested using spherically mapped data from four different scientific applications (warhead fragmentation trajectories, weather station locations, galaxy locations, and synthetic locations). Four types of range queries were applied to each data structure for each dataset. Compared to HEALPix, MRH used two to four orders of magnitude less memory for the same data, and on average its queries executed 72% faster. Keywords: Computer science
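
    A small illustration of the HEALPix indexing that the quadtrees above are built on, using the standard healpy package (an assumed dependency; this is not the MRH implementation from the paper). It shows how the same point maps to pixels at successively finer resolutions, and why the NESTED scheme makes a per-base-cell quadtree natural.

```python
import healpy as hp

lon, lat = 31.4, -12.7                    # an arbitrary point, in degrees
for order in range(5):                    # resolution levels: nside = 2**order
    nside = 2 ** order
    pix = hp.ang2pix(nside, lon, lat, nest=True, lonlat=True)
    print(f"order {order}: nside={nside:2d}, {hp.nside2npix(nside):5d} pixels, point in pixel {pix}")
# In the NESTED scheme the four children of pixel p at the next level are
# 4*p .. 4*p + 3, which is exactly what makes one quadtree per base cell natural.
```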

  1. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  2. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    Science.gov (United States)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to the unique pinhole. With the wide application of the instrument, there is a growing requirement for the evaluation of the imaging performance of the system. The point-spread function (PSF) is an important approach to the evaluation of the imaging capability of an optical instrument. Among a variety of measurement methods of the PSF, the point source method has been widely used because it is easy to operate and the measurement results are approximate to the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of the point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin, doped with different sizes of polystyrene microspheres, is designed. The PSF of the CRM is measured with different sizes of microspheres and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
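
    A rough numerical sketch of the effect being analyzed: convolving an assumed Gaussian lateral PSF with a 1-D top-hat stand-in for a microsphere of finite diameter, then measuring the broadened FWHM. The 0.3 μm PSF width, the bead sizes, and the 1-D top-hat simplification are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def fwhm(x, y):
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    return x[above[-1]] - x[above[0]]          # coarse, grid-limited estimate

x = np.linspace(-5, 5, 4001)                   # um
sigma = 0.3 / 2.355                            # assumed true lateral PSF: Gaussian, FWHM = 0.3 um
psf = np.exp(-x**2 / (2 * sigma**2))

for d in (0.05, 0.1, 0.3, 0.5, 1.0):           # microsphere diameters, um
    bead = (np.abs(x) <= d / 2).astype(float)  # crude 1-D top-hat stand-in for the bead
    measured = np.convolve(psf, bead, mode="same")
    print(f"bead {d:4.2f} um -> measured FWHM {fwhm(x, measured):.3f} um (true PSF 0.300 um)")
```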

  3. Femtosecond photoelectron point projection microscope

    International Nuclear Information System (INIS)

    Quinonez, Erik; Handali, Jonathan; Barwick, Brett

    2013-01-01

    By utilizing a nanometer ultrafast electron source in a point projection microscope we demonstrate that images of nanoparticles with spatial resolutions of the order of 100 nanometers can be obtained. The duration of the emission process of the photoemitted electrons used to make images is shown to be of the order of 100 fs using an autocorrelation technique. The compact geometry of this photoelectron point projection microscope does not preclude its use as a simple ultrafast electron microscope, and we use simple analytic models to estimate the temporal resolutions that can be expected when using it as a pump-probe ultrafast electron microscope. These models show a significant increase in temporal resolution when comparing to ultrafast electron microscopes based on conventional designs. We also model the microscope's spectroscopic abilities to capture ultrafast phenomena such as the photon-induced near-field effect.

  4. Development and evaluation of spatial point process models for epidermal nerve fibers.

    Science.gov (United States)

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
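
    A minimal sketch of the structure of the second (cluster) model described above: base points Φb form a Poisson process, and each base generates a Poisson number of end points displaced isotropically around it. Rates and spread are made-up values; this is not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(6)

# base points Phi_b: homogeneous Poisson process on a 1 mm x 1 mm skin patch
bases = rng.random((rng.poisson(50.0), 2))

# end points Phi_e: each base gets a Poisson number of fibres whose end points
# are displaced isotropically from the base (mean count and spread are illustrative)
mean_fibres, spread = 3.0, 0.02
ends, parent = [], []
for i, b in enumerate(bases):
    k = rng.poisson(mean_fibres)
    ends.append(b + spread * rng.standard_normal((k, 2)))
    parent += [i] * k
ends = np.concatenate(ends) if ends else np.empty((0, 2))

fibres_per_base = np.bincount(parent, minlength=len(bases))
print("mean fibres per base:", fibres_per_base.mean())
```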

  5. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    Science.gov (United States)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.
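
    A sketch of one simple final step of such a workflow: gridding classified bare-earth points into a raster DEM by binned means (SciPy's binned_statistic_2d). The synthetic points, 1 m cell size, and the mean statistic are assumptions; production DEMs would typically use interpolation with void filling.

```python
import numpy as np
from scipy.stats import binned_statistic_2d

rng = np.random.default_rng(7)
# stand-in for georeferenced bare-earth points: easting, northing and elevation (m)
e, n = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)
z = 2.0 + 0.01 * e + 0.3 * np.sin(n / 10.0) + 0.05 * rng.standard_normal(5000)

cell = 1.0                                     # 1-m DEM resolution
edges = np.arange(0, 100 + cell, cell)
stat, _, _, _ = binned_statistic_2d(e, n, z, statistic="mean", bins=[edges, edges])
dem = stat.T                                   # rows = northing, cols = easting; NaN where no points fell
print(dem.shape, np.nanmean(dem))
```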

  6. Probing the Spatial Distribution of the Interstellar Dust Medium by High Angular Resolution X-ray Halos of Point Sources

    Science.gov (United States)

    Xiang, Jingen

    X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. The scattering within small angles results in an X-ray "halo". The halo properties are significantly affected by the energy of radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Therefore analyzing the X-ray halo properties is an important tool to study the size distribution and spatial distribution of interstellar grains, which plays a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution and broad energy band, the Chandra ACIS is so far the best instrument for studying the X-ray halos. But the direct images of bright sources obtained with ACIS usually suffer from severe pileup which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al. to resolve the X-ray dust scattering halos of point sources from the zeroth order data in CC-mode or the first order data in TE-mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. Then we study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC-mode or TE-mode. Using the interstellar grain models WD01 and MRN to fit the halo profiles, we get the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find there is a good linear correlation not only between the scattering hydrogen column density from the WD01 model and the one from the MRN model, but also between N_H derived from spectral fits and the one derived from the grain models WD01 and MRN (except for GX 301-2 and Vela X-1): N…

  7. Shot-noise-weighted processes : a new family of spatial point processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); I.S. Molchanov (Ilya)

    1995-01-01

    The paper suggests a new family of spatial point process distributions. They are defined by means of densities with respect to the Poisson point process within a bounded set. These densities are given in terms of a functional of the shot-noise process with a given influence …

  8. Discussion of "Modern statistics for spatial point processes"

    DEFF Research Database (Denmark)

    Jensen, Eva Bjørn Vedel; Prokesová, Michaela; Hellmund, Gunnar

    2007-01-01

    ABSTRACT. The paper ‘Modern statistics for spatial point processes’ by Jesper Møller and Rasmus P. Waagepetersen is based on a special invited lecture given by the authors at the 21st Nordic Conference on Mathematical Statistics, held at Rebild, Denmark, in June 2006. At the conference, Antti...

  9. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

    The intensity function is assumed to be of log-linear form β+θ⊤z(u), where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its finite-sample properties in comparison with the maximum first-order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions, as well as settings where z is completely or only partially known.

  10. An Automated Technique for Generating Georectified Mosaics from Ultra-High Resolution Unmanned Aerial Vehicle (UAV) Imagery, Based on Structure from Motion (SfM) Point Clouds

    Directory of Open Access Journals (Sweden)

    Christopher Watson

    2012-05-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are an exciting new remote sensing tool capable of acquiring high resolution spatial data. Remote sensing with UAVs has the potential to provide imagery at an unprecedented spatial and temporal resolution. The small footprint of UAV imagery, however, makes it necessary to develop automated techniques to geometrically rectify and mosaic the imagery such that larger areas can be monitored. In this paper, we present a technique for geometric correction and mosaicking of UAV photography using feature matching and Structure from Motion (SfM) photogrammetric techniques. Images are processed to create three dimensional point clouds, initially in an arbitrary model space. The point clouds are transformed into a real-world coordinate system using either a direct georeferencing technique that uses estimated camera positions or via a Ground Control Point (GCP) technique that uses automatically identified GCPs within the point cloud. The point cloud is then used to generate a Digital Terrain Model (DTM) required for rectification of the images. Subsequent georeferenced images are then joined together to form a mosaic of the study area. The absolute spatial accuracy of the direct technique was found to be 65–120 cm whilst the GCP technique achieves an accuracy of approximately 10–15 cm.
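
    The georeferencing step can be sketched as a 3-D similarity (Helmert) transform from SfM model space to world coordinates, estimated from matched GCPs with the Umeyama/Procrustes method. This is only an illustration of the transform, not the software used in the paper; the toy GCPs and the known transform are assumptions.

```python
import numpy as np

def similarity_transform(model_pts, world_pts):
    """Least-squares scale s, rotation R, translation t (Umeyama/Procrustes)
    such that world ~= s * R @ model + t, from matched GCP coordinates."""
    mu_m, mu_w = model_pts.mean(0), world_pts.mean(0)
    A, B = model_pts - mu_m, world_pts - mu_w
    U, S, Vt = np.linalg.svd(B.T @ A / len(A))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                           # guard against a reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum(axis=1).mean()
    t = mu_w - s * R @ mu_m
    return s, R, t

# toy check: recover a known transform from five "GCPs"
rng = np.random.default_rng(8)
model = rng.random((5, 3))
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
world = 2.5 * model @ R_true.T + np.array([100.0, 200.0, 10.0])
s, R, t = similarity_transform(model, world)
print(s, t)                                      # ~2.5 and ~[100, 200, 10]
```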

  11. Influence of Signal-to-Noise Ratio and Point Spread Function on Limits of Super-Resolution

    NARCIS (Netherlands)

    Pham, T.Q.; Vliet, L.J. van; Schutte, K.

    2005-01-01

    This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low resolution images. Three important parameters influence the outcome of this limit: the total Point Spread Function (PSF), the Signal-to-Noise Ratio (SNR) and the number of input images.

  12. On tests of randomness for spatial point patterns

    International Nuclear Information System (INIS)

    Doguwa, S.I.

    1990-11-01

    New tests of randomness for spatial point patterns are introduced. These test statistics are then compared in a power study with the existing alternatives. The results of the power study suggest that one of the proposed tests is extremely powerful against both aggregated and regular alternatives. (author). 9 refs, 7 figs, 3 tabs

  13. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    The original spatial point process X is superposed with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking…

  14. Interesting Interest Points

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Lindbjerg; Pedersen, Kim Steenstrup

    2012-01-01

    Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed. But a measure of this kind will be dependent on the chosen vision application. We propose a more general performance measure based on spatial invariance of interest points under changing acquisition parameters by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of existing well-established interest point detection methods. Automatic performance evaluation of interest points is hard … position. The LED illumination provides the option for artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed scale…

  15. Resolution enhancement of scanning four-point-probe measurements on two-dimensional systems

    DEFF Research Database (Denmark)

    Hansen, Torben Mikael; Stokbro, Kurt; Hansen, Ole

    2003-01-01

    A method to improve the resolution of four-point-probe measurements of two-dimensional (2D) and quasi-2D systems is presented. By mapping the conductance on a dense grid around a target area and postprocessing the data, the resolution can be improved by a factor of approximately 50, to better than 1/15 of the four-point-probe electrode spacing. The real conductance sheet is simulated by a grid of discrete resistances, which is optimized by means of a standard optimization algorithm until the simulated voltage-to-current ratios converge with the measurements. The method has been tested against simulated…

  16. Hierarchical spatial point process analysis for a plant community with high biodiversity

    DEFF Research Database (Denmark)

    Illian, Janine B.; Møller, Jesper; Waagepetersen, Rasmus

    2009-01-01

    A complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maximum likelihood approach to inference where problems arise due to unknown interaction radii for the plants…

  17. Super-resolution for a point source better than λ/500 using positive refraction

    Science.gov (United States)

    Miñano, Juan C.; Marqués, Ricardo; González, Juan C.; Benítez, Pablo; Delgado, Vicente; Grabovickic, Dejan; Freire, Manuel

    2011-12-01

    Leonhardt (2009 New J. Phys. 11 093040) demonstrated that the two-dimensional (2D) Maxwell fish eye (MFE) lens can focus perfectly 2D Helmholtz waves of arbitrary frequency; that is, it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a ‘perfect point drain’ located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution property for one microwave frequency has been manufactured and tested (Ma et al 2010 arXiv:1007.2530v1; Ma et al 2010 New J. Phys. 13 033016). However, neither software simulations nor experimental measurements for a broad band of frequencies have yet been reported. Here, we present steady-state simulations with a non-perfect drain for a device equivalent to the MFE, called the spherical geodesic waveguide (SGW), which predicts up to λ/500 super-resolution close to discrete frequencies. Out of these frequencies, the SGW does not show super-resolution in the analysis carried out.

  18. Super-resolution for a point source better than λ/500 using positive refraction

    International Nuclear Information System (INIS)

    Miñano, Juan C; González, Juan C; Benítez, Pablo; Grabovickic, Dejan; Marqués, Ricardo; Delgado, Vicente; Freire, Manuel

    2011-01-01

    Leonhardt (2009 New J. Phys. 11 093040) demonstrated that the two-dimensional (2D) Maxwell fish eye (MFE) lens can focus perfectly 2D Helmholtz waves of arbitrary frequency; that is, it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a ‘perfect point drain’ located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution property for one microwave frequency has been manufactured and tested (Ma et al 2010 arXiv:1007.2530v1; Ma et al 2010 New J. Phys. 13 033016). However, neither software simulations nor experimental measurements for a broad band of frequencies have yet been reported. Here, we present steady-state simulations with a non-perfect drain for a device equivalent to the MFE, called the spherical geodesic waveguide (SGW), which predicts up to λ/500 super-resolution close to discrete frequencies. Out of these frequencies, the SGW does not show super-resolution in the analysis carried out. (paper)

  19. Super-resolution for a point source using positive refraction

    Science.gov (United States)

    Miñano, Juan C.; Benítez, Pablo; González, Juan C.; Grabovičkić, Dejan; Ahmadpanahi, Hamed

    Leonhardt demonstrated (2009) that the 2D Maxwell Fish Eye lens (MFE) can focus perfectly 2D Helmholtz waves of arbitrary frequency, i.e., it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a receptor called "perfect drain" (PD) located at the corresponding MFE image point. The PD has the property of absorbing the complete radiation without radiation or scattering and it has been claimed as necessary to obtain super-resolution (SR) in the MFE. However, a prototype using a "drain" different from the PD has shown λ/5 resolution for microwave frequencies (Ma et al, 2010). Recently, the SR properties of a device equivalent to the MFE, called the Spherical Geodesic Waveguide (SGW) (Miñano et al, 2012), have been analyzed. The reported results show resolution up to λ/3000, for the SGW loaded with the perfect drain, and up to λ/500 for the SGW without perfect drain. The perfect drain was realized as a coaxial probe loaded with properly calculated impedance. The SGW provides SR only in a narrow band of frequencies close to the resonance Schumann frequencies. Here we analyze the SGW loaded with a small "perfect drain region" (González et al, 2011). This drain is designed as a region made of a material with complex permittivity. The comparative results show that there is no significant difference in the SR properties for both perfect drain designs.

  20. High-resolution wave number spectrum using multi-point measurements in space – the Multi-point Signal Resonator (MSR) technique

    Directory of Open Access Journals (Sweden)

    Y. Narita

    2011-02-01

    Full Text Available A new analysis method is presented that provides a high-resolution power spectrum in a broad wave number domain based on multi-point measurements. The analysis technique is referred to as the Multi-point Signal Resonator (MSR) and it benefits from Capon's minimum variance method for obtaining the proper power spectral density of the signal as well as the MUSIC algorithm (Multiple Signal Classification) for considerably reducing the noise part in the spectrum. The mathematical foundation of the analysis method is presented and it is applied to synthetic data as well as Cluster observations of the interplanetary magnetic field. Using the MSR technique for Cluster data we find a wave in the solar wind propagating parallel to the mean magnetic field with relatively small amplitude, which is not identified by the Capon spectrum. The Cluster data analysis shows the potential of the MSR technique for studying waves and turbulence using multi-point measurements.
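
    A sketch of the Capon (minimum-variance) stage that MSR builds on, applied to synthetic multi-point data: the spectrum is P(k) = 1 / (h(k)ᴴ R⁻¹ h(k)), with h the steering vector over the measurement positions and R the sensor covariance matrix. The four "spacecraft" positions, the plane-wave test signal, and the scan over kx only are illustrative assumptions; the MUSIC-based noise reduction of the full MSR technique is omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

# positions of four measurement points ("spacecraft"), arbitrary units
r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
k_true = np.array([2.0, 0.0, 0.0])               # wave vector of a synthetic plane wave
omega, t = 1.5, np.arange(512) * 0.1

# synthetic complex time series at each point: plane wave + noise
sig = np.exp(1j * (r @ k_true))[:, None] * np.exp(-1j * omega * t)[None, :]
data = sig + 0.2 * (rng.standard_normal((4, t.size)) + 1j * rng.standard_normal((4, t.size)))

Rcov = data @ data.conj().T / t.size             # sensor covariance matrix
Rinv = np.linalg.inv(Rcov)

def capon_power(k):
    h = np.exp(1j * (r @ k))                     # steering vector for wave vector k
    return 1.0 / np.real(h.conj() @ Rinv @ h)

kx = np.linspace(-4, 4, 161)
spectrum = [capon_power(np.array([k, 0.0, 0.0])) for k in kx]
print("spectral peak at kx =", kx[int(np.argmax(spectrum))])   # near k_true[0] = 2.0
```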

  1. Investigation of spatial resolution dependent variability in transcutaneous oxygen saturation using point spectroscopy system

    Science.gov (United States)

    Philimon, Sheena P.; Huong, Audrey K. C.; Ngu, Xavier T. I.

    2017-08-01

    This paper aims to investigate the variation in one's percent mean transcutaneous oxygen saturation (StO2) with differences in the spatial resolution of the data. This work required knowledge of the extinction coefficients of hemoglobin derivatives in the wavelength range of 520–600 nm to solve for the StO2 value via an iterative fitting procedure. A pilot study was conducted on three healthy subjects with spectroscopic data collected from their right index finger at different arbitrarily selected distances. The StO2 value estimated by the Extended Modified Lambert Beer (EMLB) model revealed a higher mean StO2 of 91.1 ± 1.3% at a proximity distance of 30 mm compared to 60.83 ± 2.8% at 200 mm. The results showed a high correlation between data spatial resolution and StO2 value, and revealed a decrease in StO2 value as the sampling distance increased. The preliminary findings from this study contribute to knowledge of the appropriate distance range for consistent and highly repeatable measurement of skin oxygenation.
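
    A minimal sketch of a Lambert-Beer-type fit for StO2 (not the authors' EMLB model): measured attenuation is regressed on the extinction spectra of HbO2 and Hb plus a constant, and StO2 is the fitted HbO2 fraction. The extinction spectra below are placeholder curves, not tabulated coefficients, and all numbers are synthetic.

```python
import numpy as np

# wavelengths (nm) and *placeholder* extinction spectra for HbO2 and Hb;
# a real analysis would use tabulated molar extinction coefficients.
wl = np.linspace(520, 600, 81)
eps_hbo2 = 1.0 + 0.5 * np.sin((wl - 520) / 15.0)
eps_hb = 1.2 + 0.4 * np.cos((wl - 520) / 20.0)

# synthetic "measured" attenuation for a true saturation of 75%
atten = 3.0 * (0.75 * eps_hbo2 + 0.25 * eps_hb) + 0.2
atten += 0.01 * np.random.default_rng(10).standard_normal(wl.size)

# least-squares fit: attenuation ~ a*eps_HbO2 + b*eps_Hb + const (Lambert-Beer-like model)
X = np.column_stack([eps_hbo2, eps_hb, np.ones_like(wl)])
a, b, _ = np.linalg.lstsq(X, atten, rcond=None)[0]
print(f"StO2 = {100.0 * a / (a + b):.1f} %")   # ~75 %
```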

  2. Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2015-08-01

    Full Text Available Three-dimensional (3D) point analysis and visualization is one of the most effective methods of point cluster detection and segmentation in geospatial datasets. However, serious scattering and clotting characteristics interfere with the visual detection of 3D point clusters. To overcome this problem, this study proposes the use of 3D Voronoi diagrams to analyze and visualize 3D points instead of the original data items. The proposed algorithm computes the cluster of 3D points by applying a set of 3D Voronoi cells to describe and quantify 3D points. The decomposition of the point cloud of 3D models is guided by the 3D Voronoi cell parameters. The parameter values are mapped from the Voronoi cells to 3D points to show the spatial pattern and relationships; thus, a 3D point cluster pattern can be highlighted and easily recognized. To capture different cluster patterns, continuous progressive clusters and segmentations are tested. The 3D spatial relationship is shown to facilitate cluster detection. Furthermore, the generated segmentations of real 3D data cases are exploited to demonstrate the feasibility of our approach in detecting different spatial clusters for continuous point cloud segmentation.
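
    A small sketch of the per-point Voronoi descriptor idea using SciPy (not the authors' implementation): build a 3-D Voronoi diagram, take the volume of each bounded cell as a local density measure, and flag points with small cells as locally dense cluster candidates. The random point cloud and the 25th-percentile threshold are arbitrary choices.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(11)
pts = rng.random((300, 3))                       # stand-in for a 3-D point cloud
vor = Voronoi(pts)

volumes = np.full(len(pts), np.nan)
for i, reg_idx in enumerate(vor.point_region):
    region = vor.regions[reg_idx]
    if region and -1 not in region:              # skip unbounded cells on the hull
        volumes[i] = ConvexHull(vor.vertices[region]).volume

# small Voronoi cells indicate locally dense neighbourhoods, i.e. cluster candidates
threshold = np.nanpercentile(volumes, 25)
print("points flagged as locally dense:", int(np.sum(volumes < threshold)))
```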

  3. IAU 2015 Resolution B2 on Recommended Zero Points for the Absolute and Apparent Bolometric Magnitude Scales

    DEFF Research Database (Denmark)

    Mamajek, E. E.; Torres, G.; Prsa, A.

    2015-01-01

    The XXIXth IAU General Assembly in Honolulu adopted IAU 2015 Resolution B2 on recommended zero points for the absolute and apparent bolometric magnitude scales. The resolution was proposed by the IAU Inter-Division A-G Working Group on Nominal Units for Stellar and Planetary Astronomy after consulting with a broad spectrum of researchers from the astronomical community. Resolution B2 resolves the long-standing absence of an internationally-adopted zero point for the absolute and apparent bolometric magnitude scales. Resolution B2 defines the zero point of the absolute bolometric magnitude scale such that a radiation source with $M_{\rm Bol} = 0$ has luminosity $L_0 = 3.0128 \times 10^{28}$ W. The zero point of the apparent bolometric magnitude scale ($m_{\rm Bol} = 0$) corresponds to irradiance $f_0 = 2.518021002 \times 10^{-8}$ W m$^{-2}$. The zero points were chosen so that the nominal solar luminosity ($3.828 \times 10^{26}$ W…
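
    The zero points quoted above translate directly into magnitude formulas; below is a tiny sketch using the resolution's own constants (function names are illustrative).

```python
import math

L0 = 3.0128e28        # W, IAU 2015 zero-point luminosity (absolute scale, M_bol = 0)
f0 = 2.518021002e-8   # W m^-2, zero-point irradiance (apparent scale, m_bol = 0)

def absolute_bolometric_magnitude(L_watts):
    return -2.5 * math.log10(L_watts / L0)

def apparent_bolometric_magnitude(f_watts_per_m2):
    return -2.5 * math.log10(f_watts_per_m2 / f0)

# check with the nominal solar luminosity quoted in the resolution
print(absolute_bolometric_magnitude(3.828e26))   # ~4.74
```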

  4. Spatial point process analysis for a plant community with high biodiversity

    DEFF Research Database (Denmark)

    Illian, Janine; Møller, Jesper; Waagepetersen, Rasmus Plenge

    A complex multivariate spatial point pattern for a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maximum likelihood approach to inference where problems arise due to unknown interaction radii for the plants. We next demonstrate that a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii. From an ecological perspective, we are able both…

  5. Spatially heterogeneous dynamics investigated via a time-dependent four-point density correlation function

    DEFF Research Database (Denmark)

    Lacevic, N.; Starr, F. W.; Schrøder, Thomas

    2003-01-01

    Relaxation in supercooled liquids above their glass transition and below the onset temperature of "slow" dynamics involves the correlated motion of neighboring particles. This correlated motion results in the appearance of spatially heterogeneous dynamics or "dynamical heterogeneity." Traditional two-point time-dependent density correlation functions, while providing information about the transient "caging" of particles on cooling, are unable to provide sufficiently detailed information about correlated motion and dynamical heterogeneity. Here, we study a four-point, time-dependent density correlation function g4(r,t) and corresponding "structure factor" S4(q,t) which measure the spatial correlations between the local liquid density at two points in space, each at two different times, and so are sensitive to dynamical heterogeneity. We study g4(r,t) and S4(q,t) via molecular dynamics…

  6. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function and in a second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests.
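
    A rough, self-contained sketch of the two-step idea for a stationary Thomas process, so that step 1 collapses to estimating a constant intensity and step 2 fits the clustering parameters by minimum contrast against the Thomas K-function K(r) = πr² + κ⁻¹(1 − exp(−r²/(4σ²))). The empirical K below uses no edge correction and all parameter values are illustrative; it is not the estimator studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

rng = np.random.default_rng(12)

# simulate a stationary Thomas cluster pattern on the unit square
kappa, mu, sigma = 25.0, 8.0, 0.02
parents = rng.random((rng.poisson(kappa), 2))
pts = np.concatenate([p + sigma * rng.standard_normal((rng.poisson(mu), 2)) for p in parents])
pts = pts[(pts >= 0).all(1) & (pts <= 1).all(1)]
lam_hat = float(len(pts))                     # step 1 surrogate: constant-intensity estimate

# empirical K-function on a grid of distances (no edge correction -- rough sketch only)
r_grid = np.linspace(0.01, 0.15, 15)
d = pdist(pts)
K_hat = np.array([2.0 * np.sum(d <= r) / (lam_hat * len(pts)) for r in r_grid])

# step 2: minimum contrast against the Thomas-process K-function
def K_thomas(r, kappa, sigma):
    return np.pi * r**2 + (1.0 - np.exp(-r**2 / (4.0 * sigma**2))) / kappa

def contrast(log_par):
    k, s = np.exp(log_par)                    # optimise on the log scale to keep parameters positive
    return np.sum((K_hat**0.25 - K_thomas(r_grid, k, s)**0.25) ** 2)

res = minimize(contrast, x0=np.log([10.0, 0.05]), method="Nelder-Mead")
print("estimated kappa, sigma:", np.exp(res.x))
```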

  7. Spatial judgments in the horizontal and vertical planes from different vantage points.

    Science.gov (United States)

    Prytz, Erik; Scerbo, Mark W

    2012-01-01

    Todorović (2008 Perception 37 106-125) reported that there are systematic errors in the perception of 3-D space when viewing 2-D linear perspective drawings depending on the observer's vantage point. Because these findings were restricted to the horizontal plane, the current study was designed to determine the nature of these errors in the vertical plane. Participants viewed an image containing multiple colonnades aligned on parallel converging lines receding to a vanishing point. They were asked to judge where, in the physical room, the next column should be placed. The results support Todorović in that systematic deviations in the spatial judgments depended on vantage point for both the horizontal and vertical planes. However, there are also marked differences between the two planes. While judgments in both planes failed to compensate adequately for the vantage-point shift, the vertical plane induced greater distortions of the stimulus image itself within each vantage point.

  8. Development of spatial scaling technique of forest health sample point information

    Science.gov (United States)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

    Most forest health assessments are limited to monitoring at sampling sites. In Britain, forest health monitoring was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with a database constructed using the Oracle database program, including density. The Forest Health Assessment in Great Bay in the United States was conducted to identify the characteristics of the ecosystem populations of each area, based on an evaluation of forest health by tree species, diameter at breast height, water pipe and density in summer and fall of 200. In Korea, the first evaluation report on forest health vitality placed 1000 sample points in forests on a systematic grid at regular 4 km × 4 km intervals and assessed 29 items in four categories: tree health, vegetation, soil, and atmosphere. As mentioned above, existing research has relied on monitoring at these survey sample points, which makes it difficult to collect information that supports customized policies for regional sites. For special forests such as urban forests and other major forests, policy and management appropriate to the forest characteristics are needed, so the set of survey points for diagnosing and evaluating forest health must be expanded. For this reason, we constructed a spatial scaling method: the 29 indices of the sample-point table from the first forest health vitality diagnosis and evaluation report are spatially interpolated according to the characteristics of each index, PCA and correlation analysis are conducted to retain the significant indicators, weights are then assigned to each index, and forest health is evaluated through statistical grading.
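
    As a minimal illustration of the interpolation step described above (inverse-distance weighting of a sample-point indicator onto target locations), the sketch below is a generic stand-in; the weighting exponent, coordinates and values are assumptions, not the report's actual configuration.

        import numpy as np

        def idw(sample_xy, sample_values, grid_xy, power=2.0):
            # Inverse-distance-weighted interpolation of sample-point values to grid points.
            d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
            d = np.maximum(d, 1e-9)          # avoid division by zero at sample locations
            w = 1.0 / d**power
            return (w @ sample_values) / w.sum(axis=1)

        # Example: five sample points of a health index interpolated to three grid locations.
        samples = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], dtype=float)
        values = np.array([70.0, 55.0, 80.0, 60.0, 65.0])
        grid = np.array([[1, 1], [3, 1], [2, 3]], dtype=float)
        print(idw(samples, values, grid))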

  9. Influence of signal-to-noise ratio and point spread function on limits of super-resolution

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.; Schutte, K.

    2005-01-01

    This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low-resolution images. Three important parameters influence the outcome of this limit: the total Point Spread Function (PSF), the Signal-to-Noise Ratio (SNR) and the number of input images.

  10. Kaplan-Meier estimators of distance distributions for spatial point processes

    NARCIS (Netherlands)

    Baddeley, A.J.; Gill, R.D.

    1997-01-01

    When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit

  11. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    2009-01-01

    The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests....

  12. Extraction of Features from High-resolution 3D LiDaR Point-cloud Data

    Science.gov (United States)

    Keller, P.; Kreylos, O.; Hamann, B.; Kellogg, L. H.; Cowgill, E. S.; Yikilmaz, M. B.; Hering-Bertram, M.; Hagen, H.

    2008-12-01

    Airborne and tripod-based LiDaR scans are capable of producing new insight into geologic features by providing high-quality 3D measurements of the landscape. High-resolution LiDaR is a promising method for studying slip on faults, erosion, and other landscape-altering processes. LiDaR scans can produce up to several billion individual point returns associated with the reflection of a laser from natural and engineered surfaces; these point clouds are typically used to derive a high-resolution digital elevation model (DEM). Currently, there exist only a few methods that can support the analysis of the data at full resolution and in the natural 3D perspective in which it was collected by working directly with the points. We are developing new algorithms for extracting features from LiDaR scans, and present a method for determining the local curvature of a LiDaR data set, working directly with the individual point returns of a scan. Computing the curvature enables us to rapidly and automatically identify key features such as ridge-lines, stream beds, and edges of terraces. We fit polynomial surface patches via a moving least squares (MLS) approach to local point neighborhoods, determining curvature values for each point. The size of the local point neighborhood is defined by the user. Since both terrestrial and airborne LiDaR scans suffer from high noise, we apply additional pre- and post-processing smoothing steps to eliminate unwanted features. LiDaR data also capture objects such as buildings and trees, greatly complicating the task of extracting reliable curvature values. Hence, we use a stochastic approach to determine whether a point can be reliably used to estimate curvature or not. Additionally, we have developed a graph-based approach to establish connectivities among points that correspond to regions of high curvature. The result is an explicit description of ridge-lines, for example. We have applied our method to the raw point cloud data collected as part of the Geo
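
    A minimal sketch of the per-point curvature idea (local frame from PCA, a distance-weighted quadratic fit in that frame, curvature from the fitted coefficients); the neighbourhood size, the weighting and the pre-/post-smoothing and reliability tests mentioned above are simplified assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def point_curvatures(points, k=30):
            # Gaussian and mean curvature estimates for each point of a 3D point cloud.
            tree = cKDTree(points)
            gauss, mean = np.empty(len(points)), np.empty(len(points))
            for i, p in enumerate(points):
                _, idx = tree.query(p, k=k)
                nbrs = points[idx] - p                  # neighbourhood centred on the query point
                # Local frame from PCA; the smallest-variance axis approximates the normal.
                eigvals, eigvecs = np.linalg.eigh(np.cov((nbrs - nbrs.mean(axis=0)).T))
                t1, t2, nrm = eigvecs[:, 2], eigvecs[:, 1], eigvecs[:, 0]
                u, v, w = nbrs @ t1, nbrs @ t2, nbrs @ nrm
                # MLS-style weighted fit of w = a*u^2 + b*u*v + c*v^2 + d*u + e*v + f.
                A = np.column_stack([u**2, u * v, v**2, u, v, np.ones_like(u)])
                sw = np.sqrt(np.exp(-(u**2 + v**2) / (2.0 * np.median(u**2 + v**2) + 1e-12)))
                a, b, c, d, e, _ = np.linalg.lstsq(A * sw[:, None], w * sw, rcond=None)[0]
                # Gaussian and mean curvature of the patch w(u, v) at the query point (u = v = 0).
                denom = 1.0 + d**2 + e**2
                gauss[i] = (4 * a * c - b**2) / denom**2
                mean[i] = ((1 + e**2) * 2 * a - 2 * d * e * b + (1 + d**2) * 2 * c) / (2.0 * denom**1.5)
            return gauss, mean

        # Quick sanity check on a unit sphere: the Gaussian curvature should be close to 1.
        rng = np.random.default_rng(0)
        xyz = rng.normal(size=(2000, 3)); xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
        g, _ = point_curvatures(xyz, k=30)
        print(np.median(g))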

  13. LIDAR, Point Clouds, and their Archaeological Applications

    Energy Technology Data Exchange (ETDEWEB)

    White, Devin A [ORNL

    2013-01-01

    It is common in contemporary archaeological literature, in papers at archaeological conferences, and in grant proposals to see heritage professionals use the term LIDAR to refer to high spatial resolution digital elevation models and the technology used to produce them. The goal of this chapter is to break that association and introduce archaeologists to the world of point clouds, in which LIDAR is only one member of a larger family of techniques to obtain, visualize, and analyze three-dimensional measurements of archaeological features. After describing how point clouds are constructed, there is a brief discussion on the currently available software and analytical techniques designed to make sense of them.

  14. Spatial interpolation of point velocities in stream cross-section

    Directory of Open Access Journals (Sweden)

    Hasníková Eliška

    2015-03-01

    Full Text Available The most frequently used instrument for measuring the velocity distribution in the cross-section of small rivers is the propeller-type current meter. The output of measurements with this instrument is a small set of point data. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurements themselves. This paper describes the preparation of interpolation models.
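
    For example, a handful of measured point velocities in a cross-section can be densified onto a regular grid with a standard scattered-data interpolator; the coordinates and values below are made up, and the paper develops its own interpolation models rather than this generic one.

        import numpy as np
        from scipy.interpolate import griddata

        # Measured point velocities (y = horizontal position in m, z = depth in m), in m/s.
        pts = np.array([[0.5, 0.1], [1.0, 0.1], [1.5, 0.1], [0.5, 0.4], [1.0, 0.4], [1.5, 0.4]])
        vel = np.array([0.32, 0.45, 0.30, 0.25, 0.38, 0.24])

        # Dense grid over the cross-section and cubic interpolation of the velocity field.
        yy, zz = np.meshgrid(np.linspace(0.5, 1.5, 50), np.linspace(0.1, 0.4, 20))
        v_dense = griddata(pts, vel, (yy, zz), method="cubic")
        print(float(np.nanmax(v_dense)))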

  15. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  16. Testing of Track Point Resolution of Gas Electron Multiplier with Pion Beam at CERN SPS

    CERN Document Server

    Adak, R P; Das, S; Dubey, A K; Ganti, M S; Saini, J; Singaraju, R

    2015-01-01

    A muon detection system using a segmented and instrumented absorber has been designed for high-energy heavy-ion collision experiments to be held at GSI, Darmstadt, Germany. The muon detector system is mounted downstream of a Silicon Tracking System (STS). The reconstructed tracks from the STS are to be matched to the hits in the GEM detector. For reconstructing tracks in the GEM detector, the track point resolution is an important issue. We report here for the first time the track point resolution of the GEM detector.

  17. Mechanistic spatio-temporal point process models for marked point processes, with a view to forest stand data

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger

    We show how a spatial point process, where each point has an associated random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees......, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum...

  18. C-point and V-point singularity lattice formation and index sign conversion methods

    Science.gov (United States)

    Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.

    2017-06-01

    The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution, with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by Poincare-Hopf indices of integer value. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change but a change in the whole spatial SOP distribution. Further, there is no change in the lattice structure, and the C- and V-points appear at the locations where they were present earlier. Hence to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly, a positive V-point can be converted to a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an

  19. The resolution of field identification fixed points in diagonal coset theories

    International Nuclear Information System (INIS)

    Fuchs, J.; Schellekens, B.; Schweigert, C.

    1995-09-01

    The fixed point resolution problem is solved for diagonal coset theories. The primary fields into which the fixed points are resolved are described by submodules of the branching spaces, obtained as eigenspaces of the automorphisms that implement field identification. To compute the characters and the modular S-matrix we use ''orbit Lie algebras'' and ''twining characters'', which were introduced in a previous paper. The characters of the primary fields are expressed in terms of branching functions of twining characters. This allows us to express the modular S-matrix through the S-matrices of the orbit Lie algebras associated to the identification group. Our results can be extended to the larger class of ''generalized diagonal cosets''. (orig.)

  20. Sensitivity of point scale surface runoff predictions to rainfall resolution

    Directory of Open Access Journals (Sweden)

    A. J. Hearman

    2007-01-01

    Full Text Available This paper investigates the effects of using non-linear, high resolution rainfall, compared to time averaged rainfall, on the triggering of hydrologic thresholds and therefore model predictions of infiltration excess and saturation excess runoff at the point scale. The bounded random cascade model, parameterized to three locations in Western Australia, was used to scale rainfall intensities at various time resolutions ranging from 1.875 min to 2 h. A one dimensional, conceptual rainfall partitioning model was used that instantaneously partitioned water into infiltration excess, infiltration, storage, deep drainage, saturation excess and surface runoff, where the fluxes into and out of the soil store were controlled by thresholds. The results of the numerical modelling were scaled by relating soil infiltration properties to soil draining properties, and in turn, relating these to average storm intensities. For all soil types, we related maximum infiltration capacities to average storm intensities (k*) and were able to show where model predictions of infiltration excess were most sensitive to rainfall resolution (ln k* = 0.4) and where using time averaged rainfall data can lead to an under prediction of infiltration excess and an over prediction of the amount of water entering the soil (ln k* > 2) for all three rainfall locations tested. For soils susceptible to both infiltration excess and saturation excess, total runoff sensitivity was scaled by relating drainage coefficients to average storm intensities (g*) and parameter ranges where predicted runoff was dominated by infiltration excess or saturation excess depending on the resolution of rainfall data were determined (ln g* < 2). Infiltration excess predicted from high resolution rainfall was short and intense, whereas saturation excess produced from low resolution rainfall was more constant and less intense. This has important implications for the accuracy of current hydrological models that use time

  1. Three-dimensional digital imaging based on shifted point-array encoding.

    Science.gov (United States)

    Tian, Jindong; Peng, Xiang

    2005-09-10

    An approach to three-dimensional (3D) imaging based on shifted point-array encoding is presented. A kind of point-array structure light is projected sequentially onto the reference plane and onto the object surface to be tested and thus forms a pair of point-array images. A mathematical model is established to formulize the imaging process with the pair of point arrays. This formulation allows for a description of the relationship between the range image of the object surface and the lateral displacement of each point in the point-array image. Based on this model, one can reconstruct each 3D range image point by computing the lateral displacement of the corresponding point on the two point-array images. The encoded point array can be shifted digitally along both the lateral and the longitudinal directions step by step to achieve high spatial resolution. Experimental results show good agreement with the theoretical predictions. This method is applicable for implementing 3D imaging of object surfaces with complex topology or large height discontinuities.
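
    A highly simplified sketch of the displacement-to-range idea described above: detect each projected spot in the reference and object images, pair each object spot with its nearest reference spot, and convert the lateral shift to height through a calibration factor. The linear calibration constant used here is a hypothetical stand-in for the paper's full imaging model.

        import numpy as np
        from scipy import ndimage

        def spot_centroids(img, thresh):
            # Centroids of the bright spots in a point-array image.
            labels, n = ndimage.label(img > thresh)
            return np.array(ndimage.center_of_mass(img, labels, range(1, n + 1)))

        def range_from_displacement(ref_img, obj_img, k_mm_per_px, thresh=0.5):
            # Pair each object spot with the nearest reference spot and convert the lateral
            # displacement (in pixels) to height via a hypothetical linear calibration factor.
            ref, obj = spot_centroids(ref_img, thresh), spot_centroids(obj_img, thresh)
            heights = []
            for c in obj:
                j = np.argmin(np.linalg.norm(ref - c, axis=1))
                heights.append(k_mm_per_px * np.linalg.norm(c - ref[j]))
            return np.array(heights)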

  2. From Point Clouds to Definitions of Architectural Space

    DEFF Research Database (Denmark)

    Tamke, Martin; Blümel, Ina; Ochmann, Sebastian

    2014-01-01

    Regarding interior building topology as an important aspect in building design and management, several approaches to indoor point cloud structuring have been introduced recently. Apart from a high-level semantic segmentation of the formerly unstructured point clouds into stories and rooms...... possible applications of these approaches in architectural design and building management and comment on the possible benefits for the building profession. While contemporary practice of spatial arrangement is predominantly based on the manual iteration of spatial topologies, we show that the segmentation...

  3. Georeferenced Point Clouds: A Survey of Features and Point Cloud Management

    Directory of Open Access Journals (Sweden)

    Johannes Otepka

    2013-10-01

    Full Text Available This paper presents a survey of georeferenced point clouds. Concentration is, on the one hand, put on features, which originate in the measurement process themselves, and features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed. This includes the data structures, but also spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.

  4. Second-order analysis of inhomogeneous spatial point processes with proportional intensity functions

    DEFF Research Database (Denmark)

    Guan, Yongtao; Waagepetersen, Rasmus; Beale, Colin M.

    2008-01-01

    of the intensity functions. The first approach is based on nonparametric kernel-smoothing, whereas the second approach uses a conditional likelihood estimation approach to fit a parametric model for the pair correlation function. A great advantage of the proposed methods is that they do not require the often...... to two spatial point patterns regarding the spatial distributions of birds in the U.K.'s Peak District in 1990 and 2004....

  5. Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2010-02-01

    We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.

  6. High spatial resolution mapping of folds and fractures using Unmanned Aerial Vehicle (UAV) photogrammetry

    Science.gov (United States)

    Cruden, A. R.; Vollgger, S.

    2016-12-01

    The emerging capability of UAV photogrammetry combines a simple and cost-effective method to acquire digital aerial images with advanced computer vision algorithms that compute spatial datasets from a sequence of overlapping digital photographs taken from various viewpoints. Depending on flight altitude and camera setup, sub-centimeter spatial resolution orthophotographs and textured dense point clouds can be achieved. Orientation data can be collected for detailed structural analysis by digitally mapping such high-resolution spatial datasets in a fraction of the time and with higher fidelity compared to traditional mapping techniques. Here we describe a photogrammetric workflow applied to a structural study of folds and fractures within alternating layers of sandstone and mudstone at a coastal outcrop in SE Australia. We surveyed this location using a downward-looking digital camera mounted on a commercially available multi-rotor UAV that autonomously followed waypoints at a set altitude and speed to ensure sufficient image overlap, minimum motion blur and an appropriate resolution. The use of surveyed ground control points allowed us to produce a geo-referenced 3D point cloud and an orthophotograph from hundreds of digital images; structural orientation data were then automatically extracted from these high-resolution datasets using open-source software. This resulted in an extensive and statistically relevant orientation dataset that was used to (1) interpret the progressive development of folds and faults in the region, and (2) generate a 3D structural model that underlines the complex internal structure of the outcrop and quantifies spatial variations in fold geometries. Overall, our work highlights how UAV photogrammetry can contribute to new insights in structural analysis.

  7. The automaticity of vantage point shifts within a synaesthete's spatial calendar.

    Science.gov (United States)

    Jarick, Michelle; Jensen, Candice; Dixon, Michael J; Smilek, Daniel

    2011-09-01

    Time-space synaesthetes report that time units (e.g., months, days, hours) occupy idiosyncratic spatial locations. For the synaesthete (L), the months of the year are projected out in external space in the shape of a 'scoreboard 7', where January to July extend across the top from left to right and August to December make up the vertical segment from top to bottom. Interestingly, L can change the mental vantage point (MVP) from where she views her month-space depending on whether she sees or hears the month name. We used a spatial cueing task to demonstrate that L's attention could be directed to locations within her time-space and change vantage points automatically - from trial to trial. We also sought to eliminate any influence of strategy on L's performance by shortening the interval between the cue and target onset to only 150 ms, and have the targets fall in synaesthetically cued locations on only 15% of trials. If L's performance was attributable to intentionally using the cue to predict target location, these manipulations should eliminate any cueing effects. In two separate experiments, we found that L still showed an attentional bias consistent with her synaesthesia. Thus, we attribute L's rapid and resilient cueing effects to the automaticity of her spatial forms. ©2011 The British Psychological Society.

  8. Genetic interaction analysis of point mutations enables interrogation of gene function at a residue-level resolution

    Science.gov (United States)

    Braberg, Hannes; Moehle, Erica A.; Shales, Michael; Guthrie, Christine; Krogan, Nevan J.

    2014-01-01

    We have achieved a residue-level resolution of genetic interaction mapping – a technique that measures how the function of one gene is affected by the alteration of a second gene – by analyzing point mutations. Here, we describe how to interpret point mutant genetic interactions, and outline key applications for the approach, including interrogation of protein interaction interfaces and active sites, and examination of post-translational modifications. Genetic interaction analysis has proven effective for characterizing cellular processes; however, to date, systematic high-throughput genetic interaction screens have relied on gene deletions or knockdowns, which limits the resolution of gene function analysis and poses problems for multifunctional genes. Our point mutant approach addresses these issues, and further provides a tool for in vivo structure-function analysis that complements traditional biophysical methods. We also discuss the potential for genetic interaction mapping of point mutations in human cells and its application to personalized medicine. PMID:24842270

  9. Focusing: coming to the point in metamaterials

    Science.gov (United States)

    Guenneau, S.; Diatta, A.; McPhedran, R. C.

    2010-04-01

    This paper reviews some properties of lenses in curved and folded optical spaces. The point of the paper is to show some limitations of geometrical optics in the analysis of subwavelength focusing. We first provide a comprehensive derivation for the equation of geodesics in curved optical spaces, which is a tool of choice to design metamaterials in transformation optics. We then analyse the resolution of the image of a line source radiating in the Maxwell fisheye and the Veselago-Pendry slab lens. The former optical medium is deduced from the stereographic projection of a virtual sphere and displays a heterogeneous refractive index n(r) which is proportional to the inverse of 1 + r². The latter is described by a homogeneous, but negative, refractive index. It has been suggested that the fisheye makes a perfect lens without negative refraction [Leonhardt, Philbin arxiv:0805.4778v2]. However, we point out that the definition of super-resolution in such a heterogeneous medium should be computed with respect to the wavelength in a homogenised medium, and it is perhaps more adequate to talk about a conjugate image rather than a perfect image (the former does not necessarily contain the evanescent components of the source). We numerically find that both the Maxwell fisheye and a thick silver slab lens lead to a resolution close to λ/3 in transverse magnetic polarisation (electric field pointing orthogonal to the plane). We note a shift of the image plane in the latter lens. We also observe that two sources lead to multiple secondary images in the former lens, as confirmed from light rays travelling along geodesics of the virtual sphere. We further observe resolutions ranging from λ/2 to nearly λ/4 for magnetic dipoles of varying orientations of dipole moments within the fisheye in transverse electric polarisation (magnetic field pointing orthogonal to the plane). Finally, we analyse the Eaton lens for which the source and its image are either located within a unit
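
    Written out, the fisheye profile referred to above is the classical Maxwell fish-eye index, with n₀ and the scale radius R generic constants rather than values from the paper:

        n(r) = n₀ / [1 + (r/R)²]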

  10. Fast covariance estimation for innovations computed from a spatial Gibbs point process

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Rubak, Ege

    In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudo...

  11. Spatial resolution in visual memory.

    Science.gov (United States)

    Ben-Shalom, Asaf; Ganel, Tzvi

    2015-04-01

    Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.

  12. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-02-01

    Full Text Available To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projections onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpretation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important in order to estimate the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images can be proven. In addition, the novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality of reconstructed images than that produced by the blind SR method and the bicubic interpolation method.
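
    As a rough illustration of the (non-slanted) knife-edge idea behind the PSF estimate: average an image of a vertical step edge into an edge-spread function, differentiate it to obtain the line-spread function, and take a Gaussian width from its second moment. The slanted-edge refinement used in the paper, which samples the edge at sub-pixel phases, is not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def knife_edge_sigma(edge_img):
            # Edge-spread function: average the rows of an image of a vertical step edge.
            esf = edge_img.mean(axis=0)
            # Line-spread function: derivative of the ESF (a 1-D profile of the PSF).
            lsf = np.abs(np.gradient(esf))
            lsf /= lsf.sum()
            # Gaussian sigma from the second moment of the LSF.
            x = np.arange(lsf.size)
            mu = (x * lsf).sum()
            return np.sqrt(((x - mu) ** 2 * lsf).sum())

        # Synthetic step edge blurred with a sigma = 2 pixel Gaussian PSF.
        step = np.tile(np.r_[np.zeros(32), np.ones(32)], (16, 1))
        blurred = gaussian_filter1d(step, sigma=2.0, axis=1)
        print(knife_edge_sigma(blurred))   # approximately 2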

  13. Recent progress in fission at saddle point and scission point

    International Nuclear Information System (INIS)

    Blons, J.; Paya, D.; Signarbieux, C.

    High resolution measurements of 230Th and 232Th fission cross sections for neutrons exhibit a fine structure. Such a structure is interpreted as a superposition of two rotational bands in the third, asymmetric, well of the fission barrier. The fragment mass distribution in the thermal fission of 235U and 233U does not show any even-odd effect, even at the highest kinetic energies. This is the mark of a strong viscosity in the descent from saddle point to scission point [fr

  14. X-ray diffractometry with spatial resolution

    International Nuclear Information System (INIS)

    Zeiner, K.

    1981-04-01

    X-ray diffractometry is one of the most extensively used methods for investigation of the crystalline structure of materials. The line shape and position of a diffracted line are influenced by grain size, deformation and stress. Spatial resolution of one of these specimen characteristics is usually achieved by using point-focused X-ray beams and subsequently analyzing different specimen positions. This work uses the method of image reconstruction from projections for the generation of distribution maps. Additional experimental requirements when using a conventional X-ray goniometer are a specimen scanning unit and a computer. The scanning unit repeatedly performs a number of translation steps followed by a rotation step in a fixed X-ray tube/detector (position sensitive detector) arrangement. At each specimen position a diffraction line is recorded using a line-shaped X-ray beam. This network of diffraction lines (showing line resolution) is mathematically converted to a distribution map of diffraction lines, thus giving point resolution. Specimen areas of up to several cm² may be analyzed with a linear resolution of 0.1 to 1 mm. Image reconstruction from projections must be modified for the generation of ''function-maps''. This theory is discussed and demonstrated by computer simulations. Diffraction line analysis is done for specimen deformation using a deconvolution procedure. The theoretical considerations are experimentally verified. (author)

  15. Spatial resolution of the HRRT PET scanner using 3D-OSEM PSF reconstruction

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Sibomana, Merence; Keller, Sune Høgild

    2009-01-01

    The spatial resolution of the Siemens High Resolution Research Tomograph (HRRT) dedicated brain PET scanner installed at Copenhagen University Hospital (Rigshospitalet) was measured using a point-source phantom with high statistics. Further, it was demonstrated how the newly developed 3D-OSEM PSF...

  16. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image. To do this, corresponding points between the image and the point cloud must be found. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, and the exterior orientation parameters of the image are calculated from these correspondences. The second step is construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image, and the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in case of building facade modeling.
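
    The corresponding-point search described above can be sketched with off-the-shelf SIFT matching between the rendered quasi-image of the point cloud and the photograph; the file names are placeholders, and the original workflow then feeds the matches into a space resection to recover the exterior orientation parameters.

        import cv2

        # Quasi-image rendered from the point cloud and the real photograph (placeholder paths).
        quasi = cv2.imread("quasi_image.png", cv2.IMREAD_GRAYSCALE)
        photo = cv2.imread("facade_photo.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(quasi, None)
        kp2, des2 = sift.detectAndCompute(photo, None)

        # Brute-force matching with Lowe's ratio test to keep only distinctive correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
        pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
        print(len(pairs), "candidate correspondences")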

  17. Investigating Spatial Patterns of Persistent Scatterer Interferometry Point Targets and Landslide Occurrences in the Arno River Basin

    Directory of Open Access Journals (Sweden)

    Ping Lu

    2014-07-01

    Full Text Available Persistent Scatterer Interferometry (PSI has been widely used for landslide studies in recent years. This paper investigated the spatial patterns of PSI point targets and landslide occurrences in the Arno River basin in Central Italy. The main purpose is to analyze whether spatial patterns of Persistent Scatterers (PS can be recognized as indicators of landslide occurrences throughout the whole basin. The bivariate K-function was employed to assess spatial relationships between PS and landslides. The PSI point targets were acquired from almost 4 years (from March 2003 to January 2007 of RADARSAT-1 images. The landslide inventory was collected from 15 years (from 1992–2007 of surveying and mapping data, mainly including remote sensing data, topographic maps and field investigations. The proposed approach is able to assess spatial patterns between a variety of PS and landslides, in particular, to understand if PSI point targets are spatially clustered (spatial attraction or randomly distributed (spatial independency on various types of landslides across the basin. Additionally, the degree and scale distances of PS clustering on a variety of landslides can be characterized. The results rejected the null hypothesis that PSI point targets appear to cluster similarly on four types of landslides (slides, flows, falls and creeps in the Arno River basin. Significant influence of PS velocities and acquisition orbits can be noticed on detecting landslides with different states of activities. Despite that the assessment may be influenced by the quality of landslide inventory and Synthetic Aperture Radar (SAR images, the proposed approach is expected to provide guidelines for studies trying to detect and investigate landslide occurrences at a regional scale through spatial statistical analysis of PS, for which an advanced understanding of the impact of scale distances on landslide clustering is fundamentally needed.
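
    A bare-bones version of the bivariate K-function used in this analysis is sketched below; it omits the edge corrections that a real analysis of basin-scale data would require, and it assumes both point sets share a projected coordinate system.

        import numpy as np
        from scipy.spatial import cKDTree

        def cross_K(points_ps, points_ls, r_values, window_area):
            # K_12(r): expected number of landslide points within distance r of a typical PS
            # point, scaled by the landslide intensity (naive estimator, no edge correction).
            n1, n2 = len(points_ps), len(points_ls)
            tree = cKDTree(points_ls)
            counts = [sum(len(tree.query_ball_point(p, r)) for p in points_ps) for r in r_values]
            return window_area * np.asarray(counts, dtype=float) / (n1 * n2)

        # Under spatial independence K_12(r) is close to pi*r^2; values above that suggest
        # clustering of PS points on landslides, values below suggest repulsion.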

  18. Structured perceptual input imposes an egocentric frame of reference-pointing, imagery, and spatial self-consciousness.

    Science.gov (United States)

    Marcel, Anthony; Dobel, Christian

    2005-01-01

    Perceptual input imposes and maintains an egocentric frame of reference, which enables orientation. When blindfolded, people tended to mistake the assumed intrinsic axes of symmetry of their immediate environment (a room) for their own egocentric relation to features of the room. When asked to point to the door and window, known to be at mid-points of facing (or adjacent) walls, they pointed with their arms at 180 degrees (or 90 degrees) angles, irrespective of where they thought they were in the room. People did the same when requested to imagine the situation. They justified their responses (inappropriately) by logical necessity or a structural description of the room rather than (appropriately) by relative location of themselves and the reference points. In eight experiments, we explored the effect on this in perception and imagery of: perceptual input (without perceptibility of the target reference points); imaging oneself versus another person; aids to explicit spatial self-consciousness; order of questions about self-location; and the relation of targets to the axes of symmetry of the room. The results indicate that, if one is deprived of structured perceptual input, as well as losing one's bearings, (a) one is likely to lose one's egocentric frame of reference itself, and (b) instead of pointing to reference points, one demonstrates their structural relation by adopting the intrinsic axes of the environment as one's own. This is prevented by providing noninformative perceptual input or by inducing subjects to imagine themselves from the outside, which makes explicit the fact of their being located relative to the world. The role of perceptual contact with a structured world is discussed in relation to sensory deprivation and imagery, appeal is made to Gibson's theory of joint egoreception and exteroception, and the data are related to recent theories of spatial memory and navigation.

  19. Testing Local Independence between Two Point Processes

    DEFF Research Database (Denmark)

    Allard, Denis; Brix, Anders; Chadæuf, Joël

    2001-01-01

    Independence test, Inhomogeneous point processes, Local test, Monte Carlo, Nonstationary, Rotations, Spatial pattern, Tiger bush

  20. Point-to-point people with purpose—Exploring the possibility of a commercial traveler market for point-to-point suborbital space transportation

    Science.gov (United States)

    Webber, Derek

    2013-12-01

    An argument was made at the First Arcachon Conference on Private Human Access to Space in 2008 [1] that some systematic market research should be conducted into potential market segments for point-to-point suborbital space transportation (PtP), in order to understand whether a commercial market exists which might augment possible government use for such a vehicle. The cargo market potential was subsequently addressed via desk research, and the results, which resulted in a pessimistic business case outlook, were presented in [2]. The same desk research approach is now used in this paper to address the potential business and wealthy individual passenger traveler market segment ("point-to-point people with purpose"). The results, with the assumed ticket pricing, are not encouraging.

  1. The point-spread function measure of resolution for the 3-D electrical resistivity experiment

    Science.gov (United States)

    Oldenborger, Greg A.; Routh, Partha S.

    2009-02-01

    The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveal significant spatial dependence of the resolution with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5m in the vertical direction and less than 1-2.5m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.
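
    In this linearized setting the PSF for one model cell is a single column of the resolution matrix and can be obtained without forming the matrix explicitly. The sketch below uses a dense Jacobian and simple damped least squares purely for illustration; the paper's implementation instead relies on stored Green's functions, subgrid reduction and an iterative solver.

        import numpy as np

        def point_spread_function(J, lam, k):
            # Column k of the resolution matrix R = (J^T J + lam*I)^(-1) J^T J, i.e. the model
            # recovered by the regularised inversion when the true model is a unit spike at cell k.
            n = J.shape[1]
            e_k = np.zeros(n); e_k[k] = 1.0
            lhs = J.T @ J + lam * np.eye(n)
            return np.linalg.solve(lhs, J.T @ (J @ e_k))

        # Tiny example: a random 50x20 sensitivity matrix; the PSF shows how a spike in
        # parameter 7 is blurred into neighbouring parameters by the inversion.
        rng = np.random.default_rng(1)
        J = rng.normal(size=(50, 20))
        print(point_spread_function(J, lam=1.0, k=7).round(2))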

  2. May 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from Dana Point to Point La...

  3. September 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from Dana Point to Point La...

  4. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    Science.gov (United States)

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulation of total nitrogen (TN) loss intensity for all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contribution of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order of monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, while the effect of precipitation and runoff on nitrogen loss was evident only in months without fertilization and in several storm flood processes occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  5. Sub-spatial resolution position estimation for optical fibre sensing applications

    DEFF Research Database (Denmark)

    Zibar, Darko; Werzinger, Stefan; Schmauss, Bernhard

    2017-01-01

    Methods from machine learning community are employed for estimating the position of fibre Bragg gratings in an array. Using the conventional methods for position estimation, based on inverse discrete Fourier transform (IDFT), it is required that two-point spatial resolution is less than gratings...... of reflection coefficients and the positions is performed. From the practical point of view, we can demonstrate the reduction of the interrogator's bandwidth by factor of 2. The technique is demonstrated for incoherent optical frequency domain reflectometry (IOFDR). However, the approach is applicable to any...

  6. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    Science.gov (United States)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

    The Ozone Monitoring Instrument (OMI) on NASA Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from top 100 pollution sources in North America. The technique requires averaging of several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area - within 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. TROPOMI instrument on the ESA Sentinel 5 precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to once a day measurement. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking the measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulating GEOCAPE measurement requirements.

  7. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source

    DEFF Research Database (Denmark)

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten

    2015-01-01

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial...... the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use....

  8. Self-Similar Spin Images for Point Cloud Matching

    Science.gov (United States)

    Pulido, Daniel

    based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two points clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case as being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
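
    The order-statistic idea can be sketched directly with a k-d tree: compute every nearest-neighbour distance from one cloud to the other and inspect the sorted values, of which the (directed) Hausdorff distance is simply the maximum. The percentile choices and synthetic clouds below are illustrative only.

        import numpy as np
        from scipy.spatial import cKDTree

        def nn_order_statistics(cloud_a, cloud_b, percentiles=(50, 90, 99, 100)):
            # Nearest-neighbour distances from every point of A to cloud B; the 100th
            # percentile equals the directed Hausdorff distance from A to B.
            d, _ = cKDTree(cloud_b).query(cloud_a)
            return dict(zip(percentiles, np.percentile(d, percentiles)))

        rng = np.random.default_rng(0)
        a = rng.uniform(size=(1000, 3))
        b = np.vstack([a + 0.01 * rng.normal(size=a.shape),    # slightly perturbed copy of A
                       rng.uniform(size=(50, 3)) + 2.0])        # new content absent from A
        print(nn_order_statistics(b, a))   # large upper percentiles flag the changed points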

  9. 52 Million Points and Counting: A New Stratification Approach for Mapping Global Marine Ecosystems

    Science.gov (United States)

    Wright, D. J.; Sayre, R.; Breyer, S.; Butler, K. A.; VanGraafeiland, K.; Goodin, K.; Kavanaugh, M.; Costello, M. J.; Cressie, N.; Basher, Z.; Harris, P. T.; Guinotte, J. M.

    2016-12-01

    We report progress on the Ecological Marine Units (EMU) project, a new undertaking commissioned by the Group on Earth Observations (GEO) as a means of developing a standardized and practical global ecosystems classification and map for the oceans, and thus a key outcome of the GEO Biodiversity Observation Network (GEO BON). The project is one of four components of the new GI-14 GEO Ecosystems Initiative within the GEO 2016 Transitional Work plan, and for eventual use by the Global Earth Observation System of Systems (GEOSS). The project is also the follow-on to a comprehensive Ecological Land Units project (ELU), also commissioned by GEO. The EMU is comprised of a global point mesh framework, created from 52,487,233 points from the NOAA World Ocean Atlas; spatial resolution is ¼° by ¼° by varying depth; temporal resolution is currently decadal; each point has x, y, z, as well as six attributes of chemical and physical oceanographic structure (temperature, salinity, dissolved oxygen, nitrate, silicate, phosphate) that are likely drivers of many ecosystem responses. We implemented a k-means statistical clustering of the point mesh (using the pseudo-F statistic to help determine the numbers of clusters), allowing us to identify and map 37 environmentally distinct 3D regions (candidate `ecosystems') within the water column. These units can be attributed according to their productivity, direction and velocity of currents, species abundance, global seafloor geomorphology (from Harris et al.), and much more. A series of data products for open access will share the 3D point mesh and EMU clusters at the surface, bottom, and within the water column, as well as 2D and 3D web apps for exploration of the EMUs and the original World Ocean Atlas data. Future plans include a global delineation of Ecological Coastal Units (ECU) at a much finer spatial resolution (not yet commenced), as well as global ecological freshwater ecosystems (EFUs; in earliest planning stages). We will
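
    The clustering step can be sketched with a standard k-means plus the Calinski-Harabasz (pseudo-F) criterion for choosing the number of clusters; the candidate cluster counts and the synthetic six-column attribute matrix below are placeholders for the World Ocean Atlas variables listed above.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import calinski_harabasz_score
        from sklearn.preprocessing import StandardScaler

        # Placeholder for the mesh attributes: temperature, salinity, dissolved oxygen,
        # nitrate, silicate, phosphate (one row per 3-D mesh point).
        rng = np.random.default_rng(0)
        X = StandardScaler().fit_transform(rng.normal(size=(5000, 6)))

        scores = {}
        for k in (10, 20, 30, 37, 40):                        # candidate numbers of clusters
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            scores[k] = calinski_harabasz_score(X, labels)    # pseudo-F statistic
        print(scores, "-> choose k =", max(scores, key=scores.get))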

  10. Spatial Stochastic Point Models for Reservoir Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Syversveen, Anne Randi

    1997-12-31

    The main part of this thesis discusses stochastic modelling of geology in petroleum reservoirs. A marked point model is defined for objects against a background in a two-dimensional vertical cross section of the reservoir. The model handles conditioning on observations from more than one well for each object and contains interaction between objects, and the objects have the correct length distribution when penetrated by wells. The model is developed in a Bayesian setting. The model and the simulation algorithm are demonstrated by means of an example with simulated data. The thesis also deals with object recognition in image analysis, in a Bayesian framework, and with a special type of spatial Cox processes called log-Gaussian Cox processes. In these processes, the logarithm of the intensity function is a Gaussian process. The class of log-Gaussian Cox processes provides flexible models for clustering. The distribution of such a process is completely characterized by the intensity and the pair correlation function of the Cox process. 170 refs., 37 figs., 5 tabs.
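
    For a log-Gaussian Cox process driven by a Gaussian field with mean function μ and covariance function c, the two characterising quantities mentioned above take the standard form (standard theory, not text quoted from the thesis):

        ρ(u) = exp{ μ(u) + c(u,u)/2 },        g(u,v) = exp{ c(u,v) }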

  11. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    Science.gov (United States)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, so that all points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene by looking up the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
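
    A compact sketch of the voting step: quantise the labelled reference points into voxels of a chosen size, take the majority label per voxel, then label the points of a new scan by looking up the voxel they fall in. The voxel size and the fallback label are assumptions, and the octree multi-resolution hierarchy is collapsed to a single level here.

        import numpy as np
        from collections import Counter, defaultdict

        def build_label_grid(ref_points, ref_labels, voxel_size):
            # Majority label of the reference points falling into each voxel.
            votes = defaultdict(Counter)
            for p, lab in zip(ref_points, ref_labels):
                votes[tuple(np.floor(p / voxel_size).astype(int))][lab] += 1
            return {vox: cnt.most_common(1)[0][0] for vox, cnt in votes.items()}

        def label_new_points(new_points, label_grid, voxel_size, unknown=-1):
            # Transfer labels to a newly acquired scan of the same (registered) scene.
            keys = np.floor(new_points / voxel_size).astype(int)
            return np.array([label_grid.get(tuple(k), unknown) for k in keys])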

  12. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of the point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite numbers of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of the transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas are obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  13. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  14. Detecting determinism from point processes.

    Science.gov (United States)

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  15. Defect production due to quenching through a multicritical point

    International Nuclear Information System (INIS)

    Divakaran, Uma; Mukherjee, Victor; Dutta, Amit; Sen, Diptiman

    2009-01-01

    We study the generation of defects when a quantum spin system is quenched through a multicritical point by changing a parameter of the Hamiltonian as t/τ, where τ is the characteristic timescale of quenching. We argue that when a quantum system is quenched across a multicritical point, the density of defects (n) in the final state is not necessarily given by the Kibble–Zurek scaling form n ∼ 1/τ^(dν/(zν+1)), where d is the spatial dimension, and ν and z are respectively the correlation length and dynamical exponents associated with the quantum critical point. We propose a generalized scaling form of the defect density given by n ∼ 1/τ^(d/(2z₂)), where the exponent z₂ determines the behavior of the off-diagonal term of the 2 × 2 Landau–Zener matrix at the multicritical point. This scaling is valid not only at a multicritical point but also at an ordinary critical point

  16. Measurement of gamma quantum interaction point in plastic scintillator with WLS strips

    Energy Technology Data Exchange (ETDEWEB)

    Smyrski, J., E-mail: smyrski@if.uj.edu.pl [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, S. Łojasiewicza 11, 30-348 Cracow (Poland); Alfs, D.; Bednarski, T.; Białas, P.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gupta-Sharma, N. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, S. Łojasiewicza 11, 30-348 Cracow (Poland); Gorgol, M.; Jasińska, B. [Department of Nuclear Methods, Institute of Physics, Maria Curie-Sklodowska University, 20-031 Lublin (Poland); Kajetanowicz, M.; Kamińska, D.; Korcyl, G. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, S. Łojasiewicza 11, 30-348 Cracow (Poland); Kowalski, P. [Świerk Computing Centre, National Centre for Nuclear Research, 05-400 Otwock-Świerk (Poland); Krzemień, W. [High Energy Department, National Centre for Nuclear Research, 05-400 Otwock-Świerk (Poland); Krawczyk, N.; Kubicz, E.; Mohammed, M.; Niedźwiecki, Sz. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, S. Łojasiewicza 11, 30-348 Cracow (Poland); and others

    2017-04-11

    The feasibility of measuring the axial coordinate of a gamma quantum interaction point in a plastic scintillator bar via the detection of scintillation photons escaping from the scintillator with an array of wavelength-shifting (WLS) strips is demonstrated. Using a test set-up comprising a BC-420 scintillator bar and an array of sixteen BC-482A WLS strips we achieved a spatial resolution of 5 mm (σ) for annihilation photons from a {sup 22}Na isotope. The studied method can be used to improve the spatial resolution of a plastic-scintillator-based PET scanner which is being developed by the J-PET collaboration.

  17. INTERSECTION DETECTION BASED ON QUALITATIVE SPATIAL REASONING ON STOPPING POINT CLUSTERS

    Directory of Open Access Journals (Sweden)

    S. Zourlidou

    2016-06-01

    Full Text Available The purpose of this research is to propose and test a method for detecting intersections by analysing collectively acquired trajectories of moving vehicles. Instead of solely relying on the geometric features of the trajectories, such as heading changes, which may indicate turning points and consequently intersections, we extract semantic features of the trajectories in the form of sequences of stops and moves. Under this spatiotemporal prism, the extracted semantic information, which indicates where vehicles stop, can reveal important locations such as junctions. The advantage of the proposed approach in comparison with existing turning-point oriented approaches is that it can detect intersections even when not all the crossing road segments are sampled and therefore no turning points are observed in the trajectories. The challenge with this approach is, first, that not all vehicles stop at the same location – thus the stop-location is blurred along the direction of the road; this, secondly, leads to the effect that nearby junctions can induce similar stop-locations. As a first step, a density-based clustering is applied on the layer of stop observations and clusters of stop events are found. Representative points of the clusters are determined (one per cluster) and in a last step the existence of an intersection is clarified based on spatial relational cluster reasoning, with which less informative geospatial clusters, in terms of whether a junction exists and where its centre lies, are transformed into more informative ones. Relational reasoning criteria, based on the relative orientation of the clusters with respect to their adjacent ones, are discussed for making sense of the relation that connects them, and finally for forming groups of stop events that belong to the same junction.
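
    The density-based clustering step described above can be sketched with an off-the-shelf algorithm such as DBSCAN; the eps/min_samples parameters, the use of cluster centroids as representative points, and the synthetic stop events are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_stop_events(stops_xy, eps_m=25.0, min_samples=5):
    """Density-based clustering of stop locations (metres, projected CRS).
    Returns cluster labels (-1 = noise) and one representative point
    (here the centroid) per cluster."""
    labels = DBSCAN(eps=eps_m, min_samples=min_samples).fit_predict(stops_xy)
    reps = {lab: stops_xy[labels == lab].mean(axis=0)
            for lab in set(labels) if lab != -1}
    return labels, reps

# Illustrative usage: three artificial stop clusters near a hypothetical junction.
rng = np.random.default_rng(2)
junction_arms = np.array([[0.0, -40.0], [40.0, 0.0], [0.0, 40.0]])
stops = np.vstack([arm + rng.normal(scale=8.0, size=(30, 2)) for arm in junction_arms])
labels, reps = cluster_stop_events(stops)
print(len(reps), "stop clusters; representatives:", {k: v.round(1) for k, v in reps.items()})
```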

  18. LiDAR The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2 % validated through ground-truthing.

  19. Model for Semantically Rich Point Cloud Data

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The developed Smart Point Cloud model brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the adaptation of the model to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python with a PostgreSQL database, allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
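
    The kind of basic hybrid (semantic plus spatial) query mentioned above can be illustrated with a minimal relational sketch; the schema, class names and thresholds below are hypothetical and use SQLite only so that the example is self-contained, not the authors' Python/PostgreSQL prototype.

```python
import sqlite3

# Minimal illustrative schema: one table of points with a semantic class,
# queried by combining a spatial (bounding-box) and a semantic predicate.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE points (x REAL, y REAL, z REAL, class TEXT)")
con.executemany(
    "INSERT INTO points VALUES (?, ?, ?, ?)",
    [(1.0, 2.0, 0.1, "ground"),
     (1.2, 2.1, 3.5, "building"),
     (5.0, 5.5, 2.8, "vegetation"),
     (1.1, 2.3, 4.0, "building")],
)

# Hybrid query: all 'building' points inside a 2D bounding box and above 3 m.
rows = con.execute(
    """SELECT x, y, z FROM points
       WHERE class = 'building'
         AND x BETWEEN 0 AND 2 AND y BETWEEN 1 AND 3
         AND z > 3.0"""
).fetchall()
print(rows)
```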

  20. MODEL FOR SEMANTICALLY RICH POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    F. Poux

    2017-10-01

    Full Text Available This paper proposes an interoperable model for managing high dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The developed Smart Point Cloud model brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the adaptation of the model to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python with a PostgreSQL database, allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

  1. Flow area optimization in point to area or area to point flows

    International Nuclear Information System (INIS)

    Ghodoossi, Lotfollah; Egrican, Niluefer

    2003-01-01

    This paper deals with the constructal theory of generation of shape and structure in flow systems connecting one point to a finite size area. The flow direction may be either from the point to the area or the area to the point. The formulation of the problem remains the same if the flow direction is reversed. Two models are used in optimization of the point to area or area to point flow problem: cost minimization and revenue maximization. The cost minimization model enables one to predict the shape of the optimized flow areas, but the geometric sizes of the flow areas are not predictable. That is, as an example, if the area of flow is a rectangle with a fixed area size, optimization of the point to area or area to point flow problem by using the cost minimization model will only predict the height/length ratio of the rectangle not the height and length itself. By using the revenue maximization model in optimization of the flow problems, all optimized geometric aspects of the interested flow areas will be derived as well. The aim of this paper is to optimize the point to area or area to point flow problems in various elemental flow area shapes and various structures of the flow system (various combinations of elemental flow areas) by using the revenue maximization model. The elemental flow area shapes used in this paper are either rectangular or triangular. The forms of the flow area structure, made up of an assembly of optimized elemental flow areas to obtain bigger flow areas, are rectangle-in-rectangle, rectangle-in-triangle, triangle-in-triangle and triangle-in-rectangle. The global maximum revenue, revenue collected per unit flow area and the shape and sizes of each flow area structure have been derived in optimized conditions. The results for each flow area structure have been compared with the results of the other structures to determine the structure that provides better performance. The conclusion is that the rectangle-in-triangle flow area structure

  2. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer Generated Holography (CGH) faces a great challenge in real-time holographic video display systems, because a high space-bandwidth product (SBP) must be produced. This paper is based on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source (the Gabor zone plate), which can therefore be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; secondly, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) are set up to demonstrate the validity of the proposed method; while preserving the quality of the 3D reconstruction, the method shortens the computation time and improves computational efficiency.
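
    The spatial symmetry exploited above can be made concrete with a small sketch of the point-source Fresnel fringe (Gabor zone plate): because the pattern depends only on x² + y², one quadrant suffices to rebuild the full fringe. Wavelength, pixel pitch and propagation distance are illustrative, and this is not the paper's N-LUT implementation.

```python
import numpy as np

# Fringe (Gabor zone plate) of a point source at distance z, wavelength lam.
lam = 532e-9          # wavelength [m] (illustrative)
z = 0.2               # propagation distance [m] (illustrative)
pitch = 8e-6          # hologram pixel pitch [m]
N = 512               # hologram samples per side

x = (np.arange(N) - (N - 1) / 2) * pitch      # symmetric sampling grid
X, Y = np.meshgrid(x, x)
r2 = X ** 2 + Y ** 2

# Fresnel approximation of the point-source phase; its cosine is the fringe.
phase = np.pi * r2 / (lam * z)
fringe = np.cos(phase)

# Spatial symmetry: the pattern depends on r^2 only, so one quadrant
# (or even a single radial profile) reproduces the full pattern.
quadrant = fringe[N // 2:, N // 2:]
rebuilt = np.block([[quadrant[::-1, ::-1], quadrant[::-1, :]],
                    [quadrant[:, ::-1],    quadrant]])
print(np.allclose(rebuilt, fringe))
```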

  3. High-spatial-resolution electron density measurement by Langmuir probe for multi-point observations using tiny spacecraft

    Science.gov (United States)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Trondsen, E.; Clausen, L. B. N.; Miloch, W. J.; Moen, J. I.

    2017-11-01

    A method for evaluating electron density using a single fixed-bias Langmuir probe is presented. The technique allows for high-spatio-temporal resolution electron density measurements, which can be effectively carried out by tiny spacecraft for multi-point observations in the ionosphere. The results are compared with the multi-needle Langmuir probe system, which is a scientific instrument developed at the University of Oslo comprising four fixed-bias cylindrical probes that allow small-scale plasma density structures to be characterized in the ionosphere. The technique proposed in this paper can comply with the requirements of future small-sized spacecraft, where the cost-effectiveness, limited space available on the craft, low power consumption and capacity for data-links need to be addressed. The first experimental results in both the plasma laboratory and space confirm the efficiency of the new approach. Moreover, detailed analyses on two challenging issues when deploying the DC Langmuir probe on a tiny spacecraft, which are the limited conductive area of the spacecraft and probe surface contamination, are presented in the paper. It is demonstrated that the limited conductive area, depending on applications, can either be of no concern for the experiment or can be resolved by mitigation methods. Surface contamination has a small impact on the performance of the developed probe.

  4. Coding and decoding in a point-to-point communication using the polarization of the light beam.

    Science.gov (United States)

    Kavehvash, Z; Massoumian, F

    2008-05-10

    A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by the detection of the received beam's polarization. By linking the concept of coding to polarization we can use each of these concepts in measuring the other one, attaining some gains. In this paper the construction of a simple point-to-point communication where coding and decoding is done through polarization will be discussed.

  5. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  6. Spatial Attention Is Attracted in a Sustained Fashion toward Singular Points in the Optic Flow

    Science.gov (United States)

    Wang, Shuo; Fukuchi, Masaki; Koch, Christof; Tsuchiya, Naotsugu

    2012-01-01

    While a single approaching object is known to attract spatial attention, it is unknown how attention is directed when the background looms towards the observer as s/he moves forward in a quasi-stationary environment. In Experiment 1, we used a cued speeded discrimination task to quantify where and how spatial attention is directed towards the target superimposed onto a cloud of moving dots. We found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The effects were less pronounced when the motion was contractive. The more ecologically valid the motion features became (e.g., temporal expansion of each dot, spatial depth structure implied by distribution of the size of the dots), the stronger the attentional effects. Further, the attentional effects were sustained over 1000 ms. Experiment 2 quantified these attentional effects using a change detection paradigm by zooming into or out of photographs of natural scenes. Spatial attention was attracted in a sustained manner such that change detection was facilitated or delayed depending on the location of the FOE only when the motion was expansive. Our results suggest that focal attention is strongly attracted towards singular points that signal the direction of forward ego-motion. PMID:22905096

  7. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  8. Evidence of Territoriality and Species Interactions from Spatial Point-Pattern Analyses of Subarctic-Nesting Geese

    Science.gov (United States)

    Reiter, Matthew E.; Andersen, David E.

    2013-01-01

    Quantifying spatial patterns of bird nests and nest fate provides insights into processes influencing a species’ distribution. At Cape Churchill, Manitoba, Canada, recent declines in breeding Eastern Prairie Population Canada geese (Branta canadensis interior) have coincided with increasing populations of nesting lesser snow geese (Chen caerulescens caerulescens) and Ross’s geese (Chen rossii). We conducted a spatial analysis of point patterns using Canada goose nest locations and nest fate, and lesser snow goose nest locations at two study areas in northern Manitoba with different densities and temporal durations of sympatric nesting Canada and lesser snow geese. Specifically, we assessed (1) whether Canada geese exhibited territoriality and at what scale and nest density; and (2) whether spatial patterns of Canada goose nest fate were associated with the density of nesting lesser snow geese as predicted by the protective-association hypothesis. Between 2001 and 2007, our data suggest that Canada geese were territorial at the scale of nearest neighbors, but were aggregated when considering the overall density of conspecifics at slightly broader spatial scales. The spatial distribution of nest fates indicated that lesser snow goose nest proximity and density likely influence Canada goose nest fate. Our analyses of spatial point patterns suggested that continued changes in the distribution and abundance of breeding lesser snow geese on the Hudson Bay Lowlands may have impacts on the reproductive performance of Canada geese, and subsequently the spatial distribution of Canada goose nests. PMID:24312520
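
    A hedged sketch of the sort of nearest-neighbour summary used to judge territorial spacing in spatial point patterns (not the authors' exact analysis): the observed mean nearest-neighbour distance is compared with a Monte Carlo envelope under complete spatial randomness. The window, sample size and nest coordinates are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour."""
    d, _ = cKDTree(points).query(points, k=2)   # k=2: the first hit is the point itself
    return d[:, 1].mean()

def csr_envelope(n_points, window, n_sims=999, rng=None):
    """Monte Carlo envelope of the statistic under complete spatial randomness."""
    rng = rng or np.random.default_rng(3)
    (xmin, xmax), (ymin, ymax) = window
    sims = []
    for _ in range(n_sims):
        pts = np.column_stack([rng.uniform(xmin, xmax, n_points),
                               rng.uniform(ymin, ymax, n_points)])
        sims.append(mean_nn_distance(pts))
    return np.quantile(sims, [0.025, 0.975])

# Illustrative nests: an observed mean NN distance above the CSR envelope
# suggests regularity (territorial spacing); below it suggests clustering.
rng = np.random.default_rng(4)
nests = rng.uniform(0, 1000, size=(60, 2))            # metres, hypothetical
lo, hi = csr_envelope(len(nests), ((0, 1000), (0, 1000)))
print(mean_nn_distance(nests), (lo, hi))
```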

  9. Self-exciting point process in modeling earthquake occurrences

    International Nuclear Information System (INIS)

    Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.

    2017-01-01

    In this paper, we present a procedure for modeling earthquake occurrences based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity process. The earthquakes can be regarded as point patterns that have a temporal clustering feature, so we use a self-exciting point process for modeling the conditional intensity function. The choice of main shocks is conducted via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
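
    The temporal core of a self-exciting model can be sketched as a Hawkes process with an exponential kernel, simulated by Ogata's thinning algorithm. The paper's model is spatial-temporal with a magnitude term, so this is only the temporal skeleton, and the parameters are illustrative.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    past = history[history < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(t_max, mu=0.1, alpha=0.5, beta=1.0, rng=None):
    """Ogata's thinning algorithm for a temporal self-exciting point process."""
    rng = rng or np.random.default_rng(5)
    events, t = [], 0.0
    while t < t_max:
        # Upper bound on the intensity just after t (decays until the next event).
        lam_bar = hawkes_intensity(t, np.array(events), mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t < t_max and rng.random() * lam_bar <= hawkes_intensity(t, np.array(events), mu, alpha, beta):
            events.append(t)
    return np.array(events)

quakes = simulate_hawkes(t_max=500.0)
print(len(quakes), "simulated events")
```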

  10. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    Science.gov (United States)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens

    2010-05-01

    LiDAR, also referred to as laser scanning, has proved to be an important tool for topographic data acquisition. Terrestrial laser scanning allows for accurate (several millimeter) and high resolution (several centimeter) data acquisition at distances of up to some hundred meters. By contrast, airborne laser scanning allows for acquiring homogeneous data for large areas, albeit with lower accuracy (decimeter) and resolution (some ten points per square meter) compared to terrestrial laser scanning. Hence, terrestrial laser scanning is preferably used for precise data acquisition of limited areas such as landslides or steep structures, while airborne laser scanning is well suited for the acquisition of topographic data of huge areas or even country wide. Laser scanners acquire more or less homogeneously distributed point clouds. These points represent natural objects like terrain and vegetation and artificial objects like buildings, streets or power lines. Typical products derived from such data are geometric models such as digital surface models representing all natural and artificial objects and digital terrain models representing the geomorphic topography only. As the LiDAR technology evolves, the amount of data produced increases almost exponentially even in smaller projects. This means a considerable challenge for the end user of the data: the experimenter has to have enough knowledge, experience and computer capacity in order to manage the acquired dataset and to derive geomorphologically relevant information from the raw or intermediate data products. Additionally, all this information might need to be integrated with other data like orthophotos. In all theses cases, in general, interactive interpretation is necessary to determine geomorphic structures from such models to achieve effective data reduction. There is little support for the automatic determination of characteristic features and their statistical evaluation. From the lessons learnt from automated

  11. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  12. State estimation for temporal point processes

    NARCIS (Netherlands)

    van Lieshout, Maria Nicolette Margaretha

    2015-01-01

    This paper is concerned with combined inference for point processes on the real line observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point processes. For a range of models, the marginal and

  13. Finding a single point of truth

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, S.; Thijssen, H. [Autodesk Inc, Toronto, ON (Canada); Laslo, D.; Martin, J. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Electric utilities collect large volumes of data at every level of their business, including SCADA, Smart Metering and Smart Grid initiatives, LIDAR and other 3D imagery surveys. Different types of database systems are used to store the information, rendering data flow within the utility business process extremely complicated. The industry trend has been to endure redundancy of data input and maintenance of multiple copies of the same data across different solution data sets. Efforts have been made to improve the situation with point to point interfaces, but with the tools and solutions available today, a single point of truth can be achieved. Consolidated and validated data can be published into a data warehouse at the right point in the process, making the information available to all other enterprise systems and solutions. This paper explained how the single point of truth spatial data warehouse and process automation services can be configured to streamline the flow of data within the utility business process using the initiate-plan-execute-close (IPEC) utility workflow model. The paper first discussed geospatial challenges faced by utilities and then presented the approach and technology aspects. It was concluded that adoption of systems and solutions that can function with and be controlled by the IPEC workflow can provide significant improvement for utility operations, particularly if those systems are coupled with the spatial data warehouse that reflects a single point of truth. 6 refs., 3 figs.

  14. Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses

    Science.gov (United States)

    Murphy, Christian E.

    2018-05-01

    Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well-known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point based uncertainty symbolization. The user can intuitively depict the centers of gravity, the major orientation of the point arrays as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
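
    A minimal sketch of how an error ellipse is obtained from a 2 × 2 positional covariance matrix: the eigenvalues give the squared semi-axes (scaled for a chosen confidence level) and the leading eigenvector gives the orientation. The covariance values and confidence level are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2

def error_ellipse(cov, confidence=0.95):
    """Semi-major/minor axes and orientation (radians, from the x-axis)
    of the error ellipse of a 2D Gaussian with covariance `cov`."""
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    k = chi2.ppf(confidence, df=2)              # scale factor for the confidence level
    semi_minor, semi_major = np.sqrt(k * vals)
    angle = np.arctan2(vecs[1, 1], vecs[0, 1])  # direction of the major axis
    return semi_major, semi_minor, angle

# Illustrative: positional uncertainty of a generalized point feature.
cov = np.array([[9.0, 3.0],
                [3.0, 4.0]])                    # variances/covariance in m^2 (assumed)
a, b, theta = error_ellipse(cov)
print(f"semi-major {a:.2f} m, semi-minor {b:.2f} m, orientation {np.degrees(theta):.1f} deg")
```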

  15. Landform classification using a sub-pixel spatial attraction model to increase spatial resolution of digital elevation model (DEM

    Directory of Open Access Journals (Sweden)

    Marzieh Mokarrama

    2018-04-01

    Full Text Available The purpose of the present study is to prepare a landform classification using a digital elevation model (DEM) with a high spatial resolution. To reach this aim, a sub-pixel spatial attraction model was used as a novel method for preparing a DEM with a high spatial resolution in the north of Darab, Fars province, Iran. Sub-pixel attraction models convert each pixel into sub-pixels based on the fraction values of the neighboring pixels, which can only be attracted by a central pixel. Based on this approach, a maximum of eight neighboring pixels can be selected for calculating the attraction value. In the mentioned model, other pixels are considered too far from the central pixel to exert any attraction. In the present study, the spatial resolution of a DEM was increased by using a sub-pixel attraction model. The algorithm was designed using a DEM with a spatial resolution of 30 m (the Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER) and one of 90 m (the Shuttle Radar Topography Mission, SRTM). In the attraction model, scale factors of S = 2, S = 3, and S = 4 with two neighborhood methods, touching (T = 1) and quadrant (T = 2), are applied to the DEMs using MATLAB software. The algorithm is evaluated against 487 sample points measured by surveyors. The spatial attraction model with a scale factor of S = 2 gives better results than scale factors greater than 2. Besides, the touching neighborhood method turned out to be more accurate than the quadrant method. In fact, dividing each pixel into more than two sub-pixels decreases the accuracy of the resulting DEM; in these cases the DEM itself increases the root-mean-square error (RMSE), which shows that attraction models cannot be used for S greater than 2. Thus, considering the results, the proposed model is highly capable of

  16. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...

  17. Comparative study of size dependent four-point probe sheet resistance measurement on laser annealed ultra-shallow junctions

    DEFF Research Database (Denmark)

    Petersen, Dirch Hjorth; Lin, Rong; Hansen, Torben Mikael

    2008-01-01

    In this comparative study, the authors demonstrate the relationship/correlation between macroscopic and microscopic four-point sheet resistance measurements on laser annealed ultra-shallow junctions (USJs). Microfabricated cantilever four-point probes with probe pitch ranging from 1.5 to 500 μm have been used to characterize the sheet resistance uniformity of millisecond laser annealed USJs. They verify, both experimentally and theoretically, that the probe pitch of a four-point probe can strongly affect the measured sheet resistance. Such an effect arises from the sensitivity (or "spot size") of an in-line four-point probe. Their study shows the benefit of the spatial resolution of the micro four-point probe technique to characterize stitching effects resulting from the laser annealing process.
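
    For reference, the standard in-line four-point-probe relation for a large, thin sheet is sketched below; the probe-pitch ("spot size") dependence reported above is a finite-geometry effect that this idealized infinite-sheet formula does not capture. The example numbers are illustrative.

```python
import numpy as np

def sheet_resistance_infinite(voltage_v, current_a):
    """Sheet resistance of an infinite thin sheet measured with an equidistant
    in-line four-point probe: R_s = (pi / ln 2) * V / I."""
    return np.pi / np.log(2) * voltage_v / current_a

# Illustrative measurement: 1 mA forced through the outer pins, 2.27 mV sensed on the inner pins.
print(f"R_s = {sheet_resistance_infinite(2.27e-3, 1e-3):.1f} ohm/sq")
```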

  18. Multi-scale Clustering of Points Synthetically Considering Lines and Polygons Distribution

    Directory of Open Access Journals (Sweden)

    YU Li

    2015-10-01

    Full Text Available Considering the complexity and discontinuity of spatial data distribution, a clustering algorithm for points is proposed. To accurately identify and express the spatial correlation among points, lines and polygons, a Voronoi diagram generated by all spatial features is introduced. According to the distribution characteristics of the point positions, an area threshold used to control the clustering granularity is calculated. Meanwhile, judging scale convergence by the constant area threshold, the algorithm classifies spatial features at multiple scales, with an O(n log n) running time. Results indicate that the spatial scale converges self-adaptively according to the distribution of points. Without custom parameters, the algorithm is capable of discovering clusters of arbitrary shape bounded by lines and polygons, and it is robust to outliers.

  19. Preparation of very small point sources for high resolution radiography

    International Nuclear Information System (INIS)

    Case, F.N.

    1976-01-01

    The need for very small point sources of high specific activity 192 Ir, 169 Yb, 170 Tm, and 60 Co in non-destructive testing has motivated the development of techniques for the fabrication of these sources. To prepare 192 Ir point sources for use in examination of tube sheet welds in LMFBR heat exchangers, 191 Ir enriched to greater than 90 percent was melted in a helium blanketed arc to form spheres as small as 0.38 mm in diameter. Methods were developed to form the roughly spherical shaped arc product into nearly symmetrical spheres that could be used for high resolution radiography. Similar methods were used for spherical shaped sources of 169 Yb and 170 Tm. The oxides were arc melted to form rough spheres followed by grinding to precise dimensions, neutron irradiation of the spheres at a flux of 2 to 3 × 10^15 nv, and use of enriched 168 Yb to provide the maximum specific activity. Cobalt-60 with a specific activity of greater than 1100 Ci/g was prepared by processing 59 Co that had been neutron irradiated to nearly complete burnup of the 59 Co target to produce 60 Co, 61 Ni, and 62 Ni. Ion exchange methods were used to separate the cobalt from the nickel. The cobalt was reduced to metal by plating either onto aluminum foil which was dissolved away from the cobalt plate, or by plating onto mercury to prepare amalgam that could be easily formed into a pellet of cobalt with exclusion of the mercury. Both methods are discussed

  20. Spatial determination of magnetic avalanche ignition points

    International Nuclear Information System (INIS)

    Jaafar, Reem; McHugh, S.; Suzuki, Yoko; Sarachik, M.P.; Myasoedov, Y.; Zeldov, E.; Shtrikman, H.; Bagai, R.; Christou, G.

    2008-01-01

    Using time-resolved measurements of local magnetization in the molecular magnet Mn 12 -ac, we report studies of magnetic avalanches (fast magnetization reversals) with non-planar propagating fronts, where the curved nature of the magnetic fronts is reflected in the time-of-arrival at micro-Hall sensors placed at the surface of the sample. Assuming that the avalanche interface is a spherical bubble that grows with a radius proportional to time, we are able to locate the approximate ignition point of each avalanche in a two-dimensional cross-section of the crystal. We find that although in most samples the avalanches ignite at the long ends, as found in earlier studies, there are crystals in which ignition points are distributed throughout an entire weak region near the center, with a few avalanches still originating at the ends

  1. Spatial determination of magnetic avalanche ignition points

    Energy Technology Data Exchange (ETDEWEB)

    Jaafar, Reem; McHugh, S.; Suzuki, Yoko [Physics Department, City College of the City University of New York, New York, NY 10031 (United States); Sarachik, M.P. [Physics Department, City College of the City University of New York, New York, NY 10031 (United States)], E-mail: sarachik@sci.ccny.cuny.edu; Myasoedov, Y.; Zeldov, E.; Shtrikman, H. [Department Condensed Matter Physics, Weizmann Institute of Science, Rehovot 76100 (Israel); Bagai, R.; Christou, G. [Department of Chemistry, University of Florida, Gainesville, FL 32611 (United States)

    2008-03-15

    Using time-resolved measurements of local magnetization in the molecular magnet Mn{sub 12}-ac, we report studies of magnetic avalanches (fast magnetization reversals) with non-planar propagating fronts, where the curved nature of the magnetic fronts is reflected in the time-of-arrival at micro-Hall sensors placed at the surface of the sample. Assuming that the avalanche interface is a spherical bubble that grows with a radius proportional to time, we are able to locate the approximate ignition point of each avalanche in a two-dimensional cross-section of the crystal. We find that although in most samples the avalanches ignite at the long ends, as found in earlier studies, there are crystals in which ignition points are distributed throughout an entire weak region near the center, with a few avalanches still originating at the ends.

  2. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying; Bowman, Kenneth P.; Genton, Marc G.; Tokay, Ali

    2014-01-01

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
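
    A small sketch of the Matérn covariance model referred to above; for smoothness ν = 0.5 it reduces to the exponential model the paper compares against. The variance, range and smoothness values used here are illustrative, not the fitted rain-gauge parameters.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_covariance(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Matern covariance C(h) = sigma2 * 2^(1-nu)/Gamma(nu) * (sqrt(2 nu) h/rho)^nu
    * K_nu(sqrt(2 nu) h/rho); it reduces to the exponential model for nu = 0.5."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    c = np.full_like(h, sigma2)                 # C(0) = sigma2
    pos = h > 0
    u = np.sqrt(2 * nu) * h[pos] / rho
    c[pos] = sigma2 * (2 ** (1 - nu) / gamma(nu)) * u ** nu * kv(nu, u)
    return c

# Illustrative comparison at a few spatial lags (parameters assumed, not fitted).
lags = np.array([0.0, 0.5, 1.0, 2.0])
print("exponential (nu=0.5):   ", matern_covariance(lags, nu=0.5).round(3))
print("smoother field (nu=1.5):", matern_covariance(lags, nu=1.5).round(3))
```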

  3. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying

    2014-07-15

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.

  4. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    Science.gov (United States)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection means testing and evaluating the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and reference data with high accuracy are insufficient, and therefore it is difficult to test and evaluate the horizontal accuracy of the orthophoto image. The uncertainty of the horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for the expansion of its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. This method uses testing points with different accuracy and reliability, sourced from high accuracy reference data and from field measurement. The new method solves the horizontal accuracy detection of the orthophoto image in difficult areas and provides the basis for delivering reliable orthophoto images to users.

  5. Octopuses use a human-like strategy to control precise point-to-point arm movements.

    Science.gov (United States)

    Sumbre, Germán; Fiorito, Graziano; Flash, Tamar; Hochner, Binyamin

    2006-04-18

    One of the key problems in motor control is mastering or reducing the number of degrees of freedom (DOFs) through coordination. This problem is especially prominent with hyper-redundant limbs such as the extremely flexible arm of the octopus. Several strategies for simplifying these control problems have been suggested for human point-to-point arm movements. Despite the evolutionary gap and morphological differences, humans and octopuses evolved similar strategies when fetching food to the mouth. To achieve this precise point-to-point task, octopus arms generate a quasi-articulated structure based on three dynamic joints. A rotational movement around these joints brings the object to the mouth. Here, we describe a peripheral neural mechanism: two waves of muscle activation propagate toward each other, and their collision point sets the medial-joint location. This is a remarkably simple mechanism for adjusting the length of the segments according to where the object is grasped. Furthermore, similar to certain human arm movements, kinematic invariants were observed at the joint level rather than at the end-effector level, suggesting intrinsic control coordination. The evolutionary convergence to similar geometrical and kinematic features suggests that a kinematically constrained articulated limb controlled at the level of joint space is the optimal solution for precise point-to-point movements.

  6. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    Science.gov (United States)

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing scientific basis for environmental planning about non-point source pollution prevention and control, and improving the pollution regulating efficiency, this paper established the Grid Landscape Contrast Index based on Location-weighted Landscape Contrast Index according to the "source-sink" theory. The spatial distribution of non-point source pollution caused by Jiulongjiang Estuary could be worked out by utilizing high resolution remote sensing images. The results showed that, the area of "source" of nitrogen and phosphorus in Jiulongjiang Estuary was 534.42 km(2) in 2008, and the "sink" was 172.06 km(2). The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, east of Jiaomei and river bank of Gangwei and Shima; and the "sink" was distributed over southwest of Xiamen island and west of Shima. Generally speaking, the intensity of "source" gets weaker along with the distance from the seas boundary increase, while "sink" gets stronger. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Instantaneous local wave vector estimation from multi-spacecraft measurements using few spatial points

    Directory of Open Access Journals (Sweden)

    T. D. Carozzi

    2004-07-01

    Full Text Available We introduce a technique to determine instantaneous local properties of waves based on discrete-time sampled, real-valued measurements from 4 or more spatial points. The technique is a generalisation to the spatial domain of the notion of instantaneous frequency used in signal processing. The quantities derived by our technique are closely related to those used in geometrical optics, namely the local wave vector and instantaneous phase velocity. Thus, this experimental technique complements ray-tracing. We provide example applications of the technique to electric field and potential data from the EFW instrument on Cluster. Cluster is the first space mission for which direct determination of the full 3-dimensional local wave vector is possible, as described here.
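
    The basic geometric idea can be sketched as follows (this is not the authors' full instantaneous technique): assuming a locally planar wave, phase differences between four non-coplanar measurement points determine the wave vector through a small linear system. The positions, the true wave vector and the neglect of phase wrapping are all simplifying assumptions for illustration.

```python
import numpy as np

def wave_vector_from_phases(positions, phases):
    """Estimate a local wave vector k from simultaneous phase measurements at
    >= 4 non-coplanar points, assuming a locally planar wave:
    phi_i - phi_0 = k . (r_i - r_0). Least-squares solution for k
    (phase wrapping is ignored for simplicity)."""
    r0, phi0 = positions[0], phases[0]
    A = positions[1:] - r0              # baseline vectors (m)
    b = phases[1:] - phi0               # phase differences (rad)
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k

# Illustrative tetrahedron of measurement points and a synthetic plane wave.
rng = np.random.default_rng(6)
positions = rng.uniform(-100e3, 100e3, size=(4, 3))      # metres (hypothetical)
k_true = np.array([2e-4, -1e-4, 5e-5])                   # rad/m (hypothetical)
phases = positions @ k_true + 0.3                        # arbitrary common phase offset
k_est = wave_vector_from_phases(positions, phases)
print(np.allclose(k_est, k_true))
```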

  8. Strategies for lidar characterization of particulates from point and area sources

    Science.gov (United States)

    Wojcik, Michael D.; Moore, Kori D.; Martin, Randal S.; Hatfield, Jerry

    2010-10-01

    Use of ground based remote sensing technologies such as scanning lidar systems (light detection and ranging) has gained traction in characterizing ambient aerosols due to some key advantages such as wide area of regard (10 km2), fast response time, high spatial resolution (University, in conjunction with the USDA-ARS, has developed a three-wavelength scanning lidar system called Aglite that has been successfully deployed to characterize particle motion, concentration, and size distribution at both point and diffuse area sources in agricultural and industrial settings. A suite of mass-based and size distribution point sensors is used to locally calibrate the lidar. Generating meaningful particle size distribution, mass concentration, and emission rate results based on lidar data is dependent on strategic onsite deployment of these point sensors with successful local meteorological measurements. Deployment strategies learned from field use of this entire measurement system over five years include the characterization of local meteorology and its predictability prior to deployment, the placement of point sensors to prevent contamination and overloading, the positioning of the lidar and beam plane to avoid hard target interferences, and the usefulness of photographic and written observational data.

  9. Triple-frequency GPS precise point positioning with rapid ambiguity resolution

    Science.gov (United States)

    Geng, Jianghui; Bock, Yehuda

    2013-05-01

    At present, reliable ambiguity resolution in real-time GPS precise point positioning (PPP) can only be achieved after an initial observation period of a few tens of minutes. In this study, we propose a method where the incoming triple-frequency GPS signals are exploited to enable rapid convergences to ambiguity-fixed solutions in real-time PPP. Specifically, extra-wide-lane ambiguity resolution can be first achieved almost instantaneously with the Melbourne-Wübbena combination observable on L2 and L5. Then the resultant unambiguous extra-wide-lane carrier-phase is combined with the wide-lane carrier-phase on L1 and L2 to form an ionosphere-free observable with a wavelength of about 3.4 m. Although the noise of this observable is around 100 times the raw carrier-phase noise, its wide-lane ambiguity can still be resolved very efficiently, and the resultant ambiguity-fixed observable can assist much better than pseudorange in speeding up succeeding narrow-lane ambiguity resolution. To validate this method, we use an advanced hardware simulator to generate triple-frequency signals and a high-grade receiver to collect 1-Hz data. When the carrier-phase precisions on L1, L2 and L5 are as poor as 1.5, 6.3 and 1.5 mm, respectively, wide-lane ambiguity resolution can still reach a correctness rate of over 99 % within 20 s. As a result, the correctness rate of narrow-lane ambiguity resolution achieves 99 % within 65 s, in contrast to only 64 % within 150 s in dual-frequency PPP. In addition, we also simulate a multipath-contaminated data set and introduce new ambiguities for all satellites every 120 s. We find that when multipath effects are strong, ambiguity-fixed solutions are achieved at 78 % of all epochs in triple-frequency PPP whilst almost no ambiguities are resolved in dual-frequency PPP. Therefore, we demonstrate that triple-frequency PPP has the potential to achieve ambiguity-fixed solutions within a few minutes, or even shorter if raw carrier-phase precisions are
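
    The extra-wide-lane step described above rests on the Melbourne-Wübbena combination, sketched here for the GPS L2/L5 pair: the combination cancels geometry, clocks, troposphere and first-order ionosphere, leaving the wide-lane ambiguity (wavelength roughly 5.86 m) plus code noise and biases. The synthetic observation values are illustrative, not the paper's simulator data.

```python
C = 299_792_458.0                 # speed of light (m/s)
F2, F5 = 1227.60e6, 1176.45e6     # GPS L2 and L5 carrier frequencies (Hz)

def melbourne_wubbena(phi2_m, phi5_m, p2_m, p5_m, f_a=F2, f_b=F5):
    """Melbourne-Wubbena combination (metres) from carrier phase and pseudorange
    in metres on two frequencies; geometry, clocks, troposphere and first-order
    ionosphere cancel, leaving the (extra-)wide-lane ambiguity plus noise/biases."""
    wide_phase = (f_a * phi2_m - f_b * phi5_m) / (f_a - f_b)
    narrow_code = (f_a * p2_m + f_b * p5_m) / (f_a + f_b)
    return wide_phase - narrow_code

def extra_wide_lane_ambiguity(phi2_m, phi5_m, p2_m, p5_m):
    """Float extra-wide-lane ambiguity in cycles (wavelength c/(f2 - f5) ~ 5.86 m);
    rounding this value is what makes near-instantaneous fixing possible."""
    lam_ewl = C / (F2 - F5)
    return melbourne_wubbena(phi2_m, phi5_m, p2_m, p5_m) / lam_ewl

# Illustrative synthetic observation: ambiguities of 103 and 100 cycles on L2/L5
# (extra-wide-lane ambiguity 3) plus decimetre-level code noise.
rho = 22_345_678.123              # geometric range plus clock terms (m), hypothetical
phi2 = rho + (C / F2) * 103       # carrier phases expressed in metres
phi5 = rho + (C / F5) * 100
print(round(extra_wide_lane_ambiguity(phi2, phi5, rho + 0.3, rho - 0.2)))  # -> 3
```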

  10. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems: low detection sensitivities result in a very low probability of coincident triple gamma-ray detection, which is necessary for the source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to images reconstructed from the same data using a standard single-cone iterative method. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from the standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  11. A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.

    Science.gov (United States)

    Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin

    2016-05-25

    Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access and results in inter-frequency code biases (IFCBs), which are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% through the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and that of the GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively.

  12. Characterization of MIPAS elevation pointing

    Directory of Open Access Journals (Sweden)

    M. Kiefer

    2007-01-01

    Full Text Available Sufficient knowledge of the pointing is essential for analyses of limb emission measurements. The scientific retrieval processor for MIPAS on ENVISAT operated at IMK allows the retrieval of pointing information in terms of tangent altitudes along with temperature. The retrieved tangent altitudes are independent of systematic offsets in the engineering Line-Of-Sight (LOS information delivered with the ESA Level 1b product. The difference of pointing retrieved from the reprocessed high resolution MIPAS spectra and the engineering pointing information was examined with respect to spatial/temporal behaviour. Among others the following characteristics of MIPAS pointing could be identified: Generally the engineering tangent altitudes are too high by 0–1.8 km with conspicuous variations in this range over time. Prior to December of 2003 there was a drift of about 50–100 m/h, which was due to a slow change in the satellite attitude. A correction of this attitude is done twice a day, which leads to discontinuities in the order of 1–1.5 km in the tangent altitudes. Occasionally discontinuities up to 2.5 km are found, as already reported from MIPAS and SCIAMACHY observations. After an update of the orbit position software in December 2003 values of drift and jumps are much reduced. There is a systematic difference in the mispointing between the poles which amounts to 1.5–2 km, i.e. there is a conspicuous orbit-periodic feature. The analysis of the correlation between the instrument's viewing angle azimuth and differential mispointing supports the hypotheses that a major part of this latter phenomenon can be attributed to an error in the roll angle of the satellite/instrument system of approximately 42 mdeg. One conclusion is that ESA level 2 data should be compared to other data exclusively on tangent pressure levels. Complementary to IMK data, ESA operational LOS calibration results were used to characterize MIPAS pointing. For this purpose

  13. From Contrapuntal Music to Polyphonic Novel: Aldous Huxley’s Point Counter Point

    Directory of Open Access Journals (Sweden)

    Mevlüde ZENGİN

    2015-06-01

    Full Text Available Taken at face value, Point Counter Point (1928), written by Aldous Huxley, seems to be a novel including many stories of various and sundry people and reflecting their points of view about the world in which they live and about the life they have been leading. However, it is this very quality of the novel that provides grounds for the study of the novel as a polyphonic one. The novel presents to its reader an aggregate of strikingly different characters and thus a broad spectrum of contemporary society. The characters in the novel are all characterized by and individualized with easily recognizable physical, intellectual, emotional, psychological and moral qualities. Each of them is well-contrived through their differences in social status, political views, wealth, etc. Thus, many different viewpoints, conflicting voices, contrasting insights and ideas are heard and seen synchronically in Point Counter Point, which makes it polyphonic. Polyphony is a musical motif referring to different notes and chords played at the same time to create a rhythm. It was first adopted by M. M. Bakhtin to analyze F. M. Dostoyevsky’s fiction. The aim of this study is firstly to elucidate, in Bakhtinian thought, polyphony and then dialogism and heteroglossia closely related to his concept of polyphony; and then to put the polyphonic qualities in Point Counter Point forth, studying the novel’s dialogism and heteroglot qualities

  14. Digital microwave communication engineering point-to-point microwave systems

    CERN Document Server

    Kizer, George

    2013-01-01

    The first book to cover all engineering aspects of microwave communication path design for the digital age Fixed point-to-point microwave systems provide moderate-capacity digital transmission between well-defined locations. Most popular in situations where fiber optics or satellite communication is impractical, it is commonly used for cellular or PCS site interconnectivity where digital connectivity is needed but not economically available from other sources, and in private networks where reliability is most important. Until now, no book has adequately treated all en

  15. Comparison of Dose When Prescribed to Point A and Point H for Brachytherapy in Cervical Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Ji Hyeong; Gim, Il Hwan; Hwang, Seon Boong; Kim, Woong; Im, Hyeong Seo; Gang, Jin Mook; Gim, Gi Hwan; Lee, Ah Ram [Dept. of Radiation Oncology, Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)]

    2012-09-15

    The purpose of this study is to compare plans prescribed to point A with those prescribed to point H, recommended by the ABS (American Brachytherapy Society), in high dose rate intracavitary brachytherapy for cervical carcinoma. This study selected 103 patients who received HDR (High Dose Rate) brachytherapy using tandem and ovoids from March 2010 to January 2012. Point A, the bladder point, and the rectal point conform to the Manchester System. Point H conforms to the ABS recommendation. Sigmoid colon and vagina points were also established arbitrarily. We examined the distance between point A and point H. The percent dose at point A was calculated when a 100% dose was prescribed to point H. Additionally, the percent doses at each reference point were calculated when the dose was prescribed to point H and to point A. The relative dose at point A was lower when point H was located inferior to point A. The relative doses at the bladder, rectal, sigmoid colon, and vagina points were higher when point H was located superior to point A, and lower when point H was located inferior to point A. This study found that as point H was located further superior to point A, the absorbed dose of the surrounding normal organs became higher, and as point H was located further inferior to point A, the absorbed dose of the surrounding normal organs became lower. These differences do not seem to affect the treatment. However, we suggest that this new point is worth considering for HDR treatment if the dose distribution and the absorbed dose at normal organs differ substantially between prescriptions to point A and point H.

  16. Multi-point laser spark generation for internal combustion engines using a spatial light modulator

    International Nuclear Information System (INIS)

    Lyon, Elliott; Kuang, Zheng; Dearden, Geoff; Cheng, Hua; Page, Vincent; Shenton, Tom

    2014-01-01

    This paper reports on a technique demonstrating for the first time successful multi-point laser-induced spark generation, which is variable in three dimensions and derived from a single laser beam. Previous work on laser ignition of internal combustion engines found that simultaneously igniting in more than one location resulted in more stable and faster combustion – a key potential advantage over conventional spark ignition. However, previous approaches could only generate secondary foci at fixed locations. The work reported here is an experimental technique for multi-point laser ignition, in which several sparks with arbitrary spatial location in three dimensions are created by variable diffraction of a pulsed single laser beam source and transmission through an optical plug. The diffractive multi-beam arrays and patterns are generated using a spatial light modulator on which computer generated holograms are displayed. A gratings and lenses algorithm is used to accurately modulate the phase of the input laser beam and create multi-beam output. The underpinning theory, experimental arrangement and results obtained are presented and discussed. (paper)
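    The phase mask described above can be sketched with the widely used gratings-and-lenses superposition: each target spark gets a blazed-grating term for its lateral position and a Fresnel-lens term for its axial position, and the single-beam hologram is the phase of their complex sum. The grid size, pixel pitch, wavelength and spot list below are illustrative assumptions, not the parameters used in the experiment.

    ```python
    import numpy as np

    def gratings_and_lenses(spots, n=512, pixel_pitch=15e-6, wavelength=1064e-9):
        """Phase-only SLM mask (radians) producing one focal spot per entry in `spots`.

        Each spot is (theta_x, theta_y, power): deflection angles in rad (blazed grating
        terms) and the optical power 1/f in 1/m of an added Fresnel lens (axial shift).
        """
        x = (np.arange(n) - n / 2) * pixel_pitch
        X, Y = np.meshgrid(x, x)
        k = 2 * np.pi / wavelength
        field = np.zeros((n, n), dtype=complex)
        for theta_x, theta_y, power in spots:
            grating = k * (X * np.sin(theta_x) + Y * np.sin(theta_y))   # lateral displacement
            lens = -0.5 * k * (X**2 + Y**2) * power                     # axial displacement
            field += np.exp(1j * (grating + lens))                      # superpose single-spot holograms
        return np.angle(field)                                          # keep the phase only

    # Three sparks: one on-axis, two deflected and refocused to different depths
    mask = gratings_and_lenses([(0.0, 0.0, 0.0), (2e-3, 1e-3, 5.0), (-2e-3, -1e-3, -5.0)])
    ```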

  17. APPLICABILITY OF VARIOUS INTERPOLATION APPROACHES FOR HIGH RESOLUTION SPATIAL MAPPING OF CLIMATE DATA IN KOREA

    Directory of Open Access Journals (Sweden)

    A. Jo

    2018-04-01

    Full Text Available The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing the results with the forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data for 2017 obtained from KMA were included for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31st March, 21st June, 23rd September, and 24th December. Among the observation data, 80 percent of the points (478) were used for interpolation and the remaining 120 points for validation. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20 % validation data, co-kriging was more suitable for spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for spatialization of precipitation.
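    As a concrete illustration of the simplest of the three interpolators compared above, the sketch below implements inverse distance weighting with a 478/120 train/validation split and an RMSE check. It assumes NumPy only; the synthetic station coordinates, temperature field and power parameter are placeholders, not the actual KMA observations.

    ```python
    import numpy as np

    def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
        """Inverse-distance-weighted prediction at xy_query from observations (xy_obs, z_obs)."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)  # pairwise distances
        w = 1.0 / np.maximum(d, eps) ** power                                  # inverse-distance weights
        w /= w.sum(axis=1, keepdims=True)                                      # normalise per query point
        return w @ z_obs

    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 100, size=(598, 2))             # hypothetical station coordinates (km)
    z = 15 + 0.05 * xy[:, 0] + rng.normal(0, 1, 598)    # hypothetical temperature field (deg C)
    train, test = xy[:478], xy[478:]                    # 478 training / 120 validation points
    pred = idw(train, z[:478], test)
    rmse = np.sqrt(np.mean((pred - z[478:]) ** 2))
    print(f"validation RMSE: {rmse:.2f} deg C")
    ```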

  18. Throughput analysis of point-to-multi-point hybrid FSO/RF network

    KAUST Repository

    Rakia, Tamer; Gebali, Fayez; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2017-01-01

    This paper presents and analyzes a point-to-multi-point (P2MP) network that uses a number of free-space optical (FSO) links for data transmission from the central node to the different remote nodes. A common backup radio-frequency (RF) link is used

  19. Spatial dispersion modeling of 90Sr by point cumulative semivariogram at Keban Dam Lake, Turkey

    International Nuclear Information System (INIS)

    Kuelahci, Fatih; Sen, Zekai

    2007-01-01

    Spatial analysis of the artificial radionuclide 90Sr, present as a consequence of global fallout and the Chernobyl nuclear accident, has been carried out using the point cumulative semivariogram (PCSV) technique, based on measurements at 40 surface water stations in Keban Dam Lake during March, April, and May 2006. This technique is a convenient tool for obtaining the regional variability features around each sampling point, which also yields the structural effects in the vicinity of the same point. It presents the regional effect of all the other sites within the study area on the site concerned. In order to examine the change of 90Sr, five models were constructed. Additionally, it provides a measure of the cumulative similarity of the regional variable, 90Sr, around any measurement site, and hence it is possible to draw regional similarity maps at any desired distance around each station. In this paper, such similarity maps are also drawn for a set of distances. 90Sr activities in the lake at distances of approximately 4.5 km from the stations show the maximum similarity

  20. Post-Processing in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars Vabbersgaard

    The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has been shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method... -point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented...

  1. Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes

    Science.gov (United States)

    Ozcan, O.

    2016-12-01

    Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. UAVs, which bridge the gap between satellite-scale and field-scale applications, are now used in various application areas to acquire hyperspatial, high-temporal-resolution imagery, since they can cover a site in a much shorter time than conventional photogrammetry methods. UAVs have been used in various fields, such as the creation of 3-D earth models, production of high-resolution orthophotos, network planning, field monitoring, and agriculture. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of capital importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, recently appeared in the environmental sciences as an impressive tool allowing the creation of 3-D models from unstructured imagery. This study aimed to reveal the spatial accuracy of images acquired from the integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes allow hyperspatial imagery with resolutions of 1-5 cm depending upon the sensor being used. Preliminary results revealed that the vertical comparison of UAV-derived point clouds with GPS measurements gave an average distance at the cm level. Larger values were found in areas where abrupt changes in the surface are present.

  2. Spatial resolution in optical transition radiation (OTR) beam diagnostics

    International Nuclear Information System (INIS)

    Castellano, M.; Verzilov, V. A.

    1998-06-01

    An evaluation of the OTR single particle image dimension is obtained using diffraction theory based on a realistic description of the radiation source. This approach allows the analysis of the effect of the finite size of the emitting screen and of the imaging system. The role of practical experimental conditions in treating the intensity tail problem is estimated. It is shown that by exploiting the polarization properties of OTR, a considerable enhancement in the spatial resolution can be achieved, which becomes very similar to that of a standard point source

  3. Spatial point pattern analysis of human settlements and geographical associations in eastern coastal China - a case study.

    Science.gov (United States)

    Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping

    2014-03-10

    Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley's K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to the characteristics of human settlements, such as area, density and shape, should be taken into consideration in the future, and land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be a key to human settlement studies and planning.
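    A back-of-the-envelope version of the Ripley's K analysis used above can be sketched as follows: an uncorrected K estimate is compared against a Monte Carlo envelope generated under complete spatial randomness (CSR) in a rectangular window. Edge correction, the real settlement coordinates and the actual study window are omitted, so this is only an assumed, simplified sketch of the procedure.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    def ripley_k(points, radii, area):
        """Uncorrected Ripley's K estimate for a point pattern in a window of given area."""
        n = len(points)
        d = pdist(points)                                   # all pairwise distances
        lam = n / area                                      # estimated intensity
        return np.array([2.0 * np.sum(d <= r) / (n * lam) for r in radii])

    def csr_envelope(n, radii, window, nsim=99, seed=0):
        """Pointwise min/max envelope of K under complete spatial randomness (CSR)."""
        rng = np.random.default_rng(seed)
        (xmin, xmax), (ymin, ymax) = window
        area = (xmax - xmin) * (ymax - ymin)
        sims = np.array([
            ripley_k(np.column_stack([rng.uniform(xmin, xmax, n),
                                      rng.uniform(ymin, ymax, n)]), radii, area)
            for _ in range(nsim)])
        return sims.min(axis=0), sims.max(axis=0)

    # An observed K above the upper envelope suggests clustering; below the lower one, regularity.
    ```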

  4. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    Science.gov (United States)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification and water protection functions. In recent years, there has been an increase in the factors that threaten the health of forests, such as global warming due to climate change and environmental pollution, as well as an increase in interest in forests, and efforts are being made in various countries for forest management. However, the existing forest ecosystem survey is a monitoring method based on sampling points, and it is difficult to use it for forest management because Korea surveys only a small part of the forest area, which occupies 63.7 % of the country (Ministry of Land, Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and IDW (Inverse Distance Weighting), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were used. As a result, kriging was the most accurate method.

  5. DEVELOPMENT OF SPATIAL SCALING TECHNIQUE OF FOREST HEALTH SAMPLE POINT INFORMATION

    Directory of Open Access Journals (Sweden)

    J. H. Lee

    2018-04-01

    Full Text Available Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification and water protection functions. In recent years, there has been an increase in the factors that threaten the health of forests, such as global warming due to climate change and environmental pollution, as well as an increase in interest in forests, and efforts are being made in various countries for forest management. However, the existing forest ecosystem survey is a monitoring method based on sampling points, and it is difficult to use it for forest management because Korea surveys only a small part of the forest area, which occupies 63.7 % of the country (Ministry of Land, Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and IDW (Inverse Distance Weighting), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were used. As a result, kriging was the most accurate method.

  6. Synchronized observations of bright points from the solar photosphere to the corona

    Science.gov (United States)

    Tavabi, Ehsan

    2018-05-01

    One of the most important features in the solar atmosphere is the magnetic network and its relationship to the transition region (TR) and coronal brightness. It is important to understand how energy is transported into the corona and how it travels along the magnetic field lines between the deep photosphere and chromosphere through the TR and corona. An excellent proxy for this transport is provided by the Interface Region Imaging Spectrograph (IRIS) raster scans and imaging observations in near-ultraviolet (NUV) and far-ultraviolet (FUV) emission channels, which have high temporal, spectral and spatial resolution. In this study, we focus on the quiet Sun as observed with IRIS. The data with a high signal-to-noise ratio in the Si IV, C II and Mg II k lines and with strong emission intensities show a high correlation with TR bright network points. The results of the IRIS intensity maps and dopplergrams are compared with those of the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) instruments onboard the Solar Dynamics Observatory (SDO). The average network intensity profiles show a strong correlation with AIA coronal channels. Furthermore, we applied simultaneous observations of the magnetic network from HMI and found a strong relationship between the network bright points at all levels of the solar atmosphere. These features in the network elements exhibited regions of high Doppler velocity and strong magnetic signatures. The abundant coronal bright-point emission, accompanied by magnetic origins in the photosphere, suggests that magnetic field concentrations in the network rosettes could help to couple the inner and outer solar atmosphere.

  7. Assessing the Accuracy of Georeferenced Point Clouds Produced via Multi-View Stereopsis from Unmanned Aerial Vehicle (UAV) Imagery

    Directory of Open Access Journals (Sweden)

    Arko Lucieer

    2012-05-01

    Full Text Available Sensor miniaturisation, improved battery technology and the availability of low-cost yet advanced Unmanned Aerial Vehicles (UAVs) have provided new opportunities for environmental remote sensing. The UAV provides a platform for close-range aerial photography. Detailed imagery captured from a micro-UAV can produce dense point clouds using multi-view stereopsis (MVS) techniques combining photogrammetry and computer vision. This study applies MVS techniques to imagery acquired from a multi-rotor micro-UAV of a natural coastal site in southeastern Tasmania, Australia. A very dense point cloud (<1–3 cm point spacing) is produced in an arbitrary coordinate system using full resolution imagery, whereas other studies usually downsample the original imagery. The point cloud is sparse in areas of complex vegetation and where surfaces have a homogeneous texture. Ground control points collected with a Differential Global Positioning System (DGPS) are identified and used for georeferencing via a Helmert transformation. This study compared georeferenced point clouds to a Total Station survey in order to assess and quantify their geometric accuracy. The results indicate that a georeferenced point cloud accurate to 25–40 mm can be obtained from imagery acquired from 50 m. UAV-based image capture provides the spatial and temporal resolution required to map and monitor natural landscapes. This paper assesses the accuracy of the generated point clouds based on field survey points. Based on our key findings we conclude that sub-decimetre terrain change (in this case coastal erosion) can be monitored.
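    The georeferencing step mentioned above, a Helmert (seven-parameter similarity) transformation estimated from matched ground control points, can be sketched with the closed-form Umeyama/Procrustes solution. The variable names and coordinates are illustrative assumptions, not the Tasmanian survey data or the software actually used in the study.

    ```python
    import numpy as np

    def helmert_3d(src, dst):
        """Scale s, rotation R, translation t such that dst ~= s * R @ src + t (Umeyama solution)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A / len(src))      # cross-covariance of the two point sets
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:                     # guard against a reflection
            D[2, 2] = -1.0
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / np.mean(np.sum(A ** 2, axis=1))
        t = mu_d - s * R @ mu_s
        return s, R, t

    # gcp_local: GCP coordinates in the arbitrary SfM frame; gcp_world: the same GCPs from DGPS.
    # s, R, t = helmert_3d(gcp_local, gcp_world)
    # cloud_world = s * (R @ cloud_local.T).T + t         # georeference the whole point cloud
    ```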

  8. Evaluating spatial interaction of soil property with non-point source pollution at watershed scale: The phosphorus indicator in Northeast China

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, Wei, E-mail: wei@itc.nl; Huang, Haobo; Hao, Fanghua; Shan, Yushu; Guo, Bobo

    2012-08-15

    To better understand the spatial dynamics of non-point source (NPS) phosphorus loading in relation to soil properties at the watershed scale, integrating modeling and soil chemistry is crucial to ensure that the indicator functions properly and expresses the spatial interaction at two depths. Developments in distributed modeling have greatly enriched geospatial data analysis and make it possible to assess the NPS pollution loading response to soil properties over larger areas. The 1.5 km-grid soil sampling at two depths was analyzed for eight parameters, which provided detailed spatial and vertical soil data under four main types of land use. The impacts of land use conversion and agricultural practice on soil properties were first identified. Except for slightly larger total potassium (TK) and chromium (Cr), the other six parameters had larger contents in the 20-40 cm layer than in the top 20 cm layer. The Soil and Water Assessment Tool was employed to simulate the loading of NPS phosphorus. Overlaying with the land use distribution, it was found that the NPS phosphorus mainly comes from the subbasins dominated by upland and paddy rice. The linear correlations of the eight soil parameters at two depths with NPS phosphorus loading in the subbasins of upland and paddy rice were compared, respectively. The correlations of available phosphorus (AP), total phosphorus (TP), total nitrogen (TN) and TK varied between the two depths and can also be used to assess the loading. Soil with lower soil organic carbon (SOC) presented a significantly higher risk for NPS phosphorus loading, especially in agricultural areas. Principal Component Analysis showed that TP and zinc (Zn) in the topsoil and copper (Cu) and Cr in the subsurface can serve as indicators. The analysis suggested that the application of soil property indicators is useful for assessing NPS phosphorus loss, which is promising for water safety in agricultural areas. -- Highlights: ► Spatial dynamics of NPS phosphorus

  9. Improved point-kinetics model for the BWR control rod drop accident

    International Nuclear Information System (INIS)

    Neogy, P.; Wakabayashi, T.; Carew, J.F.

    1985-01-01

    A simple prescription to account for spatial feedback weighting effects in RDA (rod drop accident) point-kinetics analyses has been derived and tested. The point-kinetics feedback model is linear in the core peaking factor, F/sub Q/, and in the core average void fraction and fuel temperature. Comparison with detailed spatial kinetics analyses indicates that the improved point-kinetics model provides an accurate description of the BWR RDA

  10. Managing distance and covariate information with point-based clustering

    Directory of Open Access Journals (Sweden)

    Peter A. Whigham

    2016-09-01

    Full Text Available Abstract Background Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach based on Ripley’s K and applied to the problem of clustering of deliberate self-harm (DSH) is presented. Methods Point-based Monte-Carlo simulation of Ripley’s K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data were derived from an audit of 2 years’ emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Results Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95 % CI, suggesting that social contagion may be present for this urban cohort. Conclusions Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley’s K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for

  11. Flow speed measurement using two-point collective light scattering

    International Nuclear Information System (INIS)

    Heinemeier, N.P.

    1998-09-01

    Measurements of turbulence in plasmas and fluids using the technique of collective light scattering have always been plagued by very poor spatial resolution. In 1994, a novel two-point collective light scattering system for the measurement of transport in a fusion plasma was proposed. This diagnostic method was designed to greatly improve the spatial resolution without sacrificing accuracy in the velocity measurement. The system was installed at the W7-AS stellarator in Garching, Germany, in 1996, and has been operating since. This master thesis is an investigation of the possible application of this new method to the measurement of flow speeds in normal fluids, in particular air, although the results presented in this work have significance for the plasma measurements as well. The main goal of the project was the experimental verification of previous theoretical predictions. However, the theoretical considerations presented in the thesis show that the method can only be expected to work for flows that are almost laminar and shearless, which makes it of very small practical interest. Furthermore, this result also implies that the diagnostic at W7-AS cannot be expected to give the results originally hoped for. (au)

  12. Improved Spatial Resolution in Thick, Fully-Depleted CCDs with Enhanced Red Sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Fairfield, Jessamyn A.; Groom, Donald E.; Bailey, Stephen J.; Bebek, Christopher J.; Holland, Stephen E.; Karcher, Armin; Kolbe,William F.; Lorenzon, Wolfgang; Roe, Natalie A.

    2006-03-09

    The point spread function (PSF) is an important measure of spatial resolution in CCDs for point-like objects, since it affects image quality and spectroscopic resolution. We present new data and theoretical developments for lateral charge diffusion in thick, fully-depleted charge-coupled devices (CCDs) developed at Lawrence Berkeley National Laboratory (LBNL). Because they can be over-depleted, the LBNL devices have no field-free region and diffusion is controlled through the application of an external bias voltage. We give results for a 3512 x 3512 format, 10.5 µm pixel back-illuminated p-channel CCD developed for the SuperNova/Acceleration Probe (SNAP), a proposed satellite-based experiment designed to study dark energy. The PSF was measured at substrate bias voltages between 3 V and 115 V. At a bias voltage of 115 V, we measure an rms diffusion of 3.7 ± 0.2 µm. Lateral charge diffusion in LBNL CCDs will meet the SNAP requirements.

  13. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    Science.gov (United States)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform the analyses, scenes of study are generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons
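    The elementary building block and the aggregation step described above can be sketched numerically: an ensemble of identical Rössler oscillators is integrated and their outputs are spatially averaged to form the "scene". The parameter values are the classic chaotic choice (a = 0.2, b = 0.2, c = 5.7) and the ensemble size is arbitrary; both are assumptions for illustration, not the exact settings of the study.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rossler_ensemble(t, state, a=0.2, b=0.2, c=5.7):
        """N independent Rossler oscillators stacked as [x_1..x_N, y_1..y_N, z_1..z_N]."""
        x, y, z = state.reshape(3, -1)
        return np.concatenate([-y - z, x + a * y, b + z * (x - c)])

    n = 16
    rng = np.random.default_rng(1)
    y0 = rng.normal(0.0, 1.0, 3 * n)                     # distinct initial conditions per oscillator
    sol = solve_ivp(rossler_ensemble, (0, 500), y0, max_step=0.05)
    aggregate_x = sol.y[:n].mean(axis=0)                 # "aggregated" scene: spatial average of x
    ```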

  14. UAV-based detection and spatial analyses of periglacial landforms on Demay Point (King George Island, South Shetland Islands, Antarctica)

    Science.gov (United States)

    Dąbski, Maciej; Zmarz, Anna; Pabjanek, Piotr; Korczak-Abshire, Małgorzata; Karsznia, Izabela; Chwedorzewska, Katarzyna J.

    2017-08-01

    High-resolution aerial images allow detailed analyses of periglacial landforms, which is of particular importance in light of climate change and resulting changes in active layer thickness. The aim of this study is to show the possibilities of using UAV-based photography to perform spatial analysis of periglacial landforms on the Demay Point peninsula, King George Island, and hence to supplement previous geomorphological studies of the South Shetland Islands. Photogrammetric flights were performed using a PW-ZOOM fixed-wing unmanned aerial vehicle. Digital elevation models (DEMs) and maps of slope and contour lines were prepared in ESRI ArcGIS 10.3 with the Spatial Analyst extension, and three-dimensional visualizations in ESRI ArcScene 10.3 software. Careful interpretation of the orthophoto and DEM allowed us to vectorize polygons of landforms, such as (i) solifluction landforms (solifluction sheets, tongues, and lobes); (ii) scarps, taluses, and a protalus rampart; (iii) patterned ground (hummocks, sorted circles, stripes, nets and labyrinths, and nonsorted nets and stripes); (iv) coastal landforms (cliffs and beaches); (v) landslides and mud flows; and (vi) stone fields and bedrock outcrops. We conclude that geomorphological studies based on commonly accessible aerial and satellite images can underestimate the spatial extent of periglacial landforms and result in incomplete inventories. The PW-ZOOM UAV is well suited to gather detailed geomorphological data and can be used in spatial analysis of periglacial landforms in the Western Antarctic Peninsula region.

  15. Beginning SharePoint 2010 Administration Windows SharePoint Foundation 2010 and Microsoft SharePoint Server 2010

    CERN Document Server

    Husman, Göran

    2010-01-01

    Complete coverage on the latest advances in SharePoint 2010 administration. SharePoint 2010 comprises an abundance of new features, and this book shows you how to take advantage of all of SharePoint 2010's many improvements. Written by a four-time SharePoint MVP, Beginning SharePoint 2010 Administration begins with a comparison of SharePoint 2010 to the previous version and then examines the differences between WSS 4.0 and MSS 2010. Packed with step-by-step instructions, tips and tricks, and real-world examples, this book dives into the basics of how to install, manage, and administrate

  16. Network based approaches reveal clustering in protein point patterns

    Science.gov (United States)

    Parker, Joshua; Barr, Valarie; Aldridge, Joshua; Samelson, Lawrence E.; Losert, Wolfgang

    2014-03-01

    Recent advances in super-resolution imaging have allowed for the sub-diffraction measurement of the spatial location of proteins on the surfaces of T-cells. The challenge is to connect these complex point patterns to the internal processes and interactions, both protein-protein and protein-membrane. We begin analyzing these patterns by forming a geometric network amongst the proteins and looking at network measures, such as the degree distribution. This allows us to compare experimentally observed patterns to models. Specifically, we find that the experimental patterns differ from heterogeneous Poisson processes, highlighting an internal clustering structure. Further work will be to compare our results to simulated protein-protein interactions to determine clustering mechanisms.
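    One simple realisation of the geometric-network idea above is to connect every pair of localisations closer than a chosen radius and compare the resulting degree distribution with that of a random pattern of equal intensity. The window size, radius and synthetic point sets below are assumptions for illustration, not the experimental localisation data.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def degree_distribution(points, radius):
        """Histogram of node degrees in the graph linking all pairs closer than `radius`."""
        tree = cKDTree(points)
        deg = np.zeros(len(points), dtype=int)
        for i, j in tree.query_pairs(radius):
            deg[i] += 1
            deg[j] += 1
        return np.bincount(deg)

    rng = np.random.default_rng(0)
    observed = rng.uniform(0, 1000, size=(500, 2))    # replace with real localisations (nm)
    reference = rng.uniform(0, 1000, size=(500, 2))   # random pattern of equal intensity
    print(degree_distribution(observed, 50.0))
    print(degree_distribution(reference, 50.0))
    ```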

  17. Basic examination of in-plane spatial resolution in multi-slice CT

    International Nuclear Information System (INIS)

    Hara, Takanori; Kato, Hideki; Akiyama, Mitsutoshi; Murata, Katsutoshi

    2002-01-01

    In computed tomography (single-slice spiral CT, conventional CT), the in-plane (x-y plane) spatial resolution is generally regarded as depending on the in-plane (x-y plane) detector density. However, we considered that the in-plane (x-y plane) spatial resolution of multi-slice CT (MSCT) is also influenced by errors in the detector's sensitivity along the Z-axis and by how often direct row data and complementary row data are used when images at different spiral pitches (SP) are reconstructed. Our goal in this experiment was to analyze the relationship of the in-plane (x-y plane) spatial resolution of an asymmetric-type detector in MSCT to SP, tube current, and rotation time. Employing a tungsten wire phantom 0.2 mm in diameter, we derived modulation transfer functions (MTFs) from the point spread functions (PSFs) of the CT images. Next, using the mean-square-root bandwidth theory, we analyzed the MTFs of the wire phantoms. The analysis of in-plane (x-y plane) spatial resolution revealed that varying the tube current had no effect on the value of the mean-square-root bandwidth. However, rotation time and high spiral pitch did have an effect on the mean-square-root bandwidth. Considering the results mentioned above, spiral pitch (z-axis reconstruction algorithm) had a slight effect on the in-plane (x-y plane) spatial resolution of asymmetric-type detectors in MSCT. Accordingly, we proposed a new index, VDDz (view/mm), for MSCT that considers the view data density along the Z-axis according to spiral pitch (mm/rotation), rotation time (view/rotation), and slice collimation. (author)
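    The MTF-from-PSF step described above can be sketched as a Fourier transform of the measured wire profile. The synthetic Gaussian profile and the pixel spacing below are assumed stand-ins for the 0.2 mm tungsten wire measurement, and the mean-square-root bandwidth analysis itself is not reproduced.

    ```python
    import numpy as np

    def mtf_from_psf(psf, pixel_mm):
        """Spatial frequencies (cycles/mm) and normalised MTF of a 1-D point/line spread profile."""
        otf = np.abs(np.fft.rfft(psf / psf.sum()))     # magnitude of the transfer function
        freqs = np.fft.rfftfreq(len(psf), d=pixel_mm)
        return freqs, otf / otf[0]

    pixel_mm = 0.1                                     # assumed reconstruction pixel size
    x = (np.arange(256) - 128) * pixel_mm
    psf = np.exp(-0.5 * (x / 0.4) ** 2)                # hypothetical wire profile (sigma = 0.4 mm)
    freqs, mtf = mtf_from_psf(psf, pixel_mm)
    mtf50 = freqs[np.argmax(mtf < 0.5)]                # frequency at which the MTF drops below 50 %
    ```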

  18. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    Science.gov (United States)

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution due to several physical factors originating both at the emission (e.g. positron range, photon non-collinearity) and at detection levels (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, a possible way consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
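    The PSF characterisation described above, fitting asymmetric two-dimensional Gaussians to reconstructed point-source images, can be sketched as below. The synthetic image stands in for a reconstructed 22Na acquisition, and the axis-aligned Gaussian model, grid size and noise level are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, amp, x0, y0, sx, sy, offset):
        """Axis-aligned asymmetric 2-D Gaussian, flattened for curve_fit."""
        x, y = coords
        g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2))) + offset
        return g.ravel()

    ny = nx = 65
    y, x = np.mgrid[0:ny, 0:nx]                                        # pixel grid
    truth = gauss2d((x, y), 100, 33.2, 31.7, 2.1, 2.8, 5).reshape(ny, nx)
    img = truth + np.random.default_rng(0).normal(0, 1, truth.shape)   # stand-in point-source image

    p0 = (img.max(), nx / 2, ny / 2, 2.0, 2.0, float(img.min()))
    popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
    fwhm_x, fwhm_y = 2.355 * popt[3], 2.355 * popt[4]                  # FWHM = 2*sqrt(2*ln 2)*sigma
    ```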

  19. Seed Dispersal, Microsites or Competition—What Drives Gap Regeneration in an Old-Growth Forest? An Application of Spatial Point Process Modelling

    Directory of Open Access Journals (Sweden)

    Georg Gratzer

    2018-04-01

    Full Text Available The spatial structure of trees is a template for forest dynamics and the outcome of a variety of processes in ecosystems. Identifying the contribution and magnitude of the different drivers is an age-old task in plant ecology. Recently, the modelling of a spatial point process was used to identify factors driving the spatial distribution of trees at stand scales. Processes driving the coexistence of trees, however, frequently unfold within gaps, and questions on the role of resource heterogeneity within gaps have become central issues in community ecology. We tested the applicability of a spatial point process modelling approach for quantifying the effects of seed dispersal, within-gap light environment, microsite heterogeneity, and competition on the generation of the within-gap spatial structure of small tree seedlings in a temperate, old-growth, mixed-species forest. By fitting a non-homogeneous Neyman–Scott point process model, we could disentangle the role of seed dispersal from niche partitioning for within-gap tree establishment and did not detect seed densities as a factor explaining the clustering of small trees. We found only a very weak indication for partitioning of within-gap light among the three species and detected a clear niche segregation of Picea abies (L.) Karst. on nurse logs. The other two dominating species, Abies alba Mill. and Fagus sylvatica L., did not show signs of within-gap segregation.
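    For readers unfamiliar with the Neyman–Scott family used above, the sketch below simulates one of its simplest members, a Thomas process, in a unit window: Poisson-distributed parents each produce a Poisson number of Gaussian-scattered offspring. The intensity, cluster size and spread are arbitrary illustrative values, not fitted parameters from the study.

    ```python
    import numpy as np

    def simulate_thomas(kappa=30, mu=8, sigma=0.02, rng=None):
        """Thomas process in the unit square: kappa parents, mu mean offspring, sigma cluster spread."""
        rng = rng if rng is not None else np.random.default_rng()
        parents = rng.uniform(0, 1, size=(rng.poisson(kappa), 2))
        clusters = [p + rng.normal(0, sigma, size=(rng.poisson(mu), 2)) for p in parents]
        pts = np.vstack(clusters) if clusters else np.empty((0, 2))
        inside = (pts >= 0).all(axis=1) & (pts <= 1).all(axis=1)   # clip to the observation window
        return pts[inside]

    seedlings = simulate_thomas()
    ```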

  20. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    Science.gov (United States)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.

  1. Spatial Point Pattern Analysis of Human Settlements and Geographical Associations in Eastern Coastal China — A Case Study

    Science.gov (United States)

    Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping

    2014-01-01

    Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley’s K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to the characteristics of human settlements, such as area, density and shape, should be taken into consideration in the future, and land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be a key to human settlement studies and planning. PMID:24619117

  2. Spatial distribution patterns of plague hosts : point pattern analysis of the burrows of great gerbils in Kazakhstan

    NARCIS (Netherlands)

    Wilschut, Liesbeth I; Laudisoit, Anne; Hughes, Nelika K; Addink, Elisabeth A; de Jong, Steven M; Heesterbeek, Hans A P; Reijniers, Jonas; Eagle, Sally; Dubyanskiy, Vladimir M; Begon, Mike

    AIM: The spatial structure of a population can strongly influence the dynamics of infectious diseases, yet rarely is the underlying structure quantified. A case in point is plague, an infectious zoonotic disease caused by the bacterium Yersinia pestis. Plague dynamics within the Central Asian desert

  3. A Studentized Permutation Test for the Comparison of Spatial Point Patterns

    DEFF Research Database (Denmark)

    Hahn, Ute

    of empirical K-functions are compared by a permutation test using a studentized test statistic. The proposed test performs convincingly in terms of empirical level and power in a simulation study, even for point patterns where the K-function estimates on neighboring subsamples are not strictly exchangeable....... It also shows improved behavior compared to a test suggested by Diggle et al. (1991, 2000) for the comparison of groups of independently replicated point patterns. In an application to two point patterns from pathology that represent capillary positions in sections of healthy and tumorous tissue, our...

  4. On Motion Planning for Point-to-Point Maneuvers for a Class of Sailing Vehicles

    DEFF Research Database (Denmark)

    Xiao, Lin; Jouffroy, Jerome

    2011-01-01

    Despite their interesting dynamic and controllability properties, sailing vehicles have not been much studied in the control community. In this paper, we investigate motion planning of such vehicles. Starting from a simple dynamic model of sailing vessels in one dimension, this paper first considers their associated controllability issues, with the so-called no-sailing zone as a starting point, and it links them with a motion planning strategy using two-point boundary value problems as the main mathematical tool. This perspective is then expanded to do point-to-point maneuvers of sailing...

  5. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
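    The composition step described above, building a radionuclide kernel by weighting monoenergetic kernels with the beta spectrum, can be sketched as a simple quadrature. Both the monoenergetic kernel shape and the spectrum below are crude placeholders (not the analytic representations derived in the paper), so only the structure of the calculation is meaningful.

    ```python
    import numpy as np

    def beta_dose_point_kernel(r, energies, spectrum, mono_kernel):
        """Kernel of a beta emitter: sum over E of N(E) * k(r, E) * dE, with N normalised."""
        dE = energies[1] - energies[0]                       # uniform energy grid assumed
        weights = spectrum / (spectrum.sum() * dE)           # normalise the spectrum to unit area
        k = np.array([mono_kernel(r, E) for E in energies])  # monoenergetic kernels, shape (nE, nr)
        return (weights[:, None] * k).sum(axis=0) * dE

    # Crude placeholder monoenergetic kernel: exponential attenuation with an
    # energy-dependent range, divided by the spherical-shell geometry factor.
    mono = lambda r, E: np.exp(-r / (0.05 * E)) / (4 * np.pi * r ** 2)
    E = np.linspace(0.05, 1.70, 100)                         # MeV, up to roughly the 32P endpoint
    N = E * (1.70 - E) ** 2                                  # toy allowed-shape spectrum
    r = np.linspace(0.01, 1.0, 200)                          # cm
    kernel = beta_dose_point_kernel(r, E, N, mono)
    ```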

  6. Evaluation of 2-point, 3-point, and 6-point Dixon magnetic resonance imaging with flexible echo timing for muscle fat quantification.

    Science.gov (United States)

    Grimm, Alexandra; Meyer, Heiko; Nickel, Marcel D; Nittka, Mathias; Raithel, Esther; Chaudry, Oliver; Friedberger, Andreas; Uder, Michael; Kemmler, Wolfgang; Quick, Harald H; Engelke, Klaus

    2018-06-01

    The purpose of this study is to evaluate and compare 2-point (2pt), 3-point (3pt), and 6-point (6pt) Dixon magnetic resonance imaging (MRI) sequences with flexible echo times (TE) to measure the proton density fat fraction (PDFF) within muscles. Two subject groups were recruited (G1: 23 young and healthy men, 31 ± 6 years; G2: 50 elderly, sarcopenic men, 77 ± 5 years). A 3-T MRI system was used to perform Dixon imaging on the left thigh. PDFF was measured with six Dixon prototype sequences: 2pt, 3pt, and 6pt sequences, each acquired once with optimal TEs (in- and opposed-phase echo times), lower resolution, and higher bandwidth (optTE sequences) and once with higher image resolution (highRes sequences) and the shortest possible TEs. Intra-fascia PDFF content was determined. To evaluate the comparability among the sequences, Bland-Altman analysis was performed. The highRes 6pt Dixon sequence served as the reference, as a high correlation of this sequence to magnetic resonance spectroscopy has been shown before. The PDFF difference between the highRes 6pt Dixon sequence and the optTE 6pt, both 3pt, and the optTE 2pt sequences was low (between 2.2% and 4.4%), but it was large for the highRes 2pt Dixon sequence (33%). For the optTE sequences, the difference decreased with the number of echoes used. In conclusion, for Dixon sequences with more than two echoes, the fat fraction measurement was reliable with arbitrary echo times, while for 2pt Dixon sequences it was reliable only with dedicated in- and opposed-phase echo timing. Copyright © 2018 Elsevier B.V. All rights reserved.
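    The 2pt Dixon case discussed above reduces, for ideal in-phase (IP) and opposed-phase (OP) magnitude images with fat as the minor component, to the simple water/fat separation sketched below. This yields a signal fat fraction rather than a fully corrected PDFF, and the voxel values are placeholders, not data from the study.

    ```python
    import numpy as np

    def two_point_dixon_fat_fraction(ip, op):
        """Signal fat fraction (%) from in-phase and opposed-phase magnitude images (fat minor)."""
        water = 0.5 * (ip + op)
        fat = 0.5 * (ip - op)
        return 100.0 * fat / np.maximum(water + fat, 1e-9)

    ip = np.array([[1.00, 0.95], [0.90, 1.10]])   # hypothetical in-phase voxel signals
    op = np.array([[0.80, 0.85], [0.60, 0.70]])   # hypothetical opposed-phase voxel signals
    print(two_point_dixon_fat_fraction(ip, op))
    ```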

  7. The flux-coordinate independent approach applied to X-point geometries

    International Nuclear Information System (INIS)

    Hariri, F.; Hill, P.; Ottaviani, M.; Sarazin, Y.

    2014-01-01

    A Flux-Coordinate Independent (FCI) approach for anisotropic systems, not based on magnetic flux coordinates, has been introduced in Hariri and Ottaviani [Comput. Phys. Commun. 184, 2419 (2013)]. In this paper, we show that the approach can tackle magnetic configurations including X-points. Using the code FENICIA, an equilibrium with a magnetic island has been used to show the robustness of the FCI approach to cases in which a magnetic separatrix is present in the system, either by design or as a consequence of instabilities. Numerical results are in good agreement with the analytic solutions of the sound-wave propagation problem. Conservation properties are verified. Finally, the critical gain of the FCI approach in situations including the magnetic separatrix with an X-point is demonstrated by a fast convergence of the code with the numerical resolution in the direction of symmetry. The results highlighted in this paper show that the FCI approach can efficiently deal with X-point geometries

  8. LSAH: a fast and efficient local surface feature for point cloud registration

    Science.gov (United States)

    Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi

    2018-04-01

    Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density and varying point cloud resolutions are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram, each sub-histogram created by accumulating a different type of angle from a local surface patch. The experimental results show that our LSAH is more robust to uneven point density and varying point cloud resolutions than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration. The experimental results demonstrate that our algorithm is robust and efficient as well.

  9. Bright point study

    International Nuclear Information System (INIS)

    Tang, F.; Harvey, K.; Bruner, M.; Kent, B.; Antonucci, E.

    1982-01-01

    Transition region and coronal observations of bright points by instruments aboard the Solar Maximum Mission and high-resolution photospheric magnetograph observations on September 11, 1980 are presented. A total of 31 bipolar ephemeral regions were found in the photosphere from birth during 9.3 hours of combined magnetograph observations from three observatories. Two of the three ephemeral regions present in the field of view of the Ultraviolet Spectrometer-Polarimeter were observed in the C IV 1548 line. The unobserved ephemeral region was determined to be the shortest-lived (2.5 hr) and lowest in magnetic flux density (13 G) of the three regions. The Flat Crystal Spectrometer observed only low-level signals in the O VIII 18.969 A line, which were not statistically significant enough to be positively identified with any of the 16 ephemeral regions detected in the photosphere. In addition, the data indicate that at any given time there was no one-to-one correspondence between observable bright points and photospheric ephemeral regions, while more ephemeral regions were observed than their counterparts in the transition region and the corona

  10. Inflection point inflation and time dependent potentials in string theory

    International Nuclear Information System (INIS)

    Itzhaki, Nissan; Kovetz, Ely D.

    2007-01-01

    We consider models of inflection point inflation. The main drawback of such models is that they suffer from the overshoot problem: namely, the initial conditions must be fine-tuned to lie near the inflection point for the universe to inflate. We show that stringy realizations of inflection point inflation are common and offer a natural resolution to the overshoot problem

  11. Dynamics and mission design near libration points

    CERN Document Server

    Gómez, G; Llibre, J; Martínez, R

    2001-01-01

    It is well known that the restricted three-body problem has triangular equilibrium points. These points are linearly stable for values of the mass parameter, µ, below Routh's critical value, µ1. It is also known that in the spatial case they are nonlinearly stable, not for all the initial conditions in a neighborhood of the equilibrium points L4, L5 but for a set of relatively large measure. This follows from the celebrated Kolmogorov-Arnold-Moser theorem. In fact there are neighborhoods of computable size for which one obtains "practical stability" in the sense that the massless partic

  12. Tilted Light Sheet Microscopy with 3D Point Spread Functions for Single-Molecule Super-Resolution Imaging in Mammalian Cells.

    Science.gov (United States)

    Gustavsson, Anna-Karin; Petrov, Petar N; Lee, Maurice Y; Shechtman, Yoav; Moerner, W E

    2018-02-01

    To obtain a complete picture of subcellular nanostructures, cells must be imaged with high resolution in all three dimensions (3D). Here, we present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet, and the light sheet can therefore be formed using simple optics. The result is flexible and user-friendly 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSF for fiducial bead tracking and live axial drift correction. We envision TILT3D to become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

  13. Te and ne profiles on JFT-2M plasma with the highest spatial resolution TV Thomson scattering system

    International Nuclear Information System (INIS)

    Yamauchi, T.

    1993-01-01

    A high spatial resolution TV Thomson scattering system was constructed on the JFT-2M tokamak. This system is similar to those used at PBX-M and TFTR. These systems provide complete profiles of Te and ne at a single time during a plasma discharge. The characteristics of the JFT-2M TVTS are as follows: 1. The measured points comprise not only 81 points for the scattered light and plasma light, whose time difference is 2 ms, but also 10 points for plasma light measured at the same time as the scattered light. 2. The spatial resolution is 0.86 cm, which is higher than that of any other Thomson scattering system. 3. The sensitivity of the detector, composed of image intensifier tubes and a CCD, is as high as that of a photomultiplier tube. Te and ne profiles have been measured over one year on JFT-2M. The measured line-averaged electron density was in the range of 5x10^12 cm^-3 - 7x10^13 cm^-3 and the measured electron temperature was in the range of 50 eV - 1.2 keV. (author) 7 refs., 7 figs., 1 tab

  14. Monitoring hillslope moisture dynamics with surface ERT for enhancing spatial significance of hydrometric point measurements

    Science.gov (United States)

    Hübner, R.; Heller, K.; Günther, T.; Kleber, A.

    2015-01-01

    Besides floodplains, hillslopes are basic units that mainly control water movement and flow pathways within catchments of subdued mountain ranges. The structure of their shallow subsurface affects the water balance, e.g. infiltration, retention, and runoff. Nevertheless, there is still a gap in the knowledge of the hydrological dynamics on hillslopes, notably due to the lack of generalization and transferability. This study presents a robust multi-method framework of electrical resistivity tomography (ERT) in addition to hydrometric point measurements, transferring hydrometric data to higher spatial scales to obtain additional patterns of the distribution and dynamics of soil moisture on a hillslope. Geoelectrical monitoring in a small catchment in the eastern Ore Mountains was carried out at weekly intervals from May to December 2008 to image seasonal moisture dynamics at the hillslope scale. To link water content and electrical resistivity, the parameters of Archie's law were determined using different core samples. To optimize inversion parameters and methods, the derived spatial and temporal water content distribution was compared to tensiometer data. The results from the ERT measurements show a strong correlation with the hydrometric data. The response is congruent with the soil tension data. Water content calculated from the ERT profile shows variations similar to those of the water content from the soil moisture sensors. Consequently, soil moisture dynamics at the hillslope scale may be determined not only by expensive, invasive, point-scale hydrometric measurements, but also by minimally invasive time-lapse ERT, provided that pedo-/petrophysical relationships are known. Since ERT integrates larger spatial scales, a combination with hydrometric point measurements improves the understanding of the ongoing hydrological processes and is better suited to identifying heterogeneities.
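    The resistivity-to-moisture conversion via Archie's law mentioned above can be sketched as follows. The petrophysical parameters (a, m, n), porosity and pore-water resistivity below are illustrative values, not the core-sample calibration from the study.

    ```python
    import numpy as np

    def water_content_from_resistivity(rho, rho_w=20.0, phi=0.35, a=1.0, m=1.8, n=2.0):
        """Volumetric water content theta = phi * S, with saturation S from Archie's law."""
        saturation = (a * rho_w / (rho * phi ** m)) ** (1.0 / n)
        return phi * np.clip(saturation, 0.0, 1.0)

    rho = np.array([150.0, 400.0, 900.0])   # inverted bulk resistivities (ohm m), one per model cell
    print(water_content_from_resistivity(rho))
    ```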

  15. Wrox SharePoint 2010 SharePoint911 three-pack

    CERN Document Server

    Klindt, Todd; Mason, Jennifer; Rogers, Laura; Drisgill, Randy; Ross, John; Riemann, Larry; Perran, Amanda; Perran, Shane; Sanford, Jacob J; Stubbs, Paul; Caravajal, Steve

    2012-01-01

    The Wrox SharePoint 2010 SharePoint911 Three-Pack combines the contents of three full e-books written by the experts from SharePoint911.  That's over 1800 pages of hands-on advice from Todd Klindt, Shane Young, Laura Rogers, Randy Drisgill, Jennifer Mason, John Ross, and Larry Riemann, among others. In Beginning SharePoint 2010: Building Business Solutions with SharePoint (ISBN 978-0-470-61789-2) by Amanda Perran, Shane Perran, Jennifer Mason, and Laura Rogers, readers learn the core concepts, terminology, and features of SharePoint 2010. In Professiona

  16. Beginning SharePoint 2010 Building Business Solutions with SharePoint

    CERN Document Server

    Perran, Amanda; Mason, Jennifer; Rogers, Laura

    2010-01-01

    Two SharePoint MVPs provide the ultimate introduction to SharePoint 2010Beginning SharePoint 2010: Building Team Solutions with SharePoint provides information workers and site managers with extensive knowledge and expert advice, empowering them to become SharePoint champions within their organizations.Provides expansive coverage of SharePoint topics, as well as specialty areas such as forms, excel services, records management, and web content managementDetails realistic usage scenarios, and includes practice examples that highlight best practices for configuration and customizationIncludes de

  17. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    International Nuclear Information System (INIS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-01-01

    A laser point cloud contains only intensity information, so color information for visual interpretation must be obtained from another sensor. Cameras can provide texture, color, and other information about the corresponding objects. Points colored with the corresponding pixels of digital images can be used to generate a color point cloud, which aids the visualization, classification and modeling of point clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), so the principles and processes for generating a color point cloud differ between systems. The most prominent feature of panoramic images is their 360-degree field of view in the horizontal direction, which captures as much image information around the camera as possible. In this paper, we introduce a method to generate a color point cloud from a panoramic image and a laser point cloud, and derive the equations for the correspondence between points in panoramic images and points in the laser point cloud. The fusion of the panoramic image and the laser point cloud is based on the collinearity of three points (the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point). The experimental results show that the proposed algorithm and formulae in this paper are correct
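    The collinearity described above (center of the omnidirectional multi-camera system, image point on the sphere, object point) can be sketched as a simple colorization routine. The equirectangular pixel mapping and the synthetic inputs below are assumptions for illustration only, not the exact formulae derived in the paper.

```python
import numpy as np

def colorize_points(points_world, R, camera_center, pano_rgb):
    """Assign panoramic-image colors to laser points via the collinearity of
    camera center, point on the unit sphere, and object point.

    points_world : (N, 3) laser points in the mapping frame
    R            : (3, 3) rotation from mapping frame to panoramic camera frame
    camera_center: (3,)   camera center in the mapping frame
    pano_rgb     : (H, W, 3) equirectangular panoramic image
    """
    h, w, _ = pano_rgb.shape
    d = (points_world - camera_center) @ R.T          # directions in the camera frame
    d /= np.linalg.norm(d, axis=1, keepdims=True)     # image points on the unit sphere
    lon = np.arctan2(d[:, 1], d[:, 0])                # [-pi, pi]
    lat = np.arcsin(np.clip(d[:, 2], -1.0, 1.0))      # [-pi/2, pi/2]
    col = ((lon + np.pi) / (2 * np.pi) * (w - 1)).round().astype(int)
    row = ((np.pi / 2 - lat) / np.pi * (h - 1)).round().astype(int)
    return pano_rgb[row, col]                         # (N, 3) colors

# Tiny synthetic example
pano = np.random.randint(0, 255, (512, 1024, 3), dtype=np.uint8)
pts = np.array([[10.0, 2.0, 1.5], [-4.0, 3.0, 0.2]])
colors = colorize_points(pts, np.eye(3), np.zeros(3), pano)
print(colors)
```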

  18. Point process-based modeling of multiple debris flow landslides using INLA: an application to the 2009 Messina disaster

    KAUST Repository

    Lombardo, Luigi

    2018-02-13

    We develop a stochastic modeling approach based on spatial point processes of log-Gaussian Cox type for a collection of around 5000 landslide events provoked by a precipitation trigger in Sicily, Italy. Through the embedding into a hierarchical Bayesian estimation framework, we can use the integrated nested Laplace approximation methodology to make inference and obtain the posterior estimates of spatially distributed covariate and random effects. Several mapping units are useful to partition a given study area in landslide prediction studies. These units hierarchically subdivide the geographic space from the highest grid-based resolution to the stronger morphodynamic-oriented slope units. Here we integrate both mapping units into a single hierarchical model, by treating the landslide triggering locations as a random point pattern. This approach diverges fundamentally from the unanimously used presence–absence structure for areal units since we focus on modeling the expected landslide count jointly within the two mapping units. Predicting this landslide intensity provides more detailed and complete information as compared to the classically used susceptibility mapping approach based on relative probabilities. To illustrate the model’s versatility, we compute absolute probability maps of landslide occurrences and check their predictive power over space. While the landslide community typically produces spatial predictive models for landslides only in the sense that covariates are spatially distributed, no actual spatial dependence has been explicitly integrated so far. Our novel approach features a spatial latent effect defined at the slope unit level, allowing us to assess the spatial influence that remains unexplained by the covariates in the model. For rainfall-induced landslides in regions where the raingauge network is not sufficient to capture the spatial distribution of the triggering precipitation event, this latent effect provides valuable imaging support

  19. Spatial Point Pattern Analysis of Human Settlements and Geographical Associations in Eastern Coastal China — A Case Study

    Directory of Open Access Journals (Sweden)

    Zhonghao Zhang

    2014-03-01

    Full Text Available Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley’s K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to characteristics of human settlements such as area, density and shape, should be taken into consideration in the future, and that land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be a key to human settlement studies and planning.
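    As a rough sketch of the Ripley's K / Monte Carlo procedure used above, the code below estimates K for a point pattern in a rectangular window and compares it with an envelope simulated under complete spatial randomness. Edge corrections are omitted and the coordinates are synthetic, not the Wen-Tai settlement data.

```python
import numpy as np

def ripley_k(points, r_values, area):
    """Naive estimate of Ripley's K in a rectangular window (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    lam = n / area
    return np.array([(d < r).sum() / (lam * n) for r in r_values])

def csr_envelope(n, r_values, width, height, n_sim=99, seed=0):
    """Monte Carlo envelope of K under complete spatial randomness (CSR)."""
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(n_sim):
        pts = rng.uniform([0, 0], [width, height], size=(n, 2))
        sims.append(ripley_k(pts, r_values, width * height))
    sims = np.array(sims)
    return sims.min(axis=0), sims.max(axis=0)

# Hypothetical settlement coordinates (km) in a 10 km x 10 km window
rng = np.random.default_rng(1)
settlements = rng.uniform(0, 10, size=(80, 2))
r = np.linspace(0.2, 2.5, 12)
k_obs = ripley_k(settlements, r, area=100.0)
lo, hi = csr_envelope(len(settlements), r, 10.0, 10.0)
for ri, ko, klo, khi in zip(r, k_obs, lo, hi):
    label = "clustered" if ko > khi else ("regular" if ko < klo else "random")
    print(f"r={ri:4.2f}  K={ko:7.2f}  CSR envelope=({klo:6.2f}, {khi:6.2f})  -> {label}")
```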

  20. Investigation of spatial resolution characteristics of an in vivo microcomputed tomography system

    Energy Technology Data Exchange (ETDEWEB)

    Ghani, Muhammad U. [Center for Biomedical engineering and School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019 (United States); Zhou, Zhongxing [Center for Biomedical engineering and School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019 (United States); School of Precision and Optoelectronics Engineering, Tianjin University, Tianjin 300072 (China); Ren, Liqiang; Wong, Molly; Li, Yuhua; Zheng, Bin [Center for Biomedical engineering and School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019 (United States); Yang, Kai [Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114 (United States); Liu, Hong, E-mail: liu@ou.edu [Center for Biomedical engineering and School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019 (United States)

    2016-01-21

    The spatial resolution characteristics of an in vivo microcomputed tomography (CT) system were investigated in the in-plane (x–y), cross-plane (z) and projection imaging modes. The microCT system utilized in this study employs a flat panel detector with a 127 µm pixel pitch and a microfocus x-ray tube with a focal spot size ranging from 5–30 µm, and accommodates three geometric magnifications (M) of 1.72, 2.54 and 5.10. The in-plane modulation transfer function (MTF) curves were measured as a function of the number of projections, geometric magnification (M), detector binning and reconstruction magnification (M_Recon). The in-plane cutoff frequency (10% MTF) ranged from 2.31 lp/mm (M=1.72, 2×2 binning) to 12.56 lp/mm (M=5.10, 1×1 binning), and a bar pattern phantom validated those measurements. A slight degradation in the spatial resolution was observed when comparing image reconstructions with 511 and 918 projections, the effect being visible at the lower frequencies. Small values of M_Recon had little or no impact on the in-plane spatial resolution owing to the stability of the system. Large values of M_Recon affected the spatial resolution, as was evident when comparing the bar pattern images reconstructed with M_Recon=1.25 and 2.5. The cross-plane MTF curves showed that the spatial resolution increased as the slice thickness decreased. The cutoff frequencies in the projection imaging mode were slightly higher than those of the in-plane and cross-plane modes at all geometric magnifications (M). At M=5.10, the cutoff resolutions of the projection and cross-plane modes on an ultra-high contrast resolution bar chip phantom were 14.9 lp/mm and 13–13.5 lp/mm, respectively. Due to the finite focal spot size of the x-ray tube, the detector blur and the reconstruction kernel functions, the system's spatial resolution does not reach the limiting spatial resolution defined by the detector's Nyquist criterion for an ideal point source
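    The closing remark about the detector-limited (Nyquist) resolution can be made concrete with a small calculation. Assuming the common definition f_Nyquist = M / (2 × effective pixel pitch), the measured 10% MTF cutoffs quoted above (2.31 and 12.56 lp/mm) indeed fall below the detector-limited values:

```python
def nyquist_limit_lp_per_mm(pixel_pitch_um, magnification, binning=1):
    """Detector-limited (Nyquist) spatial frequency referred to the object plane:
    f_Nyquist = M / (2 * effective pixel pitch)."""
    pitch_mm = pixel_pitch_um * binning / 1000.0
    return magnification / (2.0 * pitch_mm)

# Settings quoted in the abstract: 127 um pitch, M = 1.72 with 2x2 binning and
# M = 5.10 with 1x1 binning; the measured cutoffs were 2.31 and 12.56 lp/mm.
print(nyquist_limit_lp_per_mm(127, 1.72, binning=2))   # ~3.4 lp/mm detector limit
print(nyquist_limit_lp_per_mm(127, 5.10, binning=1))   # ~20.1 lp/mm detector limit
```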

  1. Network Competition - the Coexistence of Hub-and-Spoke and Point-to-Point Systems

    NARCIS (Netherlands)

    Alderighi, M.; Cento, A.; Nijkamp, P.; Rietveld, P.

    2005-01-01

    The paper identifies conditions under which asymmetric equilibria may exist when carriers compete in designing their network configurations in a game-theoretical framework. Two carriers are assumed here, which are allowed to play three different strategies: Point-to-point (PP), hub-and-spoke (HS) or

  2. A method to analyze “source–sink” structure of non-point source pollution based on remote sensing technology

    International Nuclear Information System (INIS)

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-01-01

    To provide a scientific basis for environmental planning for non-point source pollution prevention and control, and to improve the efficiency of pollution regulation, this paper establishes the Grid Landscape Contrast Index, based on the Location-weighted Landscape Contrast Index and the “source–sink” theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary was then derived from high-resolution remote sensing images. The results showed that the “source” area of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km² in 2008, and the “sink” area was 172.06 km². The “source” of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima, while the “sink” was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of the “source” weakens with increasing distance from the sea boundary, while the “sink” strengthens. -- Highlights: •We built an index to study the “source–sink” structure of non-point source pollution at a spatial scale. •The index was applied to the Jiulongjiang Estuary with good results. •The study helps discern high-load areas of non-point source pollution. -- The “source–sink” structure of non-point source nitrogen and phosphorus pollution in the Jiulongjiang Estuary, China, was derived with the Grid Landscape Contrast Index

  3. Flow speed measurement using two-point collective light scattering

    Energy Technology Data Exchange (ETDEWEB)

    Heinemeier, N.P

    1998-09-01

    Measurements of turbulence in plasmas and fluids using the technique of collective light scattering have always been plagued by very poor spatial resolution. In 1994, a novel two-point collective light scattering system for the measurement of transport in a fusion plasma was proposed. This diagnostic was designed to greatly improve the spatial resolution without sacrificing accuracy in the velocity measurement. The system was installed at the W7-AS stellarator in Garching, Germany, in 1996, and has been operating since. This master thesis investigates the possible application of this new method to the measurement of flow speeds in normal fluids, in particular air, although the results presented in this work have significance for the plasma measurements as well. The main goal of the project was the experimental verification of previous theoretical predictions. However, the theoretical considerations presented in the thesis show that the method can only be expected to work for flows that are almost laminar and shearless, which makes it of little practical interest. Furthermore, this result also implies that the diagnostic at W7-AS cannot be expected to give the results originally hoped for. (au) 1 tab., 51 ills., 29 refs.

  4. SMART POINT CLOUD: DEFINITION AND REMAINING CHALLENGES

    Directory of Open Access Journals (Sweden)

    F. Poux

    2016-10-01

    Full Text Available Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies the remaining challenges for a new data structure: the smart point cloud. This concept arises from the observation that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, together with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new flexible three-block framework around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  5. Spectrum of classes of point emitters of electromagnetic wave fields.

    Science.gov (United States)

    Castañeda, Román

    2016-09-01

    The spectrum of classes of point emitters has been introduced as a numerical tool suitable for the design, analysis, and synthesis of non-paraxial optical fields in arbitrary states of spatial coherence. In this paper, the polarization state of planar electromagnetic wave fields is included in the spectrum of classes, thus increasing its modeling capabilities. In this context, optical processing is realized as a filtering on the spectrum of classes of point emitters, performed by the complex degree of spatial coherence and the two-point correlation of polarization, which could be implemented dynamically by using programmable optical devices.

  6. Determine point-to-point networking interactions using regular expressions

    Directory of Open Access Journals (Sweden)

    Konstantin S. Deev

    2015-06-01

    Full Text Available As the Internet grows and becomes more popular, the number of concurrent data flows increases, and so does the bandwidth required. Providers and corporate customers need the ability to identify point-to-point interactions. The best approach is to use dedicated software and hardware implementations that distribute the load internally, using, in particular, the principles and approaches described in this paper. This paper presents the principles of building a system that searches for regular expression matches using computation on the graphics adapter of a server station. The significant computing power and capability for parallel execution of modern graphics processors allow large amounts of data to be inspected against sets of rules. Using these characteristics can increase computing throughput by a factor of 30–40 compared to the same setup on the central processing unit. The potential increase in processing capacity could be used in systems that provide packet analysis, firewalls and network anomaly detectors.

  7. Dynamic Raman imaging system with high spatial and temporal resolution

    Science.gov (United States)

    Wang, Lei; Dai, Yinzhen; He, Hao; Lv, Ruiqi; Zong, Cheng; Ren, Bin

    2017-09-01

    There is an increasing need to study dynamically changing systems with significantly high spatial and temporal resolution. In this work, we integrated point-scanning, line-scanning, and wide-field Raman imaging techniques into a single system. By using an Electron Multiplying CCD (EMCCD) with a high gain and high frame rate, we significantly reduced the time required for wide-field imaging, making it possible to monitor electrochemical reactions in situ. The highest frame rate of the EMCCD was ˜50 fps, and Raman images for a specific Raman peak can be obtained by passing the signal from the sample through the Liquid Crystal Tunable Filter. The spatial resolutions of scanning imaging and wide-field imaging with a 100× objective (NA = 0.9) are 0.5 × 0.5 μm² and 0.36 × 0.36 μm², respectively. The system was used to study the surface plasmon resonance of Au nanorods, the surface-enhanced Raman scattering signal distribution of Au nanoparticle aggregates, and dynamic Raman imaging of an electrochemically reacting system.

  8. Chandra ACIS Sub-pixel Resolution

    Science.gov (United States)

    Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.

    2011-05-01

    We investigate how to achieve the best possible ACIS spatial resolution by binning at the ACIS sub-pixel level and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources embedded in the extended, diffuse emission of a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure through aliasing effects for dithered observations and does not worsen the positional accuracy.

  9. On the estimation of the spherical contact distribution Hs(y) for spatial point processes

    International Nuclear Information System (INIS)

    Doguwa, S.I.

    1990-08-01

    Ripley (1977, Journal of the Royal Statistical Society B 39, 172-212) proposed an estimator for the spherical contact distribution H_s(y) of a spatial point process observed in a bounded planar region. However, this estimator is not defined for some distances of interest in this bounded region. A new estimator for H_s(y) is proposed for use with a regular grid of sampling locations. This new estimator is defined for all distances of interest. It also appears to have a smaller bias and a smaller mean squared error than the previously suggested alternative. (author). 11 refs, 4 figs, 1 tab
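    A naive sketch of estimating a spherical contact (empty-space) distribution from a regular grid of sampling locations is given below. It applies no edge correction, so it is neither Ripley's estimator nor the one proposed in the paper; the point pattern is synthetic.

```python
import numpy as np

def empty_space_function(events, grid_pts, y_values):
    """Naive estimate of the spherical contact distribution H_s(y): the fraction of
    sampling locations whose nearest event lies within distance y (no edge correction)."""
    d = np.linalg.norm(grid_pts[:, None, :] - events[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    return np.array([(nearest <= y).mean() for y in y_values])

rng = np.random.default_rng(0)
events = rng.uniform(0, 1, size=(60, 2))                  # observed point pattern
gx, gy = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])          # regular sampling grid
y = np.linspace(0.0, 0.2, 9)
print(np.round(empty_space_function(events, grid, y), 3))
```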

  10. A proton point source produced by laser interaction with cone-top-end target

    International Nuclear Information System (INIS)

    Yu, Jinqing; Jin, Xiaolin; Zhou, Weimin; Zhao, Zongqing; Yan, Yonghong; Li, Bin; Hong, Wei; Gu, Yuqiu

    2012-01-01

    In this paper, we propose a proton point source produced by the interaction of a laser with a cone-top-end target and investigate it with two-dimensional particle-in-cell (2D-PIC) simulations; proton point sources are of interest because they offer higher spatial resolution in proton radiography. Our results show that the relativistic electrons are guided to the rear of the cone-top-end target by the electrostatic charge-separation field and the self-generated magnetic field along the profile of the target. As a result, the peak magnitude of the sheath field at the rear surface of the cone-top-end target is higher than that of a common cone target. We test this scheme by 2D-PIC simulation and find that the resulting source has a diameter of 0.79λ₀, an average energy of 9.1 MeV and an energy spread of less than 35%.

  11. Tilted light sheet microscopy with 3D point spread functions for single-molecule super-resolution imaging in mammalian cells

    Science.gov (United States)

    Gustavsson, Anna-Karin; Petrov, Petar N.; Lee, Maurice Y.; Shechtman, Yoav; Moerner, W. E.

    2018-02-01

    To obtain a complete picture of subcellular nanostructures, cells must be imaged with high resolution in all three dimensions (3D). Here, we present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet, and the light sheet can therefore be formed using simple optics. The result is flexible and user-friendly 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D superresolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSF for fiducial bead tracking and live axial drift correction. We envision TILT3D to become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

  12. Critical Point in Self-Organized Tissue Growth

    Science.gov (United States)

    Aguilar-Hidalgo, Daniel; Werner, Steffen; Wartlick, Ortrud; González-Gaitán, Marcos; Friedrich, Benjamin M.; Jülicher, Frank

    2018-05-01

    We present a theory of pattern formation in growing domains inspired by biological examples of tissue development. Gradients of signaling molecules regulate growth, while growth changes these graded chemical patterns by dilution and advection. We identify a critical point of this feedback dynamics, which is characterized by spatially homogeneous growth and proportional scaling of patterns with tissue length. We apply this theory to the biological model system of the developing wing of the fruit fly Drosophila melanogaster and quantitatively identify signatures of the critical point.

  13. Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators using Generalized Pattern Search

    Directory of Open Access Journals (Sweden)

    Thi Rein Myo

    2008-11-01

    Full Text Available Optimal point-to-point trajectory planning for a planar redundant manipulator is considered in this study. The main objective is to minimize the sum of the position errors of the end-effector at each intermediate point along the trajectory, so that the end-effector can track the prescribed trajectory accurately. An algorithm combining a Genetic Algorithm and Pattern Search into a Generalized Pattern Search (GPS) is introduced to design the optimal trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar manipulator with different end-effector trajectories have been carried out. A comparison between the Genetic Algorithm and the Generalized Pattern Search shows that the GPS gives excellent tracking performance.
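    The GPS in the paper combines a Genetic Algorithm with Pattern Search; the sketch below shows only a plain compass-style pattern search, applied to the toy problem of driving the end-effector of a 3-link planar arm to a single hypothetical via-point, to illustrate the poll-and-shrink idea.

```python
import numpy as np

def end_effector(thetas, link_lengths=(1.0, 1.0, 1.0)):
    """Forward kinematics of a planar 3-link (redundant) manipulator."""
    angles = np.cumsum(thetas)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def pattern_search(cost, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Basic compass/pattern search: poll +/- step along each coordinate,
    accept any improving point, otherwise shrink the mesh."""
    x, fx = np.asarray(x0, float), cost(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = cost(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

target = np.array([1.2, 1.7])                     # hypothetical via-point
cost = lambda q: np.linalg.norm(end_effector(q) - target)
q_opt, err = pattern_search(cost, x0=np.zeros(3))
print(q_opt, err)
```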

  14. Spatial resolution requirements for traffic-related air pollutant exposure evaluations

    Science.gov (United States)

    Batterman, Stuart; Chambliss, Sarah; Isakov, Vlad

    2014-09-01

    Vehicle emissions represent one of the most important air pollution sources in most urban areas, and elevated concentrations of pollutants found near major roads have been associated with many adverse health impacts. To understand these impacts, exposure estimates should reflect the spatial and temporal patterns observed for traffic-related air pollutants. This paper evaluates the spatial resolution and zonal systems required to estimate accurately intraurban and near-road exposures of traffic-related air pollutants. The analyses use the detailed information assembled for a large (800 km2) area centered on Detroit, Michigan, USA. Concentrations of nitrogen oxides (NOx) due to vehicle emissions were estimated using hourly traffic volumes and speeds on 9700 links representing all but minor roads in the city, the MOVES2010 emission model, the RLINE dispersion model, local meteorological data, a temporal resolution of 1 h, and spatial resolution as low as 10 m. Model estimates were joined with the corresponding shape files to estimate residential exposures for 700,000 individuals at property parcel, census block, census tract, and ZIP code levels. We evaluate joining methods, the spatial resolution needed to meet specific error criteria, and the extent of exposure misclassification. To portray traffic-related air pollutant exposure, raster or inverse distance-weighted interpolations are superior to nearest neighbor approaches, and interpolations between receptors and points of interest should not exceed about 40 m near major roads, and 100 m at larger distances. For census tracts and ZIP codes, average exposures are overestimated since few individuals live very near major roads, the range of concentrations is compressed, most exposures are misclassified, and high concentrations near roads are entirely omitted. While smaller zones improve performance considerably, even block-level data can misclassify many individuals. To estimate exposures and impacts of traffic
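    The inverse-distance-weighted interpolation recommended above can be sketched in a few lines. The receptor layout, concentrations and parcel locations below are synthetic placeholders, not the Detroit data.

```python
import numpy as np

def idw_interpolate(receptor_xy, receptor_conc, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of receptor concentrations to
    arbitrary points of interest (e.g. residential parcels)."""
    d = np.linalg.norm(query_xy[:, None, :] - receptor_xy[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w * receptor_conc).sum(axis=1) / w.sum(axis=1)

# Hypothetical NOx receptor grid (metres, ppb) near a road located at x = 0
rng = np.random.default_rng(0)
receptors = rng.uniform([-200, -200], [200, 200], size=(300, 2))
conc = 40.0 * np.exp(-np.abs(receptors[:, 0]) / 80.0) + 5.0   # decays away from the road
parcels = np.array([[15.0, 10.0], [120.0, -60.0]])
print(idw_interpolate(receptors, conc, parcels))
```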

  15. Address Points - COUNTY_ADDRESS_POINTS_IDHS_IN: Address Points Maintained by County Agencies in Indiana (Indiana Department of Homeland Security, Point feature class)

    Data.gov (United States)

    NSGIC State | GIS Inventory — COUNTY_ADDRESS_POINTS_IDHS_IN is an ESRI Geodatabase point feature class that contains address points maintained by county agencies in Indiana, provided by personnel...

  16. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  17. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  18. Ecohydrology and tipping points in semiarid australian rangelands

    Science.gov (United States)

    Saco, P. M.; Azadi, S.; Moreno de las Heras, M.; Willgoose, G. R.

    2017-12-01

    Semiarid landscapes are often characterised by a spatially heterogeneous vegetation cover forming mosaics of densely vegetated patches within bare soil. This patchy vegetation cover, which is linked to the healthy functioning of these ecosystems, is sensitive to human disturbances that can lead to degradation. Previous work suggests that vegetation loss below a critical value can lead to a sudden decrease in landscape functionality following threshold behaviour. The decrease in vegetation cover is linked to erosion and substantial water losses through increasing landscape hydrological connectivity. We study these interactions and the possible existence of tipping points in the Mulga land bioregion by combining remote sensing observations and results from an eco-geomorphologic model to investigate changes in ecosystem connectivity and the existence of threshold behaviour. More than 30 sites were selected along a precipitation gradient spanning a range from approximately 250 to 500 mm annual rainfall. The analysis of vegetation patterns is derived from high resolution remote sensing images (IKONOS, QuickBird, Pleiades) and MODIS NDVI, which, combined with local precipitation data, is used to compute rainfall use efficiency to assess ecosystem function. A critical tipping point associated with the loss of vegetation cover appears at the sites with lower annual precipitation. We found that this tipping point behaviour weakens for sites with higher rainfall. We use the model to investigate the relation between structural and functional connectivity and the emergence of threshold behaviour for selected plots along this precipitation gradient. Both observations and modelling results suggest that sites with higher rainfall are more resilient to changes in surface connectivity. The implications for ecosystem resilience and land management are discussed

  19. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    Science.gov (United States)

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality of a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with the B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of the B-TPTV PET with that of the microPET R4. When True X reconstruction was used, the spatial resolution improved, and the RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. The SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45 %, and the % contrast was significantly improved compared with the conventional OSEM without PSF modeling reconstruction algorithm. The noise level, however, was higher than with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.

  20. Material-Point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2007-01-01

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...
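    The point about negative shape functions can be checked numerically: 1D quadratic Lagrange shape functions dip below zero between nodes, whereas uniform cubic B-spline weights remain non-negative. This is a generic sketch, not the paper's implementation.

```python
import numpy as np

def quadratic_lagrange(xi):
    """1D quadratic Lagrange shape functions on [-1, 1] (nodes at -1, 0, 1)."""
    return np.array([0.5 * xi * (xi - 1.0), 1.0 - xi**2, 0.5 * xi * (xi + 1.0)])

def cubic_bspline(r):
    """Uniform cubic B-spline weight as a function of |x - x_node| / h."""
    r = abs(r)
    if r < 1.0:
        return 2.0 / 3.0 - r**2 + 0.5 * r**3
    if r < 2.0:
        return (2.0 - r) ** 3 / 6.0
    return 0.0

xi = np.linspace(-1, 1, 9)
print("min quadratic shape value:", quadratic_lagrange(xi).min())                     # negative
print("min cubic spline value  :", min(cubic_bspline(r) for r in np.linspace(0, 2, 9)))  # >= 0
```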

  1. Hardware-accelerated Point Generation and Rendering of Point-based Impostors

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    This paper presents a novel scheme for generating points from triangle models. The method is fast and lends itself well to implementation using graphics hardware. The triangle to point conversion is done by rendering the models, and the rendering may be performed procedurally or by a black box API....... I describe the technique in detail and discuss how the generated point sets can easily be used as impostors for the original triangle models used to create the points. Since the points reside solely in GPU memory, these impostors are fairly efficient. Source code is available online....

  2. December 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data (latitude and longitude) from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs)...

  3. April 2004 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data (Geodetic Coordinates) from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from...

  4. March 2003 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data (Geodetic Coordinates) from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from...

  5. EXTRACTION OF BUILDING BOUNDARY LINES FROM AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y.-H. Tseng

    2016-10-01

    Full Text Available Building boundary lines are important spatial features that characterize topographic maps and three-dimensional (3D) city models. Airborne LiDAR point clouds provide adequate 3D spatial information for building boundary mapping. However, the boundary feature information contained in point clouds is implicit. This study focuses on developing an automatic algorithm for building boundary line extraction from airborne LiDAR data. In an airborne LiDAR dataset, the top surfaces of buildings, such as roofs, tend to have densely distributed points, but vertical surfaces, such as walls, usually have sparsely distributed points or even no points. The intersection lines of roof and wall planes are, therefore, not clearly defined in point clouds. This paper proposes a novel method to extract these boundary lines of building edges. The extracted line features can be used as fundamental data for generating topographic maps and 3D city models of an urban area. The proposed method includes two major steps. The first step is to extract building boundary points from the point clouds. The second step is to form building boundary line features based on the extracted boundary points. In this step, a line fitting algorithm is developed to improve the edge extraction from LiDAR data. Eight test objects, including 4 simple low buildings and 4 complicated tall buildings, were selected from the buildings on the NCKU campus. The test results demonstrate the feasibility of the proposed method in extracting complicated building boundary lines. Some results that are not as good as expected suggest the need for further improvement of the method.
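    The paper develops its own line-fitting algorithm; as a generic stand-in, the sketch below fits a total-least-squares line to extracted boundary points via PCA and reports the perpendicular residuals, using synthetic roof-edge points.

```python
import numpy as np

def fit_boundary_line(points_xy):
    """Total-least-squares line fit to extracted boundary points via PCA.
    Returns a point on the line and its unit direction vector."""
    centroid = points_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(points_xy - centroid)
    return centroid, vt[0]                # principal direction of the boundary points

def point_line_distances(points_xy, centroid, direction):
    """Perpendicular distances of points from the fitted line (for outlier checks)."""
    rel = points_xy - centroid
    proj = np.outer(rel @ direction, direction)
    return np.linalg.norm(rel - proj, axis=1)

# Hypothetical noisy roof-edge points (metres)
rng = np.random.default_rng(0)
t = rng.uniform(0, 20, 40)
edge = np.column_stack([t, 0.5 * t + 3.0]) + rng.normal(0, 0.05, (40, 2))
c, d = fit_boundary_line(edge)
print("direction:", d, " max residual:", point_line_distances(edge, c, d).max())
```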

  6. Application of change-point problem to the detection of plant patches.

    Science.gov (United States)

    López, I; Gámez, M; Garay, J; Standovár, T; Varga, Z

    2010-03-01

    In ecology, if the considered area or space is large, the spatial distribution of individuals of a given plant species is never homogeneous; plants form different patches. The change of homogeneity in space or in time (in particular, the related change-point problem) is an important research subject in mathematical statistics. In the paper, data collected along a straight line are considered, divided into two areas whose data come from different discrete distributions with unknown parameters. A method is presented for the estimation of the distribution change-point between the two areas, and an estimate is given for the distributions separated by the obtained change-point. The solution of this problem is based on the maximum likelihood method. Furthermore, based on an adaptation of the well-known bootstrap resampling, a method for the estimation of the so-called change-interval is also given. The latter approach is very general, since it not only applies to the maximum-likelihood estimation of the change-point, but can also be used starting from any other change-point estimate known in the ecological literature. The proposed model is validated against typical ecological situations, providing at the same time a verification of the applied algorithms.
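    A minimal sketch of the maximum-likelihood change-point estimate and the bootstrap change-interval is given below. It assumes Poisson-distributed quadrat counts along the transect (the paper treats general discrete distributions) and uses simulated data.

```python
import numpy as np
from scipy.stats import poisson

def ml_change_point(counts):
    """Maximum-likelihood change-point for a 1D count sequence, assuming Poisson
    distributions with different means on each side of the split."""
    counts = np.asarray(counts)
    best_k, best_ll = None, -np.inf
    for k in range(1, len(counts)):                    # split into [0, k) and [k, n)
        left, right = counts[:k], counts[k:]
        ll = (poisson.logpmf(left, left.mean()).sum()
              + poisson.logpmf(right, right.mean()).sum())
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

def bootstrap_change_interval(counts, n_boot=500, level=0.9, seed=0):
    """Bootstrap the change-point estimate to obtain a change-interval."""
    rng = np.random.default_rng(seed)
    n, k_hat = len(counts), ml_change_point(counts)
    boots = []
    for _ in range(n_boot):
        resampled = np.concatenate([
            rng.choice(counts[:k_hat], k_hat, replace=True),
            rng.choice(counts[k_hat:], n - k_hat, replace=True)])
        boots.append(ml_change_point(resampled))
    lo, hi = np.quantile(boots, [(1 - level) / 2, (1 + level) / 2])
    return k_hat, (int(lo), int(hi))

# Simulated quadrat counts along a transect: a sparse patch followed by a dense patch
rng = np.random.default_rng(1)
transect = np.concatenate([rng.poisson(1.0, 30), rng.poisson(5.0, 20)])
print(bootstrap_change_interval(transect))
```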

  7. Comparison of a UAV-derived point-cloud to Lidar data at Haig Glacier, Alberta, Canada

    Science.gov (United States)

    Bash, E. A.; Moorman, B.; Montaghi, A.; Menounos, B.; Marshall, S. J.

    2016-12-01

    The use of unmanned aerial vehicles (UAVs) is expanding rapidly in glaciological research as a result of technological improvements that make UAVs a cost-effective solution for collecting high resolution datasets with relative ease. The cost and difficult access traditionally associated with performing fieldwork in glacial environments make UAVs a particularly attractive tool. In the small, but growing, body of literature using UAVs in glaciology, the accuracy of UAV data is tested through the comparison of a UAV-derived DEM to measured control points. A field campaign combining simultaneous lidar and UAV flights over Haig Glacier in April 2015 provided the unique opportunity to directly compare UAV data to lidar. The UAV was a six-propeller Mikrokopter carrying a Panasonic Lumix DMC-GF1 camera with a 12 Megapixel Live MOS sensor and Lumix G 20 mm lens flown at a height of 90 m, resulting in sub-centimetre ground resolution per image pixel. Lidar data collection took place April 20, while UAV flights were conducted April 20-21. A set of 65 control points was laid out and surveyed on the glacier surface on April 19 and 21 using an RTK GPS with a vertical uncertainty of 5 cm. A direct comparison of lidar points to these control points revealed a 9 cm offset between the control points and the lidar points on average, but the offset differed distinctly between points collected on April 19 and those collected on April 21 (7 cm and 12 cm, respectively). Agisoft Photoscan was used to create a point cloud from imagery collected with the UAV, and CloudCompare was used to calculate the difference between this and the lidar point cloud, revealing an average difference of less than 17 cm. This field campaign also highlighted some of the benefits and drawbacks of using a rotary UAV for glaciological research. The vertical takeoff and landing capabilities, combined with quick responsiveness and higher carrying capacity, make the rotary vehicle favourable for high-resolution photos when

  8. Femtosecond few- to single-electron point-projection microscopy for nanoscale dynamic imaging

    Directory of Open Access Journals (Sweden)

    A. R. Bainbridge

    2016-03-01

    Full Text Available Femtosecond electron microscopy produces real-space images of matter in a series of ultrafast snapshots. Pulses of electrons self-disperse under space-charge broadening, so without compression, the ideal operation mode is a single electron per pulse. Here, we demonstrate femtosecond single-electron point projection microscopy (fs-ePPM) in a laser-pump fs-e-probe configuration. The electrons have an energy of only 150 eV and take tens of picoseconds to propagate to the object under study. Nonetheless, we achieve a temporal resolution with a standard deviation of 114 fs (equivalent to a full-width at half-maximum of 269 ± 40 fs) combined with a spatial resolution of 100 nm, applied to a localized region of charge at the apex of a nanoscale metal tip induced by 30 fs 800 nm laser pulses at 50 kHz. These observations demonstrate that real-space imaging of reversible processes, such as tracking charge distributions, is feasible whilst maintaining femtosecond resolution. Our findings could find application as a characterization method, which, depending on geometry, could resolve tens of femtoseconds and tens of nanometres. Dynamically imaging electric and magnetic fields and charge distributions on sub-micron length scales opens new avenues of ultrafast dynamics. Furthermore, through the use of active compression, such pulses are an ideal seed for few-femtosecond to attosecond imaging applications which will access sub-optical-cycle processes in nanoplasmonics.

  9. Forgotten Features of Head Zones and Their Relation to Diagnostically Relevant Acupuncture Points

    Directory of Open Access Journals (Sweden)

    Florian Beissner

    2011-01-01

    Full Text Available In the 1890s Sir Henry Head discovered certain areas of the skin that develop tenderness (allodynia) in the course of visceral disease. These areas were later termed “Head zones”. In addition, he also emphasized the existence of specific points within these zones, that he called “maximum points”, a finding that seems to be almost forgotten today. We hypothesized that two important groups of acupuncture points, the diagnostically relevant Mu and Shu points, spatially and functionally coincide with these maximum points to a large extent. A comparison of Head's papers with the Huang Di Neijing (Yellow Thearch's Inner Classic) and the Zhen Jiu Jia Yi Jing (Systematic Classic of Acupuncture and Moxibustion), two of the oldest still extant Chinese sources on acupuncture, revealed astonishing parallels between the two concepts regarding both point locations and functional aspects. These findings suggest that the Chinese discovery of viscerocutaneous reflexes preceded the discovery in the West by more than 2000 years. Furthermore, the fact that Chinese medicine uses Mu and Shu points not only diagnostically but also therapeutically may give us new insights into the underlying mechanisms of acupuncture.

  10. Merging LIDAR digital terrain model with direct observed elevation points for urban flood numerical simulation

    Science.gov (United States)

    Arrighi, Chiara; Campo, Lorenzo

    2017-04-01

    In recent years, concern about the economic losses and casualties caused by urban floods has grown hand in hand with the numerical capability to simulate such events. The large amount of computational power needed to address the problem (simulating a flood over complex terrain such as a medium-to-large city) is only one of the issues. Others include the general lack of exhaustive observations during an event (exact extent, dynamics, water levels reached in different parts of the involved area), which are needed for calibration and validation of the model, the need to account for sewer effects, and the availability of a correct and precise description of the geometry of the problem. In large cities topographic surveys generally provide a number of measured points, but a complete hydraulic simulation needs a detailed description of the terrain over the whole computational domain. LIDAR surveys can achieve this goal, providing a comprehensive description of the terrain, although they often lack precision. In this work an optimal merging of these two sources of geometric information, measured elevation points and a LIDAR survey, is proposed, taking into account the error variance of both. The procedure is applied to a flood-prone city over an area of approximately 35 square km, starting from a LIDAR DTM with a spatial resolution of 1 m and 13000 measured points. The spatial pattern of the error (LIDAR vs. points) is analysed, and the merging method is tested with a series of jackknife procedures that take into account different densities of the available points. A discussion of the results is provided.
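    Where a surveyed spot height and a LIDAR cell coincide, a merge that accounts for the error variance of both sources reduces to inverse-variance averaging. The sketch below illustrates only that local step with made-up variances; it is not the full spatially distributed merging procedure of the paper.

```python
def merge_elevations(z_lidar, var_lidar, z_points, var_points):
    """Inverse-variance (minimum-variance) merge of two elevation estimates at a
    location where both a LIDAR DTM value and a surveyed spot height exist."""
    w_l = 1.0 / var_lidar
    w_p = 1.0 / var_points
    z = (w_l * z_lidar + w_p * z_points) / (w_l + w_p)
    var = 1.0 / (w_l + w_p)
    return z, var

# Illustrative numbers: a noisy LIDAR cell vs. a precise surveyed point
z, v = merge_elevations(z_lidar=42.35, var_lidar=0.20**2,
                        z_points=42.10, var_points=0.05**2)
print(f"merged elevation = {z:.3f} m, std = {v**0.5:.3f} m")
```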

  11. Peak capacity, peak-capacity production rate, and boiling point resolution for temperature-programmed GC with very high programming rates

    Science.gov (United States)

    Grall; Leonard; Sacks

    2000-02-01

    Recent advances in column heating technology have made possible very fast linear temperature programming for high-speed gas chromatography. A fused-silica capillary column is contained in a tubular metal jacket, which is resistively heated by a precision power supply. With very rapid column heating, the rate of peak-capacity production is significantly enhanced, but the total peak capacity and the boiling-point resolution (minimum boiling-point difference required for the separation of two nonpolar compounds on a nonpolar column) are reduced relative to more conventional heating rates used with convection-oven instruments. As temperature-programming rates increase, elution temperatures also increase with the result that retention may become insignificant prior to elution. This results in inefficient utilization of the down-stream end of the column and causes a loss in the rate of peak-capacity production. The rate of peak-capacity production is increased by the use of shorter columns and higher carrier gas velocities. With high programming rates (100-600 degrees C/min), column lengths of 6-12 m and average linear carrier gas velocities in the 100-150 cm/s range are satisfactory. In this study, the rate of peak-capacity production, the total peak capacity, and the boiling point resolution are determined for C10-C28 n-alkanes using 6-18 m long columns, 50-200 cm/s average carrier gas velocities, and 60-600 degrees C/min programming rates. It was found that with a 6-meter-long, 0.25-mm i.d. column programmed at a rate of 600 degrees C/min, a maximum peak-capacity production rate of 6.1 peaks/s was obtained. A total peak capacity of about 75 peaks was produced in a 37-s long separation spanning a boiling-point range from n-C10 (174 degrees C) to n-C28 (432 degrees C).

  12. Tipping Point

    Medline Plus

    Full Text Available “The Tipping Point”, a CPSC OnSafety video (60 Seconds of Safety) on furniture and television tip-over hazards.

  13. Robust tie points selection for InSAR image coregistration

    Science.gov (United States)

    Skanderi, Takieddine; Chabira, Boulerbah; Afifa, Belkacem; Belhadj Aissa, Aichouche

    2013-10-01

    Image coregistration is an important step in SAR interferometry, which is a well known method for DEM generation and surface displacement monitoring. A practical and widely used automatic coregistration algorithm is based on selecting a number of tie points in the master image and looking for the correspondence of each point in the slave image using a correlation technique. The characteristics of these points, their number and their distribution have a great impact on the reliability of the estimated transformation. In this work, we present a method for the automatic selection of suitable tie points that are well distributed over the common area without decreasing the desired number of tie points. First we select candidate points using the Harris operator. From these candidates we then select tie points according to their cornerness measure (the highest first). Once a tie point is selected, its correspondence is searched for in the slave image; if the maximum of the similarity measure is less than a given threshold or lies at the border of the search window, the point is discarded and we proceed to the next Harris point. Otherwise, the cornerness of the remaining candidate Harris points is multiplied by a spatially, radially increasing function centered at the selected point, to disadvantage points in a neighborhood whose radius is determined from the size of the common area and the desired number of points. This is repeated until the desired number of points is selected. Results for an ERS1/2 tandem pair are presented and discussed.
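    A simplified version of the selection loop described above is sketched below: the strongest Harris candidate is picked, and the cornerness of the remaining candidates is multiplied by a radially increasing weight around it. The correlation check against the slave image is omitted, and the saturating-ramp penalty is an assumed stand-in for the function used by the authors.

```python
import numpy as np

def select_tie_points(coords, cornerness, n_points, radius):
    """Greedy tie-point selection: repeatedly pick the strongest Harris candidate,
    then down-weight remaining candidates near it so that the selected points
    spread over the common area."""
    scores = cornerness.astype(float)
    selected = []
    for _ in range(min(n_points, len(coords))):
        i = int(np.argmax(scores))
        if scores[i] <= 0:
            break
        selected.append(i)
        d = np.linalg.norm(coords - coords[i], axis=1)
        scores *= np.clip(d / radius, 0.0, 1.0)   # 0 at the pick, 1 beyond the radius
        scores[i] = -np.inf
    return coords[selected]

# Hypothetical Harris candidates in a 1000 x 1000 px master image
rng = np.random.default_rng(0)
cands = rng.uniform(0, 1000, size=(500, 2))
strength = rng.gamma(2.0, 1.0, size=500)
ties = select_tie_points(cands, strength, n_points=30, radius=150.0)
print(len(ties), ties[:3])
```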

  14. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  15. Sensing with Superconducting Point Contacts

    Directory of Open Access Journals (Sweden)

    Argo Nurbawono

    2012-05-01

    Full Text Available Superconducting point contacts have been used for measuring magnetic polarizations, identifying magnetic impurities, probing electronic structures, and even resolving the vibrational modes of small molecules. Because the energy scale of the subgap structures of the supercurrent is intrinsically small, being set by the size of the superconducting energy gap, superconductors provide ultrahigh sensitivity for high-resolution spectroscopies. The so-called Andreev reflection process between a normal metal and a superconductor carries complex and rich information which, when fully exploited, can be utilized for powerful sensing. In this review, we discuss recent experimental and theoretical developments in supercurrent transport through superconducting point contacts and their relevance to sensing applications, and we highlight their current issues and potential. A true utilization of methods based on Andreev reflection analysis opens up possibilities for a new class of ultrasensitive sensors.

  16. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

    are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...

  17. Predicting wildfire occurrence distribution with spatial point process models and its uncertainty assessment: a case study in the Lake Tahoe Basin, USA

    Science.gov (United States)

    Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner

    2015-01-01

    Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...

  18. Resolving Point Defects in the Hydration Structure of Calcite (10.4) with Three-Dimensional Atomic Force Microscopy

    Science.gov (United States)

    Söngen, Hagen; Reischl, Bernhard; Miyata, Kazuki; Bechstein, Ralf; Raiteri, Paolo; Rohl, Andrew L.; Gale, Julian D.; Fukuma, Takeshi; Kühnle, Angelika

    2018-03-01

    It seems natural to assume that defects at mineral surfaces critically influence interfacial processes such as the dissolution and growth of minerals in water. The experimental verification of this claim, however, is challenging and requires real-space methods with utmost spatial resolution, such as atomic force microscopy (AFM). While defects at mineral-water interfaces have been resolved in 2D AFM images before, the perturbation of the surrounding hydration structure has not yet been analyzed experimentally. In this Letter, we demonstrate that point defects on the most stable and naturally abundant calcite (10.4) surface can be resolved using high-resolution 3D AFM—even within the fifth hydration layer. Our analysis of the hydration structure surrounding the point defect shows a perturbation of the hydration with a lateral extent of approximately one unit cell. These experimental results are corroborated by molecular dynamics simulations.

  19. Interval Mathematics Applied to Critical Point Transitions

    Directory of Open Access Journals (Sweden)

    Benito A. Stradi

    2012-03-01

    Full Text Available The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex mixtures near critical points are highly nonlinear and admit multiple solutions to the critical point equations. Interval arithmetic can be used to reliably locate all the critical points of a given mixture. The method also verifies the nonexistence of a critical point if a mixture of a given composition does not have one. This study uses an interval Newton/Generalized Bisection algorithm that provides a mathematical and computational guarantee that all mixture critical points are located. The technique is illustrated using several example problems. These problems involve cubic equation of state models; however, the technique is general purpose and can be applied in connection with other nonlinear problems.

  20. Scattering and absorption of particles emitted by a point source in a cluster of point scatterers

    International Nuclear Information System (INIS)

    Liljequist, D.

    2012-01-01

    A theory for the scattering and absorption of particles isotropically emitted by a point source in a cluster of point scatterers is described and related to the theory for the scattering of an incident particle beam. The quantum mechanical probability of escape from the cluster in different directions is calculated, as well as the spatial distribution of absorption events within the cluster. A source strength renormalization procedure is required. The average quantum scattering in clusters with randomly shifting scatterer positions is compared to trajectory simulation with the aim of studying the validity of the trajectory method. Differences between the results of the quantum and trajectory methods are found primarily for wavelengths larger than the average distance between nearest neighbour scatterers. The average quantum results include, for example, a local minimum in the number of absorption events at the location of the point source and interference patterns in the angle-dependent escape probability as well as in the distribution of absorption events. The relative error of the trajectory method is in general, though not universally, of a magnitude similar to that obtained for beam scattering.

  1. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning

    Directory of Open Access Journals (Sweden)

    Victoria Plaza-Leiva

    2017-03-01

    Full Text Available Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood.

  2. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning.

    Science.gov (United States)

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-03-15

    Improving the effectiveness of spatial shape features classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhood for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood.
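
    Both versions of this record describe feature vectors derived from a principal component analysis of the points inside each voxel, covering scatter, tubular and planar shapes. The sketch below is a minimal illustration of that idea using the common eigenvalue-based linearity/planarity/sphericity descriptors; the five feature-vector definitions in the paper itself may differ, and the voxel size and helper names here are assumptions.

    ```python
    import numpy as np

    def voxelize(points, voxel_size):
        """Group an (N, 3) array of points into non-overlapping voxels of edge
        length `voxel_size`; returns a dict {voxel index: (M, 3) member points}."""
        keys = np.floor(points / voxel_size).astype(int)
        voxels = {}
        for key, p in zip(map(tuple, keys), points):
            voxels.setdefault(key, []).append(p)
        return {k: np.asarray(v) for k, v in voxels.items()}

    def shape_features(members):
        """Eigenvalue-based shape descriptors of the points inside one voxel:
        linearity (tubular), planarity and sphericity (scatter).  The feature
        vectors defined in the paper may differ from this common choice."""
        if len(members) < 3:
            return None
        evals = np.sort(np.linalg.eigvalsh(np.cov(members.T)))[::-1]
        l1, l2, l3 = np.maximum(evals, 1e-12)
        return np.array([(l1 - l2) / l1,   # linearity
                         (l2 - l3) / l1,   # planarity
                         l3 / l1])         # sphericity

    # Toy usage: points sampled near a plane score high on planarity.
    rng = np.random.default_rng(0)
    pts = np.c_[rng.uniform(0, 2, 500), rng.uniform(0, 2, 500),
                0.01 * rng.normal(size=500)]
    for key, members in voxelize(pts, 1.0).items():
        print(key, shape_features(members))
    ```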

  3. Spatial resolution limits for the isotropic-3D PET detector X’tal cube

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Eiji, E-mail: rush@nirs.go.jp; Tashima, Hideaki; Hirano, Yoshiyuki; Inadama, Naoko; Nishikido, Fumihiko; Murayama, Hideo; Yamaya, Taiga

    2013-11-11

    Positron emission tomography (PET) has become a popular imaging method in metabolism, neuroscience, and molecular imaging. For dedicated human brain and small animal PET scanners, high spatial resolution is needed to visualize small objects. To improve the spatial resolution, we are developing the X’tal cube, which is our new PET detector to achieve isotropic 3D positioning detectability. We have shown that the X’tal cube can achieve 1 mm³ uniform crystal identification performance with the Anger-type calculation even at the block edges. We plan to develop the X’tal cube with even smaller 3D grids for sub-millimeter crystal identification. In this work, we investigate the spatial resolution of a PET scanner based on the X’tal cube using Monte Carlo simulations for predicting resolution performance in smaller 3D grids. For the spatial resolution evaluation, a point source emitting 511 keV photons was simulated by GATE for all physical processes involved in emission and interaction of positrons. We simulated two types of animal PET scanners. The first PET scanner had a detector ring 14.6 cm in diameter composed of 18 detectors. The second PET scanner had a detector ring 7.8 cm in diameter composed of 12 detectors. After the GATE simulations, we converted the interacting 3D position information to digitized positions for realistic segmented crystals. We simulated several X’tal cubes with cubic crystals from (0.5 mm)³ to (2 mm)³ in size. Also, for evaluating the effect of DOI resolution, we simulated several X’tal cubes with crystal thickness from (0.5 mm)³ to (9 mm)³. We showed that sub-millimeter spatial resolution was possible using cubic crystals smaller than (1.0 mm)³ even with the assumed physical processes. Also, the weighted average spatial resolutions of both PET scanners with (0.5 mm)³ cubic crystals were 0.53 mm (14.6 cm ring diameter) and 0.48 mm (7.8 cm ring diameter). For the 7.8 cm ring diameter, spatial

  4. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    Science.gov (United States)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
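
    The abstract describes a chain of points between fixed endpoints that is iteratively adjusted towards a stationary point of the optical-path functional. The sketch below illustrates only the variational core of that idea with a plain gradient relaxation of the discrete optical path; the full NEB method additionally projects out the parallel force component and adds spring forces along the chain, and the refractive-index field here is a made-up stand-in for the IRI-derived medium.

    ```python
    import numpy as np

    def refractive_index(x, z):
        # Hypothetical low-index channel standing in for the modelled ionosphere
        # (the medium in the paper comes from the International Reference Ionosphere).
        return 1.0 - 0.3 * np.exp(-((z - 40.0) / 30.0) ** 2)

    def optical_path(points):
        # Discrete optical path: sum over segments of n(midpoint) * segment length.
        mids = 0.5 * (points[1:] + points[:-1])
        seg = np.linalg.norm(points[1:] - points[:-1], axis=1)
        return np.sum(refractive_index(mids[:, 0], mids[:, 1]) * seg)

    def relax_ray(start, end, n_points=30, n_iter=1000, step=5.0, h=1e-3):
        # Chain of points between fixed endpoints, relaxed downhill on the optical
        # path by a numerical gradient.  This is only the variational core: the NEB
        # method also splits the force into a perpendicular component and spring
        # forces so the points stay evenly distributed along the ray.
        pts = np.linspace(start, end, n_points)
        for _ in range(n_iter):
            grad = np.zeros_like(pts)
            for i in range(1, n_points - 1):          # endpoints stay fixed
                for d in range(2):                    # central-difference gradient
                    p_hi = pts.copy()
                    p_lo = pts.copy()
                    p_hi[i, d] += h
                    p_lo[i, d] -= h
                    grad[i, d] = (optical_path(p_hi) - optical_path(p_lo)) / (2.0 * h)
            pts[1:-1] -= step * grad[1:-1]
        return pts

    # Ground-to-ground toy ray over 1000 km; the relaxed chain bends into the channel.
    ray = relax_ray(np.array([0.0, 0.0]), np.array([1000.0, 0.0]))
    print(optical_path(ray), ray[:, 1].max())
    ```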

  5. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.

  6. Track length estimation applied to point detectors

    International Nuclear Information System (INIS)

    Rief, H.; Dubi, A.; Elperin, T.

    1984-01-01

    The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward

  7. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    Science.gov (United States)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult, so understanding scaling is a key issue for advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage of dealing with a far wider range of boundary and initial conditions than field experimentation. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results show numerical stability issues for particular conditions, reveal the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues
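
    The point-scale model in this study is Green-Ampt. As a reference for how that model behaves, here is a minimal sketch that solves the standard implicit Green-Ampt equation for cumulative infiltration under ponded conditions by fixed-point iteration; the parameter values are illustrative only, not those used in the virtual experiment.

    ```python
    import math

    def green_ampt_cumulative(t, K, psi, dtheta, tol=1e-8):
        """Cumulative infiltration F(t) [cm] under ponded conditions from the
        implicit Green-Ampt equation  F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t,
        solved by fixed-point iteration.  K: saturated hydraulic conductivity
        [cm/h], psi: wetting-front suction head [cm], dtheta: moisture deficit [-]."""
        s = psi * dtheta
        F = max(K * t, 1e-6)                 # starting guess
        while True:
            F_new = K * t + s * math.log(1.0 + F / s)
            if abs(F_new - F) < tol:
                return F_new
            F = F_new

    def green_ampt_rate(F, K, psi, dtheta):
        # Infiltration capacity f = K * (1 + psi*dtheta/F).
        return K * (1.0 + psi * dtheta / F)

    # Illustrative silty-loam-like parameters (hypothetical, not from the study).
    K, psi, dtheta = 0.65, 16.7, 0.30
    for t in (0.5, 1.0, 2.0):
        F = green_ampt_cumulative(t, K, psi, dtheta)
        print(f"t = {t:3.1f} h   F = {F:6.3f} cm   f = {green_ampt_rate(F, K, psi, dtheta):5.3f} cm/h")
    ```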

  8. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel-beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the imaged object, based on a known μ-map distribution, is also considered. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions; any gamma rays satisfying this angle were passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulations and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)
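
    The scatter kernel is built on the Klein-Nishina formula. The sketch below evaluates only the standard unpolarized Klein-Nishina differential cross section; the paper's full kernel additionally folds in attenuation, the buildup factor, the collimator acceptance and the intrinsic Gaussian resolution, none of which are reproduced here.

    ```python
    import numpy as np

    R_E = 2.8179403262e-13      # classical electron radius [cm]
    MEC2 = 511.0                # electron rest energy [keV]

    def klein_nishina(energy_kev, theta):
        """Unpolarized Klein-Nishina differential cross section dsigma/dOmega
        [cm^2/sr] at photon energy `energy_kev` and scattering angle `theta` [rad]."""
        k = energy_kev / MEC2
        ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))        # E'/E
        return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

    # Example: relative first-scatter weighting of 140 keV (99mTc) photons vs. angle.
    for theta in np.linspace(0.0, np.pi, 7):
        print(f"{np.degrees(theta):5.1f} deg   {klein_nishina(140.0, theta):.3e} cm^2/sr")
    ```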

  9. MIN-CUT BASED SEGMENTATION OF AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    S. Ural

    2012-07-01

    Full Text Available Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing which is especially used in pixel labeling problems and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other, and with the nodes representing feature clusters, hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost that only considers simple distance
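
    The segmentation is formulated as an energy minimization solved by min-cuts. Below is a minimal two-label sketch of that construction using networkx's max-flow/min-cut solver, with per-point data costs and pairwise smoothness costs; the paper's actual problem is multi-label with feature-cluster nodes, so this is an illustration of the graph construction rather than the authors' implementation, and all costs and helper names are made up.

    ```python
    import networkx as nx
    import numpy as np

    def mincut_labels(fg_cost, bg_cost, neighbors, smooth_cost):
        """Binary point labelling by s-t min-cut (requires networkx).  Cutting the
        edge point->t charges that point's foreground data cost, cutting s->point
        charges its background cost, and edges between neighbouring points charge
        `smooth_cost` whenever the cut separates them."""
        n = len(fg_cost)
        G = nx.DiGraph()
        for i in range(n):
            G.add_edge("s", i, capacity=float(bg_cost[i]))   # cut if i is labelled background
            G.add_edge(i, "t", capacity=float(fg_cost[i]))   # cut if i is labelled foreground
        for i, j in neighbors:
            G.add_edge(i, j, capacity=float(smooth_cost))
            G.add_edge(j, i, capacity=float(smooth_cost))
        cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
        return np.array([1 if i in source_side else 0 for i in range(n)]), cut_value

    # Toy usage: label "high" points (1) along a strip; smoothness discourages
    # isolated label flips between spatial neighbours.
    heights = np.array([0.10, 0.20, 0.90, 0.80, 0.15, 0.85, 0.90, 0.95])
    labels, energy = mincut_labels(1.0 - heights, heights,
                                   [(i, i + 1) for i in range(len(heights) - 1)],
                                   smooth_cost=0.3)
    print(labels, energy)
    ```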

  10. Influence of the distance between target surface and focal point on the expansion dynamics of a laser-induced silicon plasma with spatial confinement

    Science.gov (United States)

    Zhang, Dan; Chen, Anmin; Wang, Xiaowei; Wang, Ying; Sui, Laizhi; Ke, Da; Li, Suyu; Jiang, Yuanfei; Jin, Mingxing

    2018-05-01

    Expansion dynamics of a laser-induced plasma plume, with spatial confinement, for various distances between the target surface and focal point were studied by the fast photography technique. A silicon wafer was ablated to induce the plasma with a Nd:YAG laser in an atmospheric environment. The expansion dynamics of the plasma plume depended on the distance between the target surface and focal point. In addition, spatially confined time-resolved images showed the different structures of the plasma plumes at different distances between the target surface and focal point. By analyzing the plume images, the optimal distance for emission enhancement was found to be approximately 6 mm away from the geometrical focus using a 10 cm focal length lens. This optimized distance resulted in the strongest compression ratio of the plasma plume by the reflected shock wave. Furthermore, the duration of the interaction between the reflected shock wave and the plasma plume was also prolonged.

  11. Detector Motion Method to Increase Spatial Resolution in Photon-Counting Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong [Korea Advanced Institute of Science and Technology, Daejon (Korea, Republic of)

    2017-03-15

    Medical imaging requires high spatial resolution of an image to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former's high spatial resolution, high efficiency and low noise. Spatial resolution in a photon counting image is determined by the pixel size. Therefore, the smaller the pixel size, the higher the spatial resolution that can be obtained in an image. However, detector redesigning is required to reduce pixel size, and an expensive fine process is required to integrate a signal processing unit with reduced pixel size. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at the maximum X-ray tube voltage of 80 kVp. A similar spatial resolution of a 55-μm-pixel image was achieved by application of the proposed method to a 110-μm-pixel detector with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels severely suffer from charge sharing as pixel size is reduced.

  12. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    Science.gov (United States)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. The resulting point clouds are rasterized into digital surface models

  13. Low-Cost Ultra-High Spatial and Temporal Resolution Mapping of Intertidal Rock Platforms

    Science.gov (United States)

    Bryson, M.; Johnson-Roberson, M.; Murphy, R.

    2012-07-01

    Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time which could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at relatively coarse, sub-meter resolutions or with limited temporal resolutions and relatively high costs for small-scale environmental science and ecology studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric pipeline that was developed for constructing high-resolution, 3D, photo-realistic terrain models of intertidal rocky shores. The processing pipeline uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine colour and topographic information at sub-centimeter resolutions over an area of approximately 100 m, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rock platform at Cape Banks, Sydney, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.

  14. Controllable resonant tunnelling through single-point potentials: A point triode

    International Nuclear Information System (INIS)

    Zolotaryuk, A.V.; Zolotaryuk, Yaroslav

    2015-01-01

    A zero-thickness limit of three-layer heterostructures under two bias voltages applied externally, one of which serves as a gate parameter, is studied. As a result, an effect of controllable resonant tunnelling of electrons through single-point potentials is shown to exist. Therefore the limiting structure may be termed a “point triode” and considered in the theory of point interactions as a new object. The simple limiting analytical expressions adequately describe the resonant behaviour in the transistor with realistic parameter values and thus one can conclude that the zero-range limit of multi-layer structures may be used in fabricating nanodevices. The difference between the resonant tunnelling across single-point potentials and the Fabry–Pérot interference effect is also emphasized. - Highlights: • The zero-thickness limit of three-layer heterostructures is described in terms of point interactions. • The effect of resonant tunnelling through these single-point potentials is established. • The resonant tunnelling is shown to be controlled by a gate voltage

  15. Optimal External-Memory Planar Point Enclosure

    DEFF Research Database (Denmark)

    Arge, Lars; Samoladas, Vasilis; Yi, Ke

    2007-01-01

    In this paper we study the external memory planar point enclosure problem: Given N axis-parallel rectangles in the plane, construct a data structure on disk (an index) such that all K rectangles containing a query point can be reported I/O-efficiently. This problem has important applications in e.g. spatial and temporal databases, and is dual to the important and well-studied orthogonal range searching problem. Surprisingly, despite the fact that the problem can be solved optimally in internal memory with linear space and O(log N+K) query time, we show that one cannot construct a linear sized external memory point enclosure data structure that can be used to answer a query in O(log_B N + K/B) I/Os, where B is the disk block size. To obtain this bound, Ω(N/B^(1−ε)) disk blocks are needed for some constant ε>0. With linear space, the best obtainable query bound is O(log_2 N + K/B) if a linear output...

  16. Geostatistical analysis of disease data: accounting for spatial support and population density in the isopleth mapping of cancer mortality risk using area-to-point Poisson kriging

    Directory of Open Access Journals (Sweden)

    Goovaerts Pierre

    2006-11-01

    Full Text Available Abstract Background Geostatistical techniques that account for spatially varying population sizes and spatial patterns in the filtering of choropleth maps of cancer mortality were recently developed. Their implementation was facilitated by the initial assumption that all geographical units are the same size and shape, which allowed the use of geographic centroids in semivariogram estimation and kriging. Another implicit assumption was that the population at risk is uniformly distributed within each unit. This paper presents a generalization of Poisson kriging whereby the size and shape of administrative units, as well as the population density, is incorporated into the filtering of noisy mortality rates and the creation of isopleth risk maps. An innovative procedure to infer the point-support semivariogram of the risk from aggregated rates (i.e. areal data) is also proposed. Results The novel methodology is applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. Area-to-point (ATP) Poisson kriging produces risk surfaces that are less smooth than the maps created by a naïve point kriging of empirical Bayesian smoothed rates. The coherence constraint of ATP kriging also ensures that the population-weighted average of risk estimates within each geographical unit equals the areal data for this unit. Simulation studies showed that the new approach yields more accurate predictions and confidence intervals than point kriging of areal data where all counties are simply collapsed into their respective polygon centroids. Its benefit over point kriging increases as the county geography becomes more heterogeneous. Conclusion A major limitation of choropleth

  17. Geostatistical analysis of disease data: accounting for spatial support and population density in the isopleth mapping of cancer mortality risk using area-to-point Poisson kriging

    Science.gov (United States)

    Goovaerts, Pierre

    2006-01-01

    Background Geostatistical techniques that account for spatially varying population sizes and spatial patterns in the filtering of choropleth maps of cancer mortality were recently developed. Their implementation was facilitated by the initial assumption that all geographical units are the same size and shape, which allowed the use of geographic centroids in semivariogram estimation and kriging. Another implicit assumption was that the population at risk is uniformly distributed within each unit. This paper presents a generalization of Poisson kriging whereby the size and shape of administrative units, as well as the population density, is incorporated into the filtering of noisy mortality rates and the creation of isopleth risk maps. An innovative procedure to infer the point-support semivariogram of the risk from aggregated rates (i.e. areal data) is also proposed. Results The novel methodology is applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) state of Indiana that consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. Area-to-point (ATP) Poisson kriging produces risk surfaces that are less smooth than the maps created by a naïve point kriging of empirical Bayesian smoothed rates. The coherence constraint of ATP kriging also ensures that the population-weighted average of risk estimates within each geographical unit equals the areal data for this unit. Simulation studies showed that the new approach yields more accurate predictions and confidence intervals than point kriging of areal data where all counties are simply collapsed into their respective polygon centroids. Its benefit over point kriging increases as the county geography becomes more heterogeneous. Conclusion A major limitation of choropleth maps is the common biased

  18. Seasonal and spatial variation of diffuse (non-point) source zinc pollution in a historically metal mined river catchment, UK

    Energy Technology Data Exchange (ETDEWEB)

    Gozzard, E., E-mail: emgo@ceh.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Mayes, W.M., E-mail: W.Mayes@hull.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Potter, H.A.B., E-mail: hugh.potter@environment-agency.gov.uk [Environment Agency England and Wales, c/o Institute for Research on Environment and Sustainability, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Jarvis, A.P., E-mail: a.p.jarvis@ncl.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)

    2011-10-15

    Quantifying diffuse sources of pollution is becoming increasingly important when characterising river catchments in their entirety - a prerequisite for environmental management. This study examines both low and high flow events, as well as spatial variability, in order to assess point and diffuse components of zinc pollution within the River West Allen catchment, which lies within the northern England lead-zinc Orefield. Zinc levels in the river are elevated under all flow regimes, and are of environmental concern. Diffuse components are of little importance at low flow, with point source mine water discharges dominating instream zinc concentration and load. During higher river flows 90% of the instream zinc load is attributed to diffuse sources, where inputs from resuspension of metal-rich sediments and groundwater influx are likely to be more dominant. Remediating point mine water discharges should significantly improve water quality at lower flows, but the contribution from diffuse sources will continue to elevate zinc flux at higher flows. - Highlights: > Zinc concentrations breach EU quality thresholds under all river flow conditions. > Contributions from point sources dominate instream zinc dynamics in low flow. > Contributions from diffuse sources dominate instream zinc dynamics in high flow. > Important diffuse sources include river-bed sediment resuspension and groundwater influx. > Diffuse sources would still create significant instream pollution, even with point source treatment. - Diffuse zinc sources are an important source of instream contamination to mine-impacted rivers under varying flow conditions.

  19. On the Relation Between Facular Bright Points and the Magnetic Field

    Science.gov (United States)

    Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran

    1994-12-01

    Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP filter), a 10 Angstroms wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstroms wide interference filter centered on the Ca II K absorption line. Three large format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 × 80 arcseconds (44 × 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10³ km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on contemporaneous Fe I 6302 Angstroms magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. Size and contrast statistical cross

  20. Characterizing fixed points

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2017-04-01

    Full Text Available A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of (what became known as "Nash equilibrium") for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying the existence of its solutions, our objective is to determine conditions which are both necessary and sufficient for an arbitrary point x⋆ to be a fixed point, i.e., to satisfy f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of Calculus. However, characterizing fixed points x⋆, i.e., providing answers to the question of finding both necessary and sufficient conditions for an arbitrary given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable. It is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools. One of these might be the "quadratic envelope characterization of zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received several comments on the preliminary report and on parts of the project

  1. Tipping Point

    Medline Plus

    Full Text Available OnSafety (CPSC Stands for Safety): "The Tipping Point", a 60 Seconds of Safety video. ... For young children whose home is a playground, it’s the best way to ...

  2. High-spatial-resolution localization algorithm based on cascade deconvolution in a distributed Sagnac interferometer invasion monitoring system.

    Science.gov (United States)

    Pi, Shaohua; Wang, Bingjie; Zhao, Jiang; Sun, Qi

    2016-10-10

    In the Sagnac fiber optic interferometer system, the phase difference signal can be expressed as a convolution of the waveform of the invasion with its occurring-position-associated transfer function h(t); deconvolution is introduced to improve the spatial resolution of the localization. In general, to get a 26 m spatial resolution at a sampling rate of 4×10⁶ s⁻¹, the algorithm goes through three main steps after the preprocessing operations. First, the decimated phase difference signal is transformed from the time domain into the real cepstrum domain, where a probable region of invasion distance can be ascertained. Second, a narrower region of invasion distance is acquired by coarsely assuming and sweeping a transfer function h(t) within the probable region and examining where the restored invasion waveform x(t) reaches its minimum standard deviation. Third, fine sweeping of the narrow region point by point with the same criterion gives the final localization. Also, the original waveform of the invasion can be restored for the first time as a by-product, which provides more accurate and purer characteristics for further processing, such as subsequent pattern recognition.
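
    The first step of the localization algorithm moves the phase-difference signal into the real cepstrum domain, where the convolution with the position-dependent transfer function h(t) becomes additive. Below is a minimal sketch of a real-cepstrum computation and of why an echo-like h(t) produces a peak at its delay; the waveform, delay and amplitudes are hypothetical, not values from the system described.

    ```python
    import numpy as np

    def real_cepstrum(signal):
        """Real cepstrum: inverse FFT of the log magnitude spectrum.  Because the
        measured signal is a convolution of the intrusion waveform with the
        position-dependent transfer function h(t), the two contributions become
        additive in the cepstrum domain, which is what exposes the distance."""
        spectrum = np.fft.fft(signal)
        return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

    # Toy usage with a hypothetical echo-like h(t): direct path plus a copy delayed
    # by 200 samples.  The cepstrum of the convolved signal peaks near that delay.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4096)                 # stand-in intrusion waveform
    h = np.zeros(512)
    h[0], h[200] = 1.0, 0.8                   # hypothetical transfer function
    c = real_cepstrum(np.convolve(x, h))
    print(int(np.argmax(c[50:1000])) + 50)    # ~200 (skip the low-quefrency region)
    ```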

  3. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    Science.gov (United States)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. Forecasting earthquake occurrence at a given location is still difficult, so earthquake forecast methodology continues to be developed from both the seismological and the stochastic points of view. To describe such random phenomena, in both space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a sequence in time, whereas a spatial point process describes the location of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m) where x is the point location and m is the mark attached to the point at that location. This study aims to model a marked point process indexed by time for earthquake data from Sumatra Island and Java Island. This model can be used to analyse seismic activity through its intensity function by considering the history of the process up to, but not including, time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimation of the model parameters shows that the seismic activity in Sumatra Island is greater than in Java Island.

  4. Optimization of the spatial resolution for the GE discovery PET/CT 710 by using NEMA NU 2-2007 standards

    Science.gov (United States)

    Yoon, Hyun Jin; Jeong, Young Jin; Son, Hye Joo; Kang, Do-Young; Hyun, Kyung-Yae; Lee, Min-Kyung

    2015-01-01

    The spatial resolution in positron emission tomography (PET) is fundamentally limited by the geometry of the detector element, the positron range before annihilation with an electron, the acollinearity of the annihilation photons, the crystal decoding error, the penetration into the detector ring, and the reconstruction algorithms. In this paper, optimized parameters are suggested to produce high-resolution PET images by using an iterative reconstruction algorithm. A phantom with three point sources structured with three capillary tubes was prepared with an axial extension of less than 1 mm and was filled with 18F-fluorodeoxyglucose (18F-FDG) with concentrations above 200 MBq/cc. The performance measures of all the PET images were acquired according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard procedures. The parameters for the iterative reconstruction were adjusted around the values recommended by General Electric (GE), and the optimized values of the spatial resolution and the full width at half maximum (FWHM) and full width at tenth maximum (FWTM) giving the best PET resolution were found. The axial and the transverse spatial resolutions, according to the filtered back-projection (FBP) at 1 cm off-axis, were 4.81 and 4.48 mm, respectively. The axial and the trans-axial spatial resolutions at 10 cm off-axis were 5.63 mm and 5.08 mm, respectively, and the trans-axial resolution at 10 cm was evaluated as the average of the radial and the tangential measurements. The recommended optimized parameters of the spatial resolution according to the NEMA phantom for the number of subsets, the number of iterations, and the Gaussian post-filter are 12, 3, and 3 mm for the iterative reconstruction VUE Point HD without the SharpIR algorithm (HD), and 12, 12, and 5.2 mm with SharpIR (HD.S), respectively, according to the Advantage Workstation Volume Share 5 (AW4.6). The performance measurements for the GE Discovery PET/CT 710 using the NEMA NU 2
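
    The resolution figures quoted here are FWHM and FWTM values obtained from point-source profiles following NEMA NU 2-2007, which prescribes a parabolic fit through the peak and linear interpolation at the half- (or tenth-) maximum crossings. The sketch below implements that width measurement on a synthetic profile; it is an illustration of the procedure, not the clinical analysis software, and the pixel pitch and source width are invented.

    ```python
    import numpy as np

    def profile_width(positions, counts, fraction=0.5):
        """Width of a point-source response profile at `fraction` of its maximum
        (0.5 gives FWHM, 0.1 gives FWTM).  The peak value comes from a parabolic
        fit through the three samples around the maximum and the crossings are
        located by linear interpolation, in the spirit of the NEMA NU 2 procedure."""
        positions = np.asarray(positions, float)
        counts = np.asarray(counts, float)
        i = int(np.argmax(counts))
        a, b, c = np.polyfit(positions[i - 1:i + 2], counts[i - 1:i + 2], 2)
        level = fraction * (c - b * b / (4.0 * a))      # fraction of the fitted peak

        def crossing(step):
            j = i
            while counts[j + step] > level:             # walk down one side of the peak
                j += step
            x0, x1 = positions[j], positions[j + step]
            y0, y1 = counts[j], counts[j + step]
            return x0 + (level - y0) * (x1 - x0) / (y1 - y0)

        return abs(crossing(+1) - crossing(-1))

    # Toy check on a Gaussian profile sampled every 0.78 mm (sigma = 1.9 mm).
    x = np.arange(-20.0, 20.0, 0.78)
    y = 1000.0 * np.exp(-x ** 2 / (2 * 1.9 ** 2))
    print(profile_width(x, y, 0.5))   # ~ 2.355 * 1.9 = 4.47 mm
    print(profile_width(x, y, 0.1))   # ~ 4.29 * 1.9 = 8.16 mm
    ```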

  5. Point-point and point-line moving-window correlation spectroscopy and its applications

    Science.gov (United States)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced to perform the moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details and reconstitute the entire process, whilst P-L correlation provides the general features of the processes concerned. The phase transition behavior of dimyristoylphosphatidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using the P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which is very difficult using a conventional display of FTIR spectra.
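
    Moving-window 2D correlation slides a short window of perturbation steps along the data set and computes a correlation map within each window. The sketch below uses the standard generalized-2D synchronous definition inside each window; the point-point and point-line algorithms introduced in the paper refine how individual spectral points are correlated, so this is only the baseline that the paper extends, and the band positions and window length are made up.

    ```python
    import numpy as np

    def mw2d_synchronous(spectra, window=5):
        """Moving-window synchronous 2D correlation: for each window of `window`
        consecutive perturbation steps, the mean-centred (dynamic) spectra are
        correlated, giving one synchronous map per window position."""
        spectra = np.asarray(spectra, float)          # (n_steps, n_wavenumbers)
        maps = []
        for start in range(spectra.shape[0] - window + 1):
            block = spectra[start:start + window]
            dyn = block - block.mean(axis=0)          # dynamic spectra in the window
            maps.append(dyn.T @ dyn / (window - 1))
        return np.array(maps)                         # (n_windows, n_wn, n_wn)

    # Toy usage: one band grows while another decays along the perturbation axis;
    # their synchronous cross peaks come out negative (opposite trends).
    wn = np.linspace(1000.0, 1800.0, 200)
    spectra = [s * np.exp(-(wn - 1650.0) ** 2 / 800.0)
               + (1.0 - s) * np.exp(-(wn - 1100.0) ** 2 / 800.0)
               for s in np.linspace(0.0, 1.0, 20)]
    maps = mw2d_synchronous(spectra, window=5)
    print(maps.shape)
    ```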

  6. Zero-point oscillations, zero-point fluctuations, and fluctuations of zero-point oscillations

    International Nuclear Information System (INIS)

    Khalili, Farit Ya

    2003-01-01

    Several physical effects and methodological issues relating to the ground state of an oscillator are considered. Even in the simplest case of an ideal lossless harmonic oscillator, its ground state exhibits properties that are unusual from the classical point of view. In particular, the mean value of the product of two non-negative observables, kinetic and potential energies, is negative in the ground state. It is shown that semiclassical and rigorous quantum approaches yield substantially different results for the ground state energy fluctuations of an oscillator with finite losses. The dependence of zero-point fluctuations on the boundary conditions is considered. Using this dependence, it is possible to transmit information without emitting electromagnetic quanta. Fluctuations of electromagnetic pressure of zero-point oscillations are analyzed, and the corresponding mechanical friction is considered. This friction can be viewed as the most fundamental mechanism limiting the quality factor of mechanical oscillators. Observation of these effects exceeds the possibilities of contemporary experimental physics but almost undoubtedly will be possible in the near future. (methodological notes)

  7. User requirements Massive Point Clouds for eSciences (WP1)

    NARCIS (Netherlands)

    Suijker, P.M.; Alkemade, I.; Kodde, M.P.; Nonhebel, A.E.

    2014-01-01

    This report is a milestone in work package 1 (WP1) of the project Massive point clouds for eSciences. In WP1 the basic functionalities needed for a new Point Cloud Spatial Database Management System are identified. This is achieved by (1) literature research, (2) discussions with the project

  8. Throughput analysis of point-to-multi-point hybrid FSO/RF network

    KAUST Repository

    Rakia, Tamer

    2017-07-31

    This paper presents and analyzes a point-to-multi-point (P2MP) network that uses a number of free-space optical (FSO) links for data transmission from the central node to the different remote nodes. A common backup radio-frequency (RF) link is used by the central node for data transmission to any remote node in case of the failure of any one of the FSO links. We develop a cross-layer Markov chain model to study the throughput from the central node to a tagged remote node. Numerical examples are presented to compare the performance of the proposed P2MP hybrid FSO/RF network with that of a P2MP FSO-only network and show that the P2MP hybrid FSO/RF network achieves considerable performance improvement over the P2MP FSO-only network.

  9. Gradients estimation from random points with volumetric tensor in turbulence

    Science.gov (United States)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present a method for estimating fully resolved or coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by the geometric distribution of the points. The coarse-grained gradient can be considered as a low-pass-filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and a mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method captures well the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
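
    The gradient estimate described here is a linear (least-squares) approximation written with the 3 × 3 volumetric tensor of the point displacements. A minimal sketch of that estimate for a scalar field follows; the paper's normalisation, solenoidal correction for vector fields and eigenvalue-based cutoff estimate are not reproduced, and the sample field is invented.

    ```python
    import numpy as np

    def gradient_from_points(center, points, values, value_at_center):
        """Least-squares gradient of a scalar field from randomly distributed
        sample points, written with the 3x3 volumetric tensor M = <dx dx^T>.
        Only the core linear approximation is sketched here."""
        dx = np.asarray(points, float) - np.asarray(center, float)   # (N, 3)
        df = np.asarray(values, float) - float(value_at_center)      # (N,)
        M = dx.T @ dx / len(dx)            # volumetric tensor
        b = dx.T @ df / len(dx)            # displacement-increment correlation
        return np.linalg.solve(M, b)       # gradient estimate (3,)

    # Toy check: the gradient of f = 2x - y + 3z is recovered from 8 random points.
    rng = np.random.default_rng(1)
    pts = 0.1 * rng.normal(size=(8, 3))
    vals = 2 * pts[:, 0] - pts[:, 1] + 3 * pts[:, 2]
    print(gradient_from_points([0.0, 0.0, 0.0], pts, vals, 0.0))   # ~[2, -1, 3]
    ```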

  10. INHOMOGENEITY IN SPATIAL COX POINT PROCESSES – LOCATION DEPENDENT THINNING IS NOT THE ONLY OPTION

    Directory of Open Access Journals (Sweden)

    Michaela Prokešová

    2010-11-01

    Full Text Available In the literature on point processes, by far the most popular option for introducing inhomogeneity into a point process model is location-dependent thinning (resulting in a second-order intensity-reweighted stationary point process). This produces a very tractable model and there are several fast estimation procedures available. Nevertheless, this model dilutes the interaction (or the geometrical structure) of the original homogeneous model in a special way. For Markov point processes, several alternative inhomogeneous models have been suggested and investigated in the literature, but this is not so for Cox point processes, the canonical models for clustered point patterns. In this contribution we discuss several other options for defining inhomogeneous Cox point process models that result in point patterns with different types of geometric structure. We further investigate the possible parameter estimation procedures for such models.

  11. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of the single minority-value pixel within the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target and conventional methods for the more linear part (displaying). The application to

  12. Fixed Points

    Indian Academy of Sciences (India)

    Fixed Points - From Russia with Love - A Primer of Fixed Point Theory. A K Vijaykumar. Book Review, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 101-102.

  13. AN ACCURACY ASSESSMENT OF GEOREFERENCED POINT CLOUDS PRODUCED VIA MULTI-VIEW STEREO TECHNIQUES APPLIED TO IMAGERY ACQUIRED VIA UNMANNED AERIAL VEHICLE

    Directory of Open Access Journals (Sweden)

    S. Harwin

    2012-08-01

    Full Text Available Low-cost Unmanned Aerial Vehicles (UAVs) are becoming viable environmental remote sensing tools. Sensor and battery technology is expanding the data capture opportunities. The UAV, as a close range remote sensing platform, can capture high resolution photography on-demand. This imagery can be used to produce dense point clouds using multi-view stereopsis (MVS) techniques combining computer vision and photogrammetry. This study examines point clouds produced using MVS techniques applied to UAV and terrestrial photography. A multi-rotor micro UAV acquired aerial imagery from an altitude of approximately 30–40 m. The point clouds produced are extremely dense (<1–3 cm point spacing) and provide a detailed record of the surface in the study area, a 70 m section of sheltered coastline in southeast Tasmania. Areas with little surface texture were not well captured; similarly, areas with complex geometry such as grass tussocks and woody scrub were not well mapped. The process fails to penetrate vegetation, but extracts very detailed terrain in unvegetated areas. Initially the point clouds are in an arbitrary coordinate system and need to be georeferenced. A Helmert transformation is applied based on matching ground control points (GCPs) identified in the point clouds to GCPs surveyed with differential GPS. These point clouds can be used, alongside laser scanning and more traditional techniques, to provide very detailed and precise representations of a range of landscapes at key moments. There are many potential applications for the UAV-MVS technique, including coastal erosion and accretion monitoring, mine surveying and other environmental monitoring applications. For the generated point clouds to be used in spatial applications they need to be converted to surface models that reduce dataset size without losing too much detail. Triangulated meshes are one option; another is Poisson Surface Reconstruction. This latter option makes use of point normal
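
    Georeferencing here is done with a Helmert (7-parameter similarity) transformation fitted to matched GCPs. The sketch below uses the standard closed-form Umeyama/Procrustes solution for scale, rotation and translation; it is a generic illustration of that step, not the authors' software, and the GCP coordinates in the usage example are synthetic.

    ```python
    import numpy as np

    def fit_helmert(source, target):
        """Closed-form 7-parameter Helmert (similarity) transform mapping `source`
        GCP coordinates onto `target` GCP coordinates, i.e. scale s, rotation R and
        translation t minimising ||target - (s R source + t)||^2."""
        src, tgt = np.asarray(source, float), np.asarray(target, float)
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        ds, dt = src - mu_s, tgt - mu_t
        U, S, Vt = np.linalg.svd(dt.T @ ds / len(src))
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:         # guard against an improper rotation
            D[2, 2] = -1.0
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / (ds ** 2).sum() * len(src)
        t = mu_t - s * R @ mu_s
        return s, R, t

    def apply_helmert(s, R, t, points):
        return s * (np.asarray(points, float) @ R.T) + t

    # Toy check with synthetic GCPs: recover a known scale, rotation and shift.
    rng = np.random.default_rng(2)
    local = rng.uniform(0.0, 50.0, size=(5, 3))
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    gps = 1.02 * local @ R_true.T + np.array([5.3e5, 5.25e6, 12.0])
    s, R, t = fit_helmert(local, gps)
    print(round(s, 4), np.allclose(apply_helmert(s, R, t, local), gps))
    ```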

  14. Demerit points systems.

    NARCIS (Netherlands)

    2006-01-01

    In 2012, 21 of the 27 EU Member States had some form of demerit points system. In theory, demerit points systems contribute to road safety through three mechanisms: 1) prevention of unsafe behaviour through the risk of receiving penalty points, 2) selection and suspension of the most frequent

  15. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  16. Characteristics of a multi-keV monochromatic point x-ray source

    Indian Academy of Sciences (India)

    Temporal, spatial and spectral characteristics of a multi-keV monochromatic point x-ray source based on vacuum diode with laser-produced plasma as cathode are presented. Electrons from a laser-produced aluminium plasma were accelerated towards a conical point tip titanium anode to generate K-shell x-ray radiation.

  17. People Detection Based on Spatial Mapping of Friendliness and Floor Boundary Points for a Mobile Navigation Robot

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Tasaki

    2011-01-01

    Full Text Available Navigation robots must single out partners requiring navigation and move in cluttered environments where people walk around. Developing such robots requires two different kinds of people detection: detecting partners and detecting all moving people around the robot. For detecting partners, we design divided spaces based on spatial relationships and sensing ranges. By mapping the friendliness of each divided space based on the stimuli from multiple sensors, so as to detect people positively calling the robot, the robot detects partners in the space with the highest friendliness. For detecting moving people, we regard objects’ floor boundary points in an omnidirectional image as obstacles. We classify obstacles as moving people by comparing the movement of each point with the robot movement using odometry data, dynamically changing the detection thresholds. Our robot detected 95.0% of partners while standing by and interacting with people, and detected 85.0% of moving people while the robot was moving, a rate four times higher than that of previous methods.

  18. Improved Spatial Resolution in Thick, Fully-Depleted CCDs with Enhanced Red Sensitivity

    International Nuclear Information System (INIS)

    Fairfield, Jessamyn A.

    2005-01-01

    The point spread function (PSF) is an important measure of spatial resolution in CCDs for point-like objects, since it can affect use in imaging and spectroscopic applications. We present new data and theoretical developments in the study of lateral charge diffusion in thick, fully-depleted charge-coupled devices (CCDs) developed at Lawrence Berkeley National Laboratory (LBNL). Because they are fully depleted, the LBNL devices have no field-free region, and diffusion can be controlled through the application of an external bias voltage. We give results for a 3512x3512 format, 10.5 µm pixel back-illuminated p-channel CCD developed for the SuperNova/Acceleration Probe (SNAP), a proposed satellite-based experiment designed to study dark energy. The PSF was measured at substrate bias voltages between 3 V and 115 V. At a bias voltage of 115 V, we measure an rms diffusion of 3.7 ± 0.2 µm. Lateral charge diffusion in LBNL CCDs is thus expected to meet the SNAP requirements.

  19. Spatial reasoning with augmented points: Extending cardinal directions with local distances

    Directory of Open Access Journals (Sweden)

    Reinhard Moratz

    2012-12-01

    Full Text Available We present an approach for supplying existing qualitative direction calculi with a distance component to support fully fledged positional reasoning. The general underlying idea of augmenting points with local reference properties has already been applied in the OPRAm calculus. In this existing calculus, point objects are augmented with a local reference direction to obtain oriented points, which makes it possible to express relative direction using binary relations. We show how this approach can be extended to attach a granular distance concept to direction calculi such as the cardinal direction calculus or adjustable granularity calculi such as OPRAm or the Star calculus. We focus on the cardinal direction calculus and extend it to a multi-granular positional calculus called EPRAm. We provide a formal specification of EPRAm including a composition table for EPRA2 automatically determined using real algebraic geometry. We also report on an experimental performance analysis of EPRA2 in the context of a topological map-learning task proposed for benchmarking qualitative calculi. Our results confirm that our approach of adding a relative distance component to existing calculi improves performance in realistic tasks when using algebraic closure for consistency checking.

  20. MODIS 250m burned area mapping based on an algorithm using change point detection and Markov random fields.

    Science.gov (United States)

    Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca

    2013-04-01

    Area burned in tropical savannas of Brazil was mapped using MODIS-AQUA daily 250 m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm² and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm treats each pixel as a time series and detects changes in the statistical properties of NIR reflectance values to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. Near-infrared (NIR) spectral reflectance changes between time segments and post-change NIR reflectance values are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned/unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1 km active fires and the 500 m burned area products, taking into account differences in spatial resolution between the products.
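    The change-point step described above can be prototyped on a single pixel's time series with the open-source `ruptures` implementation of PELT; this is only an illustrative stand-in for the fire_CCI algorithm, and the penalty value and synthetic reflectances are arbitrary.

```python
import numpy as np
import ruptures as rpt  # pip install ruptures

# Synthetic NIR reflectance time series for one pixel: a burn-like drop around day 120.
rng = np.random.default_rng(0)
nir = np.concatenate([0.30 + 0.02 * rng.standard_normal(120),
                      0.18 + 0.02 * rng.standard_normal(80)])

# PELT detects indices where the statistical properties of the series change.
algo = rpt.Pelt(model="l2", min_size=5).fit(nir)
breakpoints = algo.predict(pen=0.05)   # segment end indices; the last entry is len(nir)

# Keep only change points where mean NIR decreases (candidate burn dates).
candidates = [bp for bp in breakpoints[:-1]
              if nir[bp:bp + 5].mean() < nir[bp - 5:bp].mean()]
print(candidates)                      # expected: a single change point near index 120
```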

  1. On the spatial errors and resolution of near tracks when parallel tracing by their images on photographs

    International Nuclear Information System (INIS)

    Ehrglis, K.Eh.

    1980-01-01

    Errors in the determination of spatial reference point (SRP) coordinates reconstructed on the basis of photograph reference points are considered. The width of the paths of probable track positions on photographs and the length of the intersection zones of these paths with interfering track images are estimated. Conditions for stable automatic tracing of tracks passing close to each other in space are determined. The conclusion is made that once 5-6 SRPs are accumulated, the method of spatial tracing, with the local scanning centres on the photographs shifted at a corresponding speed, permits automatic tracing of closely passing tracks in the middle zone of the Mirabelle chamber when the angle between them is approximately 1 deg and the distance in space is 3-7 mm. It is emphasized that, when 8-10 SRPs are forecast, the spatial or angular track resolution improves by a further factor of 1.5 due to the reduction of forecasting errors and the corresponding narrowing of the sensitivity paths. The described method will be especially effective when processing photographs taken in bubble chambers of a new generation at particle energies of tens to hundreds of GeV [ru]

  2. Characterization of positional errors and their influence on micro four-point probe measurements on a 100 nm Ru film

    DEFF Research Database (Denmark)

    Kjær, Daniel; Hansen, Ole; Østerberg, Frederik Westergaard

    2015-01-01

    Thin-film sheet resistance measurements at high spatial resolution and on small pads are important and can be realized with micrometer-scale four-point probes. As a result of the small scale, the measurements are affected by electrode position errors. We have characterized the electrode position errors in measurements on a Ru thin film using an Au-coated 12-point probe. We show that the standard deviation of the static electrode position error is on the order of 5 nm, which significantly affects the results of single-configuration measurements. Position-error-corrected dual-configuration measurements, however, are shown to eliminate the effect of position errors to a level limited either by electrical measurement noise or dynamic position errors. We show that the probe contact points remain almost static on the surface during the measurements (measured on an atomic scale) with a standard...

  3. WE-DE-207B-05: Measuring Spatial Resolution in Digital Breast Tomosynthesis: Update of AAPM Task Group 245

    Energy Technology Data Exchange (ETDEWEB)

    Scaduto, DA; Hu, Y-H; Zhao, W [Stony Brook Medicine, Stony Brook, NY (United States); Goodsitt, M; Chan, H-P [University Michigan, Ann Arbor, MI (United States); Olafsdottir, H [Image Owl, 105 Reykjavik (Iceland); Das, M [University Houston, Houston, TX (United States); Fredenberg, E [Philips Healthcare, Solna (Sweden); Geiser, W [UT MD Anderson Cancer Center, Houston, TX (United States); Goodenough, D [The George Washington University, Washington, DC (United States); Heid, P [ARCADES, Marseille (France); Liu, B [Massachusetts General Hospital, Boston, MA (United States); Mainprize, J [Sunnybrook Health Sciences Centre, North York, ON (Canada); Reiser, I [The University of Chicago, Chicago, IL (United States); Van Engen, R [LRCB, Nijmegen (Netherlands); Varchena, V [CIRS Inc., Norfolk, VA (United States); Vecchio, S [I.M.S., Pontecchio Marconi (Italy); Glick, S [Food and Drug Administration, Silver Spring, MD (United States)

    2016-06-15

    Purpose: Spatial resolution in digital breast tomosynthesis (DBT) is affected by inherent/binned detector resolution, oblique entry of x-rays, and focal spot size/motion; the limited angular range further limits spatial resolution in the depth-direction. While DBT is being widely adopted clinically, imaging performance metrics and quality control protocols have not been standardized. AAPM Task Group 245 on Tomosynthesis Quality Control has been formed to address this deficiency. Methods: Methods of measuring spatial resolution are evaluated using two prototype quality control phantoms for DBT. Spatial resolution in the detector plane is measured in projection and reconstruction domains using edge-spread function (ESF), point-spread function (PSF) and modulation transfer function (MTF). Spatial resolution in the depth-direction and effective slice thickness are measured in the reconstruction domain using slice sensitivity profile (SSP) and artifact spread function (ASF). An oversampled PSF in the depth-direction is measured using a 50 µm angulated tungsten wire, from which the MTF is computed. Object-dependent PSF is derived and compared with ASF. Sensitivity of these measurements to phantom positioning, imaging conditions and reconstruction algorithms is evaluated. Results are compared from systems of varying acquisition geometry (9–25 projections over 15–60°). Dependence of measurements on feature size is investigated. Results: Measurements of spatial resolution using PSF and LSF are shown to depend on feature size; depth-direction spatial resolution measurements are shown to similarly depend on feature size for ASF, though deconvolution with an object function removes feature size-dependence. A slanted wire may be used to measure oversampled PSFs, from which MTFs may be computed for both in-plane and depth-direction resolution. Conclusion: Spatial resolution measured using PSF is object-independent with sufficiently small object; MTF is object
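    As a rough illustration of one measurement discussed above, deriving an MTF from an oversampled response function, the sketch below Fourier-transforms a line-spread function; it is a generic recipe with illustrative parameters, not the task group's prescribed procedure.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """Presampled MTF from an oversampled line-spread function.

    lsf: 1D array sampled every pixel_pitch_mm (mm per sample, after oversampling).
    Returns spatial frequencies (cycles/mm) and the normalized MTF.
    """
    lsf = np.asarray(lsf, float)
    lsf = lsf - lsf[:10].mean()              # crude background subtraction from the tail
    lsf = lsf / lsf.sum()                    # area-normalize so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf / mtf[0]

# Example: a Gaussian LSF with sigma = 0.1 mm sampled every 0.02 mm.
x = np.arange(-2.0, 2.0, 0.02)
freqs, mtf = mtf_from_lsf(np.exp(-x**2 / (2 * 0.1**2)), 0.02)
```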

  4. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    Science.gov (United States)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial postprocessing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Due to the need to fit only a single regression model for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal resolution and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach statistically combines the in-house developed high resolution analysis and ensemble prediction system. The station-based validation of 6 hour precipitation sums
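    The standardized-anomaly transformation at the heart of SAMOS is simple to express; the following sketch applies it grid-point-wise, with array shapes and climatology values chosen purely for illustration rather than taken from the study.

```python
import numpy as np

def to_standardized_anomalies(field, clim_mean, clim_sd, eps=1e-6):
    """Transform a forecast or analysis field into standardized anomalies.

    field, clim_mean, clim_sd: arrays broadcastable to the same grid shape;
    clim_* are the grid-point-specific climatological mean and standard deviation.
    """
    return (field - clim_mean) / np.maximum(clim_sd, eps)

# Illustrative use with random grids standing in for high resolution analyses
# and a 17-member ensemble forecast.
ny, nx, n_members = 100, 120, 17
clim_mean = np.full((ny, nx), 2.0)      # climatological 6 h precipitation mean (mm)
clim_sd = np.full((ny, nx), 3.0)        # climatological standard deviation (mm)
rng = np.random.default_rng(0)
ens = rng.gamma(shape=1.2, scale=2.0, size=(n_members, ny, nx))
anom = to_standardized_anomalies(ens, clim_mean, clim_sd)   # input to a single regression model
```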

  5. Resolution improvement by nonconfocal theta microscopy.

    Science.gov (United States)

    Lindek, S; Stelzer, E H

    1999-11-01

    We present a novel scanning fluorescence microscopy technique, nonconfocal theta microscopy (NCTM), that provides almost isotropic resolution. In NCTM, multiphoton absorption from two orthogonal illumination directions is used to induce fluorescence emission. Therefore the point-spread function of the microscope is described by the product of illumination point-spread functions with reduced spatial overlap, which provides the resolution improvement and the more isotropic observation volume. We discuss the technical details of this new method.
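    A toy calculation makes the PSF-product argument concrete: model each illumination PSF as a Gaussian that is sharp laterally but elongated along its own optical axis, and their product becomes nearly isotropic. This is a simplified scalar model with arbitrary widths, not the actual NCTM point-spread function.

```python
import numpy as np

# 2D toy model: each illumination PSF is a Gaussian, sharp laterally (sigma_r)
# and elongated along its own optical axis (sigma_z); units are arbitrary.
sigma_r, sigma_z = 0.2, 0.6
y, x = np.mgrid[-2:2:401j, -2:2:401j]

psf_a = np.exp(-(x**2 / (2 * sigma_r**2) + y**2 / (2 * sigma_z**2)))  # axis along y
psf_b = np.exp(-(x**2 / (2 * sigma_z**2) + y**2 / (2 * sigma_r**2)))  # orthogonal axis along x

# Excitation requiring photons from both directions: the effective PSF is the
# product, whose widths along x and y become equal (nearly isotropic volume).
psf_eff = psf_a * psf_b
```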

  6. Electron energy-loss spectrometry at the frontier of spatial and energy resolution

    International Nuclear Information System (INIS)

    Hofer, F.; Grogger, W.; Kothleitner, G.

    2004-01-01

    Full text: Electron energy-loss spectroscopy (EELS) in the transmission electron microscope (TEM) is now used routinely as a means of measuring chemical and structural properties of very small regions of a thin specimen. The power of this technique depends significantly on two parameters: its spatial resolution and the energy resolution available in the spectrum and in the energy-filtered TEM (EFTEM) image. The cold field emission source and the Schottky emitter have made an energy resolution below 1 eV possible and it is now feasible to obtain data with a spatial resolution close to atomic dimensions, given the right instrumentation and specimen. EFTEM allows to record elemental maps at sub-nanometre resolution, being mainly limited by chromatic and spherical aberration of the objective lens and by delocalization of inelastic scattering. Recently the possibility of correcting spherical and even chromatic aberrations of electron lenses has become a practical reality thus improving the point resolution of the TEM to below 0.1 nm. The other limiting factor for EFTEM resolution is delocalization. However, recent measurements show that resolution values in the range of 1 nm and below can be achieved, even for energy-losses of only a few eV. In terms of energy-resolution, EELS and EFTEM compare less favourably with other spectroscopies. For common TEMs, the overall energy-resolution is mainly determined by the energy width of the electron source, typically between 0.5 and 1.5 eV. For comparison, synchrotron x-ray sources and beam line spectrometers, provide a resolution well below 0.1 eV for absorption spectroscopy. During the early sixties, the energy spread of an electron beam could be reduced by incorporating an energy-filter into the illumination system, but the system lacked spatial resolution. Later developments combined high energy resolution in the range of 0.1 eV with improved spatial resolution. Recently, FEI introduced a new high resolution EELS system based

  7. Point specificity in acupuncture

    Directory of Open Access Journals (Sweden)

    Choi Emma M

    2012-02-01

    Full Text Available Abstract The existence of point specificity in acupuncture is controversial, because many acupuncture studies using this principle to select control points have found that sham acupoints have similar effects to those of verum acupoints. Furthermore, the results of pain-related studies based on visual analogue scales have not supported the concept of point specificity. In contrast, hemodynamic, functional magnetic resonance imaging and neurophysiological studies evaluating the responses to stimulation of multiple points on the body surface have shown that point-specific actions are present. This review article focuses on clinical and laboratory studies supporting the existence of point specificity in acupuncture and also addresses studies that do not support this concept. Further research is needed to elucidate the point-specific actions of acupuncture.

  8. Comprehensive Interpretation of a Three-Point Gauss Quadrature with Variable Sampling Points and Its Application to Integration for Discrete Data

    Directory of Open Access Journals (Sweden)

    Young-Doo Kwon

    2013-01-01

    Full Text Available This study examined the characteristics of a variable three-point Gauss quadrature using a variable set of weighting factors and corresponding optimal sampling points. The major findings were as follows. The one-point, two-point, and three-point Gauss quadratures that adopt the Legendre sampling points and the well-known Simpson’s 1/3 rule were found to be special cases of the variable three-point Gauss quadrature. In addition, the three-point Gauss quadrature may have out-of-domain sampling points beyond the domain end points. By applying the quadratically extrapolated integrals and nonlinearity index, the accuracy of the integration could be increased significantly for evenly acquired data, which is popular with modern sophisticated digital data acquisition systems, without using higher-order extrapolation polynomials.
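    For orientation, the classical three-point Gauss-Legendre rule (the fixed-node special case of the variable scheme studied above) can be written in a few lines; the implementation below is a generic textbook version, not the authors' variable-node method.

```python
import numpy as np

def gauss3(f, a, b):
    """Classical three-point Gauss-Legendre quadrature of f on [a, b].

    Nodes -sqrt(3/5), 0, +sqrt(3/5) and weights 5/9, 8/9, 5/9 on [-1, 1],
    mapped to [a, b]; exact for polynomials up to degree 5.
    """
    nodes = np.array([-np.sqrt(3.0 / 5.0), 0.0, np.sqrt(3.0 / 5.0)])
    weights = np.array([5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0])
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    return xr * np.sum(weights * f(xm + xr * nodes))

# Exact for x**5 on [0, 1]: the integral is 1/6.
print(gauss3(lambda x: x**5, 0.0, 1.0))   # 0.16666...
```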

  9. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    International Nuclear Information System (INIS)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-01-01

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  10. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 080836 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  11. Near-point string: Simple method to demonstrate anticipated near point for multifocal and accommodating intraocular lenses.

    Science.gov (United States)

    George, Monica C; Lazer, Zane P; George, David S

    2016-05-01

    We present a technique that uses a near-point string to demonstrate the anticipated near point of multifocal and accommodating intraocular lenses (IOLs). Beads are placed on the string at distances corresponding to the near points for diffractive and accommodating IOLs. The string is held up to the patient's eye to demonstrate where each of the IOLs is likely to provide the best near vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  12. Vortex pinning by point defect in superconductors

    International Nuclear Information System (INIS)

    Liao Hongyin; Zhou Shiping; Du Haochen

    2003-01-01

    We apply the periodic time-dependent Ginzburg-Landau model to study vortex distribution in type-II superconductors with a point-like defect and square pinning array. A defect site will pin vortices, and a periodic pinning array with right geometric parameters, which can be any form designed in advance, shapes the vortex pattern as external magnetic field varies. The maximum length over which an attractive interaction between a pinning centre and a vortex extends is estimated to be about 6.0ξ. We also derive spatial distribution expressions for the order parameter, vector potential, magnetic field and supercurrent induced by a point defect. Theoretical results and numerical simulations are compared with each other and they are consistent

  13. Publication point indicators

    DEFF Research Database (Denmark)

    Elleby, Anita; Ingwersen, Peter

    2010-01-01

    The paper presents comparative analyses of two publication point systems, the Norwegian one and the in-house system from the interdisciplinary Danish Institute of International Studies (DIIS), used as the case in the study for publications published in 2006, and compares central citation-based indicators with novel publication point indicators (PPIs) that are formalized and exemplified. These include the Cumulated Publication Point Indicator (CPPI), which graphically illustrates the cumulated gain of obtained vs. ideal points, both seen as vectors, and the normalized Cumulated Publication Point Index (nCPPI), which represents the cumulated gain of publication success as index values, either graphically ... Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the cite delay and citedness for the different document types published by DIIS...

  14. Point-to-point radio link variation at E-band and its effect on antenna design

    NARCIS (Netherlands)

    Al-Rawi, A.; Dubok, A.; Herben, M.H.A.J.; Smolders, A.B.

    2015-01-01

    Radio propagation will strongly influence the design of the antenna and front-end components of E-band point-to-point communication systems. Based on the ITU rain model, the rain attenuation is estimated in a statistical sense and it is concluded that for backhaul links of 1–10 km, antennas with a

  15. Analysis of tree stand horizontal structure using random point field methods

    Directory of Open Access Journals (Sweden)

    O. P. Sekretenko

    2015-06-01

    Full Text Available This paper uses the model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees in uneven- and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry – to clarify the laws of natural thinning of a forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of an appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, selection of statistical functions to describe the horizontal structure of forest stands, and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data from three even-aged pine forest stands aged 25, 55, and 90 years, we demonstrate that spatial point process models are useful for combining measurements in forest stands of different ages to study natural forest stand thinning.

  16. Tipping Point

    Medline Plus

    Full Text Available 60 Seconds of Safety (Videos): The Tipping Point, by CPSC Blogger, September 22, 2009.

  17. Point Genetics: A New Concept to Assess Neutron Kinetics

    International Nuclear Information System (INIS)

    Klein Meulekamp, R.; Kuijper, J.C.; Schikorr, M.

    2005-01-01

    Point genetic equations are introduced. These equations are similar to the well-known point kinetic equations but characterize and couple individual fission generations in subcritical systems. Point genetic equations are able to describe dynamic behavior of source-driven subcritical systems on shorter timescales than is possible with point kinetic equations. Point genetic parameters can be used as a first-order characterization of the system and can be calculated using standard Monte Carlo techniques; the implementation in other calculational schemes seems straightforward. A Godiva sphere is considered to show the applicability of the point genetic equations in describing a detector response on short timescales. For this system the point genetic parameters are calculated and compared with reference calculations. Typical dynamic source behavior is considered by studying a transient in which the neutron source energy decreases from 20 to 1 MeV. For all cases studied, the point genetic equations are compared to full space-time kinetic solutions, and it is shown that point genetics performs well

  18. Resolution of point sources of light as analyzed by quantum detection theory.

    Science.gov (United States)

    Helstrom, C. W.

    1973-01-01

    The resolvability of point sources of incoherent thermal light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.

  19. Calibrated HDRI in 3D point clouds

    DEFF Research Database (Denmark)

    Bülow, Katja; Tamke, Martin

    2017-01-01

    3D-scanning technologies and point clouds as means for spatial representation introduce a new paradigm to the measuring and mapping of physical artefacts and space. This technology also offers possibilities for the measuring and mapping of outdoor urban lighting and has the potential to meet the challenges of dynamic smart lighting planning in outdoor urban space. This paper presents findings on how 3D capturing of outdoor environments combined with HDRI establishes a new way for analysing and representing the spatial distribution of light in combination with luminance data.

  20. Hybrid kriging methods for interpolating sparse river bathymetry point data

    Directory of Open Access Journals (Sweden)

    Pedro Velloso Gomes Batista

    Full Text Available Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK) and co-kriging (CK), employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of an (x, y) point to the river centerline as a covariate for RK and CK. Given that riverbed elevation variability is abrupt transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. In order to assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK). The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
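    The regression-kriging idea above (a trend on the distance-to-centerline covariate plus ordinary kriging of the residuals) can be sketched with scikit-learn and the open-source pykrige package; the variogram model and the data handling here are placeholders rather than the study's settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging   # pip install pykrige

def regression_kriging(x, y, z, dist, x_new, y_new, dist_new):
    """Regression kriging of riverbed elevation z at new (x_new, y_new) locations.

    dist / dist_new: orthogonal distance to the river centerline (the covariate).
    All inputs are 1D NumPy arrays.
    """
    # 1) Trend: linear regression of elevation on the distance covariate.
    trend = LinearRegression().fit(dist.reshape(-1, 1), z)
    resid = z - trend.predict(dist.reshape(-1, 1))
    # 2) Ordinary kriging of the trend residuals.
    ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")
    resid_pred, _ = ok.execute("points", x_new, y_new)
    # 3) Prediction = trend at the new covariate values + kriged residual.
    return trend.predict(dist_new.reshape(-1, 1)) + resid_pred
```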

  1. The succinonitrile triple-point standard: a fixed point to improve the accuracy of temperature measurements in the clinical laboratory.

    Science.gov (United States)

    Mangum, B W

    1983-07-01

    In an investigation of the melting and freezing behavior of succinonitrile, the triple-point temperature was determined to be 58.0805 degrees C, with an estimated uncertainty of +/- 0.0015 degrees C relative to the International Practical Temperature Scale of 1968 (IPTS-68). The triple-point temperature of this material is evaluated as a temperature-fixed point, and some clinical laboratory applications of this fixed point are proposed. In conjunction with the gallium and ice points, the availability of succinonitrile permits thermistor thermometers to be calibrated accurately and easily on the IPTS-68.

  2. Forecasting Global Rainfall for Points Using ECMWF's Global Ensemble and Its Applications in Flood Forecasting

    Science.gov (United States)

    Pillosu, F. M.; Hewson, T.; Mazzetti, C.

    2017-12-01

    Prediction of local extreme rainfall has historically been the remit of nowcasting and high resolution limited area modelling, which represent only limited areas, may not be spatially accurate, give reasonable results only for limited lead times (based statistical post-processing software ("ecPoint-Rainfall, ecPR", operational in 2017) that uses ECMWF Ensemble (ENS) output to deliver global probabilistic rainfall forecasts for points up to day 10. Firstly, ecPR applies a new notion of "remote calibration", which 1) allows us to replicate a multi-centennial training period using only one year of data, and 2) provides forecasts for anywhere in the world. Secondly, the software applies an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals, and of where biases in the model can be improved upon. A long-term verification has shown that the post-processed rainfall has better reliability and resolution at every lead time compared with ENS, and for large totals, ecPR outputs have the same skill at day 5 that the raw ENS has at day 1 (ROC area metric). ecPR could be used as input for hydrological models if its probabilistic output is modified according to the input requirements of hydrological models. Indeed, ecPR does not provide information on where the highest total is likely to occur inside the gridbox, nor on the spatial distribution of rainfall values nearby. "Scenario forecasts" could be a solution. They are derived from locating the rainfall peak in sensitive positions (e.g. urban areas), and then redistributing the remaining quantities in the gridbox, modifying traditional spatial correlation characterization methodologies (e.g. variogram analysis) in order to take account, for instance, of the type of rainfall forecast (stratiform, convective). Such an approach could be a turning point in the field of medium-range global real-time riverine flood forecasts. This presentation will

  3. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    Science.gov (United States)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  4. Recording Approach of Heritage Sites Based on Merging Point Clouds from High Resolution Photogrammetry and Terrestrial Laser Scanning

    Science.gov (United States)

    Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.

    2012-07-01

    Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has strongly changed in the last few years, always driven by technology. Accurate documentation is closely tied to advances in technology (imaging sensors, high speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). We want to focus in this paper on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow relies on the site configuration, the performance of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time of flight or phase shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or as a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution to get 3D point clouds. Moreover, the captured images might also be used for model texturing. Several software packages are available, whether web-based, open source or commercial. The main advantage of this photogrammetric or computer vision based technology is to get

  5. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    International Nuclear Information System (INIS)

    Marcori, Oton H.; Pereira, Thiago S.

    2017-01-01

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N -point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  6. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    Energy Technology Data Exchange (ETDEWEB)

    Marcori, Oton H.; Pereira, Thiago S., E-mail: otonhm@hotmail.com, E-mail: tspereira@uel.br [Departamento de Física, Universidade Estadual de Londrina, 86057-970, Londrina PR (Brazil)

    2017-02-01

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N -point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  7. Critical-point nuclei

    International Nuclear Information System (INIS)

    Clark, R.M.

    2004-01-01

    It has been suggested that a change of nuclear shape may be described in terms of a phase transition and that specific nuclei may lie close to the critical point of the transition. Analytical descriptions of such critical-point nuclei have been introduced recently and they are described briefly. The results of extensive searches for possible examples of critical-point behavior are presented. Alternative pictures, such as describing bands in the candidate nuclei using simple ΔK = 0 and ΔK = 2 rotational-coupling models, are discussed, and the limitations of the different approaches highlighted. A possible critical-point description of the transition from a vibrational to rotational pairing phase is suggested

  8. The registration of non-cooperative moving targets laser point cloud in different view point

    Science.gov (United States)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    Non-cooperative moving target multi-view point cloud registration is a key technology for 3D reconstruction in laser three-dimensional imaging. The main problem is that point density changes greatly and noise is present under the different acquisition conditions of the point clouds. In this paper, a feature descriptor is first used to find the most similar point cloud; then, in a registration algorithm based on region segmentation, the geometric structure around each point is extracted from the point-to-point geometric similarity. The point cloud is divided into regions based on spectral clustering, feature descriptors are created for each region, the most similar regions are searched for in the most similar view's point cloud, and the pair of point clouds is then aligned by aligning their minimum bounding boxes. These steps are repeated until the registration of all point clouds is completed. Experiments show that this method is insensitive to point cloud density and performs well under the noise of laser three-dimensional imaging.
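    As a generic illustration of the coarse alignment step (aligning the clouds via their dominant geometry, in the spirit of matching minimum bounding boxes), the sketch below matches centroids and principal axes; it is not the authors' region-segmentation pipeline and it ignores the axis sign/order ambiguities a robust implementation must resolve.

```python
import numpy as np

def coarse_align(source, target):
    """Coarse rigid alignment of two (N, 3) point clouds by matching centroids and principal axes.

    A crude stand-in for aligning minimum bounding boxes; in practice the sign and
    ordering of the principal axes would also need to be disambiguated.
    """
    def frame(pts):
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
        return c, vt                      # rows of vt are the principal axes
    cs, vs = frame(source)
    ct, vt_ = frame(target)
    R = vt_.T @ vs                        # rotate source axes onto target axes
    if np.linalg.det(R) < 0:              # keep a proper rotation (no reflection)
        vs[2] *= -1
        R = vt_.T @ vs
    return (source - cs) @ R.T + ct       # transformed source cloud
```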

  9. [Dancing with Pointe Shoes: Characteristics and Assessment Criteria for Pointe Readiness].

    Science.gov (United States)

    Wanke, Eileen M; Exner-Grave, Elisabeth

    2017-12-01

    Training with pointe shoes is an integral part of professional dance education and ambitious hobby dancing. Pointe shoes - developed more than a hundred years ago and almost unaltered since then - are highly specific and strike a balance between aesthetics, function, protection, and health care. Therefore, pointe readiness should be tested prior to all dance training or career training. Medical specialists are often confronted with this issue. Specific anatomical, dance-technique-orientated general conditional and coordinative preconditions, as well as dance-technical prerequisites, must be assessed in pointe readiness tests in order to keep traumatic injuries or long-term damage to a minimum. In addition to a (training) history, medical counselling sessions have come to include various tests that enable a reliable decision for or against pointe work. This article suggests adequate testing procedures (STT TEST), taking account of professional dancing as well as hobby dancing. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Evaluation of null-point detection methods on simulation data

    Science.gov (United States)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as they are for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.

  11. Rigorous Line-Based Transformation Model Using the Generalized Point Strategy for the Rectification of High Resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Kun Hu

    2016-09-01

    Full Text Available High precision geometric rectification of High Resolution Satellite Imagery (HRSI) is the basis of digital mapping and Three-Dimensional (3D) modeling. Taking advantage of line features as basic geometric control conditions instead of control points, the Line-Based Transformation Model (LBTM) provides a practical and efficient way of image rectification. It can accurately build the mathematical relationship between image space and the corresponding object space, while dramatically reducing the workloads of ground control and feature recognition. Based on a generalization and analysis of existing LBTMs, a novel rigorous LBTM is proposed in this paper, which can further eliminate the geometric deformation caused by sensor inclination and terrain variation. This improved nonlinear LBTM is constructed based on a generalized point strategy and resolved by least squares overall adjustment. Geo-positioning accuracy experiments with IKONOS, GeoEye-1 and ZiYuan-3 satellite imagery are performed to compare the rigorous LBTM with other relevant line-based and point-based transformation models. Both theoretical analysis and experimental results demonstrate that the rigorous LBTM is more accurate and reliable without adding extra ground control. The geo-positioning accuracy of satellite imagery rectified by the rigorous LBTM can reach about one pixel with eight control lines and can be further improved by optimizing the horizontal and vertical distribution of control lines.

  12. Point Clouds to Indoor/outdoor Accessibility Diagnosis

    Science.gov (United States)

    Balado, J.; Díaz-Vilariño, L.; Arias, P.; Garrido, I.

    2017-09-01

    This work presents an approach to automatically detect structural floor elements such as steps or ramps in the immediate environment of buildings, elements that may affect the accessibility to buildings. The methodology is based on Mobile Laser Scanner (MLS) point clouds and trajectory information. First, the street is segmented into stretches along the trajectory of the MLS in order to work in regular spaces. Next, the lower region of each stretch (the ground zone) is selected as the ROI, and normal, curvature and tilt are calculated for each point. With this information, points in the ROI are classified as horizontal, inclined or vertical. Points are refined and grouped into structural elements using a raster process and connected components, in different phases for each type of previously classified points. Finally, the trajectory data are used to distinguish between road and sidewalks. Adjacency information is used to classify structural elements as steps, ramps, curbs and curb-ramps. The methodology is tested in a real case study, consisting of 100 m of an urban street. Ground elements are correctly classified in an acceptable computation time. Steps and ramps are also exported to GIS software to enrich building models from Open Street Map with information about accessible/inaccessible entrances and their locations.
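    The horizontal/inclined/vertical split from point normals can be sketched as below; the tilt thresholds are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def classify_by_normal(normals, horiz_max_deg=10.0, vert_min_deg=70.0):
    """Label each point as 'horizontal', 'inclined' or 'vertical' from its unit normal.

    Tilt = angle between the point normal and the vertical axis (0 deg = flat ground).
    Threshold values are illustrative only.
    """
    nz = np.clip(np.abs(normals[:, 2]), 0.0, 1.0)
    tilt = np.degrees(np.arccos(nz))
    labels = np.full(len(normals), "inclined", dtype=object)
    labels[tilt <= horiz_max_deg] = "horizontal"
    labels[tilt >= vert_min_deg] = "vertical"
    return labels

# Example: flat ground, a ~30 degree ramp surface and a vertical wall.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.5, 0.866],
                    [1.0, 0.0, 0.0]])
print(classify_by_normal(normals))   # ['horizontal' 'inclined' 'vertical']
```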

  13. Development of an Objective High Spatial Resolution Soil Moisture Index

    Science.gov (United States)

    Zavodsky, B.; Case, J.; White, K.; Bell, J. R.

    2015-12-01

    Drought detection, analysis, and mitigation have become a key challenge for a diverse set of decision makers, including but not limited to operational weather forecasters, climatologists, agricultural interests, and water resource management. One tool that is heavily used is the United States Drought Monitor (USDM), which is derived from a complex blend of objective data and subjective analysis on a state-by-state basis using a variety of modeled and observed precipitation, soil moisture, hydrologic, and vegetation and crop health data. The NASA Short-term Prediction Research and Transition (SPoRT) Center currently runs a real-time configuration of the Noah land surface model (LSM) within the NASA Land Information System (LIS) framework. The LIS-Noah is run at 3-km resolution for local numerical weather prediction (NWP) and situational awareness applications at select NOAA/National Weather Service (NWS) forecast offices over the Continental U.S. (CONUS). To enhance the practicality of the LIS-Noah output for drought monitoring and assessing flood potential, a 30+-year soil moisture climatology has been developed in an attempt to place near real-time soil moisture values in historical context at county- and/or watershed-scale resolutions. This LIS-Noah soil moisture climatology and the accompanying anomalies are intended to complement the current suite of operational products, such as the North American Land Data Assimilation System phase 2 (NLDAS-2), which are generated on a coarser-resolution grid that may not capture localized, yet important soil moisture features. Daily soil moisture histograms are used to identify the real-time soil moisture percentiles at each grid point according to the county or watershed in which the grid point resides. Spatial plots are then produced that map the percentiles as proxies to the different USDM categories. This presentation will highlight recent developments of this gridded, objective soil moisture index, comparison to subjective
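    The percentile computation that underlies such an index can be sketched in a few lines of NumPy; the array shapes and the climatology here are synthetic placeholders, not SPoRT LIS-Noah output.

```python
import numpy as np

def soil_moisture_percentile(climatology, current):
    """Percentile (0-100) of the current value within each grid point's climatology.

    climatology: (n_days, ny, nx) historical volumetric soil moisture for the grid.
    current:     (ny, nx) field for the day being assessed.
    """
    below = (climatology < current[None, :, :]).sum(axis=0)
    return 100.0 * below / climatology.shape[0]

# Toy example: 30 years x 365 days of history on a small grid.
rng = np.random.default_rng(1)
clim = rng.uniform(0.05, 0.45, size=(30 * 365, 4, 5))
today = rng.uniform(0.05, 0.45, size=(4, 5))
pct = soil_moisture_percentile(clim, today)   # e.g. low percentiles map to drier USDM-like categories
```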

  14. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    Directory of Open Access Journals (Sweden)

    Menard Daniel

    2006-01-01

    Full Text Available Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.

  15. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    Science.gov (United States)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
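    The basic quantization step behind any float-to-fixed conversion (choosing a Q-format and rounding) is easy to sketch; the code below is a generic illustration and does not reproduce the word-length optimisation methodology described in the paper.

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Quantize float values to signed fixed point Q(int_bits).(frac_bits).

    Returns the stored integers and the values they represent after scaling back.
    """
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits))          # most negative representable integer
    hi = (1 << (int_bits + frac_bits)) - 1
    q = np.clip(np.round(np.asarray(x) * scale), lo, hi).astype(np.int64)
    return q, q / scale

coeffs = [0.970031, -0.123456, 1.5]
raw, quantized = to_fixed_point(coeffs, int_bits=2, frac_bits=13)   # Q2.13 in a 16-bit word
max_error = np.max(np.abs(np.asarray(coeffs) - quantized))          # <= 2**-14 unless clipped
```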

  16. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention of other forms of surface reconstruction or of image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
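    Measuring distortion in the Hausdorff metric, as advocated above, can be prototyped with SciPy's directed Hausdorff distance; the decimation used here is only a crude stand-in for the paper's multiscale tree approximation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets a, b of shape (n, 3)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy check: a dense terrain patch versus a decimated copy of itself.
rng = np.random.default_rng(2)
cloud = rng.uniform(0, 100, size=(5000, 3))
decimated = cloud[::10]                     # crude stand-in for a multiscale approximation
print(hausdorff(cloud, decimated))          # distortion introduced by the decimation
```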

  17. Material-point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  18. Displacement fields from point cloud data: Application of particle imaging velocimetry to landslide geodesy

    Science.gov (United States)

    Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno

    2012-01-01

    Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.
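
    The core of the PIV step is a windowed cross-correlation between the two epochs; a minimal sketch (assuming the point clouds have already been gridded to rasters, and using synthetic data rather than the authors' processing chain) that recovers an integer-pixel offset from the correlation peak:

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_offset(window_a, window_b):
    """Integer-pixel displacement of features in window_b relative to window_a,
    taken from the peak of their cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = fftconvolve(a, b[::-1, ::-1], mode="full")    # cross-correlation via FFT
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return (window_b.shape[0] - 1 - peak[0], window_b.shape[1] - 1 - peak[1])

# Two synthetic rasters gridded from successive TLS epochs; the second is shifted by (3, -2)
rng = np.random.default_rng(1)
epoch1 = rng.random((64, 64))
epoch2 = np.roll(epoch1, shift=(3, -2), axis=(0, 1))
print(piv_offset(epoch1, epoch2))   # expected (3, -2)
```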

  19. PowerPoint 2010 Bible

    CERN Document Server

    Wempen, Faithe

    2010-01-01

    Master PowerPoint and improve your presentation skills - with one book! It's no longer enough to have slide after slide of text, bullets, and charts. It's not even enough to have good speaking skills if your PowerPoint slides bore your audience. Get the very most out of all that PowerPoint 2010 has to offer while also learning priceless tips and techniques for making good presentations in this new PowerPoint 2010 Bible. Well-known PowerPoint expert and author Faithe Wempen provides formatting tips; shows you how to work with drawings, tables, and SmartArt; introduces new collaboration tools; wa

  20. [A landscape ecological approach for urban non-point source pollution control].

    Science.gov (United States)

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particularity of urban land use and the increase of impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local repairing practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, firstly, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted fields; and secondly, adjusting inherent landscape structures or adding new landscape factors to form a new landscape pattern, and combining landscape planning and management by applying BMPs within planning, so as to improve urban landscape heterogeneity and control urban non-point source pollution.

  1. A three-point Taylor algorithm for three-point boundary value problems

    NARCIS (Netherlands)

    J.L. López; E. Pérez Sinusía; N.M. Temme (Nico)

    2011-01-01

    We consider second-order linear differential equations $\varphi(x)y''+f(x)y'+g(x)y=h(x)$ in the interval $(-1,1)$ with Dirichlet, Neumann or mixed Dirichlet-Neumann boundary conditions given at three points of the interval: the two extreme points $x=\pm 1$ and an interior point

  2. Spatial interpolation of hourly precipitation and dew point temperature for the identification of precipitation phase and hydrologic response in a mountainous catchment

    Science.gov (United States)

    Garen, D. C.; Kahl, A.; Marks, D. G.; Winstral, A. H.

    2012-12-01

    In mountainous catchments, it is well known that meteorological inputs, such as precipitation, air temperature, humidity, etc. vary greatly with elevation, spatial location, and time. Understanding and monitoring catchment inputs is necessary in characterizing and predicting hydrologic response to these inputs. This is true at all times, but it is most critical during large storms, when the input to the stream system due to rain and snowmelt creates the potential for flooding. Besides such crisis events, however, proper estimation of catchment inputs and their spatial distribution is also needed in more prosaic but no less important water and related resource management activities. The first objective of this study is to apply a geostatistical spatial interpolation technique (elevationally detrended kriging) to precipitation and dew point temperature on an hourly basis and explore its characteristics, accuracy, and other issues. The second objective is to use these spatial fields to determine precipitation phase (rain or snow) during a large, dynamic winter storm. The catchment studied is the data-rich Reynolds Creek Experimental Watershed near Boise, Idaho. As part of this analysis, precipitation-elevation lapse rates are examined for spatial and temporal consistency. A clear dependence of lapse rate on precipitation amount exists. Certain stations, however, are outliers from these relationships, showing that significant local effects can be present and raising the question of whether such stations should be used for spatial interpolation. Experiments with selecting subsets of stations demonstrate the importance of elevation range and spatial placement on the interpolated fields. Hourly spatial fields of precipitation and dew point temperature are used to distinguish precipitation phase during a large rain-on-snow storm in December 2005. This application demonstrates the feasibility of producing hourly spatial fields and the importance of doing
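
    A minimal sketch of the detrend-interpolate-retrend idea behind elevationally detrended interpolation (here the residuals are interpolated with simple inverse-distance weighting rather than a fitted variogram, and the station coordinates, elevations and hourly precipitation values are made up):

```python
import numpy as np

def detrended_interpolation(x, y, z_elev, value, xi, yi, zi_elev, power=2.0):
    """Fit a linear value-elevation lapse rate, interpolate the residuals by IDW,
    then add the trend back at the target elevation."""
    slope, intercept = np.polyfit(z_elev, value, 1)      # lapse rate per metre
    resid = value - (slope * z_elev + intercept)
    d = np.hypot(xi - x, yi - y)
    if np.any(d < 1e-9):                                 # target coincides with a station
        resid_i = resid[np.argmin(d)]
    else:
        w = 1.0 / d ** power
        resid_i = np.sum(w * resid) / np.sum(w)
    return slope * zi_elev + intercept + resid_i

# Hypothetical hourly precipitation (mm) at five stations
x = np.array([0.0, 3.0, 6.0, 2.0, 8.0])                  # km
y = np.array([0.0, 4.0, 1.0, 7.0, 5.0])                  # km
elev = np.array([1100., 1450., 1300., 1800., 1600.])     # m
precip = np.array([1.2, 2.0, 1.6, 3.1, 2.4])
print(detrended_interpolation(x, y, elev, precip, xi=4.0, yi=3.0, zi_elev=1500.0))
```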

  3. A semi-analytical stationary model of a point-to-plane corona discharge

    International Nuclear Information System (INIS)

    Yanallah, K; Pontiga, F

    2012-01-01

    A semi-analytical model of a dc corona discharge is formulated to determine the spatial distribution of charged particles (electrons, negative ions and positive ions) and the electric field in pure oxygen using a point-to-plane electrode system. A key point in the modeling is the integration of Gauss' law and the continuity equation of charged species along the electric field lines, and the use of Warburg's law and the corona current–voltage characteristics as input data in the boundary conditions. The electric field distribution predicted by the model is compared with the numerical solution obtained using a finite-element technique. The semi-analytical solutions are obtained at a negligible computational cost, and provide useful information to characterize and control the corona discharge in different technological applications. (paper)

  4. Laser Dew-Point Hygrometer

    Science.gov (United States)

    Matsumoto, Shigeaki; Toyooka, Satoru

    1995-01-01

    A rough-surface-type automatic dew-point hygrometer was developed using a laser diode and an optical fiber cable. A gold plate with 0.8 µm average surface roughness was used as a surface for deposition of dew to facilitate dew deposition and prevent supersaturation of water vapor at the dew point. It was shown experimentally that the quantity of dew deposited can be controlled to be constant at any predetermined level, and is independent of the dew point to be measured. The dew points were measured in the range from -15 °C to 54 °C, in which the temperature ranged from 0 °C to 60 °C. The measurement error of the dew point was ±0.5 °C, which was equal to below ±2% in relative humidity in the above dew-point range.

  5. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kotasidis, Fotis A. [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ, Manchester (United Kingdom); Angelis, Georgios I. [Faculty of Health Sciences, Brain and Mind Research Institute, University of Sydney, NSW 2006, Sydney (Australia); Anton-Rodriguez, Jose; Matthews, Julian C. [Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Reader, Andrew J. [Montreal Neurological Institute, McGill University, Montreal QC H3A 2B4, Canada and Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King's College London, St. Thomas’ Hospital, London SE1 7EH (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30 001, Groningen 9700 RB (Netherlands)

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  6. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    International Nuclear Information System (INIS)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  7. Isotope specific resolution recovery image reconstruction in high resolution PET imaging.

    Science.gov (United States)

    Kotasidis, Fotis A; Angelis, Georgios I; Anton-Rodriguez, Jose; Matthews, Julian C; Reader, Andrew J; Zaidi, Habib

    2014-05-01

    Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction. The

  8. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first important step. Valid feature points depend on the threshold. If the threshold is too low, plenty of feature points will be detected, and they may be aggregated in the rich texture regions, which consequently not only affects the speed of feature description, but also aggravates the burden of the following processing; if the threshold is set high, feature points in poorly textured areas will be lacking. To solve these problems, this paper proposes a threshold auto-adjustment method of feature extraction based on a grid. By dividing the image into a number of grid cells, a threshold is set in every local grid cell for extracting the feature points. When the number of feature points does not meet the threshold requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids the aggregation of feature points and greatly reduces the complexity of the following work.
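
    A minimal OpenCV-based sketch of the idea; the cell size, thresholds, target count per cell and input file name are hypothetical parameters, not the values used in the paper. FAST corners are detected cell by cell and the threshold is lowered in cells that yield too few points.

```python
import cv2
import numpy as np

def grid_adaptive_fast(gray, cell=64, target=20, t_start=40, t_min=5):
    """FAST keypoints per grid cell, relaxing the threshold until `target` points are found."""
    points = []
    h, w = gray.shape
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            patch = gray[y0:y0 + cell, x0:x0 + cell]
            t, kps = t_start, []
            while t >= t_min:
                detector = cv2.FastFeatureDetector_create(threshold=t)
                kps = detector.detect(patch)
                if len(kps) >= target:
                    break
                t -= 5                                   # auto-adjust: lower threshold, retry
            # collect coordinates in full-image reference frame
            points.extend((kp.pt[0] + x0, kp.pt[1] + y0) for kp in kps)
    return points

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input image
if img is not None:
    print(len(grid_adaptive_fast(img)), "feature points")
```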

  9. Influence of the burning point on the dew point in a diesel engine

    Energy Technology Data Exchange (ETDEWEB)

    Teetz, C.

    1982-06-01

    A computation of the influence of the ignition point on the dew point in a cylinder of a diesel engine is presented. The cylinder-pressure diagrams are shown. The results of the computation are given. A later ignition point diminishes the area with cylinder wall temperatures below the dew point. The risk posed by cylinder wall temperatures below the dew point is illustrated.

  10. TeleHealth networks: Instant messaging and point-to-point communication over the internet

    Energy Technology Data Exchange (ETDEWEB)

    Sachpazidis, Ilias [Fraunhofer Institute for Computer Graphics, Fraunhoferstr. 5, D-64283, Darmstadt (Germany)]. E-mail: Ilias.Sachpazidis@igd.fraunhofer.de; Ohl, Roland [MedCom Gesellschaft fuer medizinische Bildverarbeitung mbH, Runderturmstr. 12, D-64283, Darmstadt (Germany); Kontaxakis, George [Universidad Politecnica de Madrid, ETSI Telecomunicacion, Madrid 28040 (Spain); Sakas, Georgios [Fraunhofer Institute for Computer Graphics, Fraunhoferstr. 5, D-64283, Darmstadt (Germany)

    2006-12-20

    This paper explores the advantages and disadvantages of a medical network based on point-to-point communication and a medical network based on the Jabber instant messaging protocol. Instant messaging might be, for many people, a convenient way of chatting over the Internet. We will attempt to illustrate how an instant messaging protocol could best serve medical services and provide great flexibility to the parties involved. Additionally, the directory services and presence status offered by the Jabber protocol make it very attractive to medical applications that need to have real-time and store-and-forward communication. Furthermore, doctors connected to the Internet via high-speed networks could benefit by saving time due to the data transmission acceleration over Jabber.

  11. TeleHealth networks: Instant messaging and point-to-point communication over the internet

    International Nuclear Information System (INIS)

    Sachpazidis, Ilias; Ohl, Roland; Kontaxakis, George; Sakas, Georgios

    2006-01-01

    This paper explores the advantages and disadvantages of a medical network based on point-to-point communication and a medical network based on the Jabber instant messaging protocol. Instant messaging might be, for many people, a convenient way of chatting over the Internet. We will attempt to illustrate how an instant messaging protocol could best serve medical services and provide great flexibility to the parties involved. Additionally, the directory services and presence status offered by the Jabber protocol make it very attractive to medical applications that need to have real-time and store-and-forward communication. Furthermore, doctors connected to the Internet via high-speed networks could benefit by saving time due to the data transmission acceleration over Jabber

  12. TeleHealth networks: Instant messaging and point-to-point communication over the internet

    Science.gov (United States)

    Sachpazidis, Ilias; Ohl, Roland; Kontaxakis, George; Sakas, Georgios

    2006-12-01

    This paper explores the advantages and disadvantages of a medical network based on point-to-point communication and a medical network based on the Jabber instant messaging protocol. Instant messaging might be, for many people, a convenient way of chatting over the Internet. We will attempt to illustrate how an instant messaging protocol could best serve medical services and provide great flexibility to the parties involved. Additionally, the directory services and presence status offered by the Jabber protocol make it very attractive to medical applications that need to have real-time and store-and-forward communication. Furthermore, doctors connected to the Internet via high-speed networks could benefit by saving time due to the data transmission acceleration over Jabber.

  13. Simulating Ice Shelf Response to Potential Triggers of Collapse Using the Material Point Method

    Science.gov (United States)

    Huth, A.; Smith, B. E.

    2017-12-01

    Weakening or collapse of an ice shelf can reduce the buttressing effect of the shelf on its upstream tributaries, resulting in sea level rise as the flux of grounded ice into the ocean increases. Here we aim to improve sea level rise projections by developing a prognostic 2D plan-view model that simulates the response of an ice sheet/ice shelf system to potential triggers of ice shelf weakening or collapse, such as calving events, thinning, and meltwater ponding. We present initial results for Larsen C. Changes in local ice shelf stresses can affect flow throughout the entire domain, so we place emphasis on calibrating our model to high-resolution data and precisely evolving fracture-weakening and ice geometry throughout the simulations. We primarily derive our initial ice geometry from CryoSat-2 data, and initialize the model by conducting a dual inversion for the ice viscosity parameter and basal friction coefficient that minimizes mismatch between modeled velocities and velocities derived from Landsat data. During simulations, we implement damage mechanics to represent fracture-weakening, and track ice thickness evolution, grounding line position, and ice front position. Since these processes are poorly represented by the Finite Element Method (FEM) due to mesh resolution issues and numerical diffusion, we instead implement the Material Point Method (MPM) for our simulations. In MPM, the ice domain is discretized into a finite set of Lagrangian material points that carry all variables and are tracked throughout the simulation. Each time step, information from the material points is projected to a Eulerian grid where the momentum balance equation (shallow shelf approximation) is solved similarly to FEM, but essentially treating the material points as integration points. The grid solution is then used to determine the new positions of the material points and update variables such as thickness and damage in a diffusion-free Lagrangian frame. The grid does not store
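
    For readers unfamiliar with the particle-to-grid projection described above, here is a minimal 1D sketch of generic MPM bookkeeping (not the ice-shelf model itself): mass and momentum are transferred from material points to grid nodes with linear shape functions, after which a momentum balance would be solved on the grid. All values are illustrative.

```python
import numpy as np

# 1D grid and a handful of material points (all numbers are made up)
dx, n_nodes = 1.0, 6
x_nodes = np.arange(n_nodes) * dx
xp = np.array([0.3, 1.2, 1.7, 3.4, 4.1])     # material-point positions
mp = np.full_like(xp, 2.0)                   # material-point masses
vp = np.array([0.5, 0.5, 0.4, -0.2, -0.1])   # material-point velocities

mass_g = np.zeros(n_nodes)
mom_g = np.zeros(n_nodes)
for x, m, v in zip(xp, mp, vp):
    i = int(x // dx)                          # left node of the cell containing the point
    w_right = (x - x_nodes[i]) / dx           # linear (tent) shape-function weights
    w_left = 1.0 - w_right
    for node, w in ((i, w_left), (i + 1, w_right)):
        mass_g[node] += w * m                 # project mass onto the Eulerian grid
        mom_g[node] += w * m * v              # project momentum onto the Eulerian grid

vel_g = np.divide(mom_g, mass_g, out=np.zeros(n_nodes), where=mass_g > 0)
print("grid nodal velocities:", vel_g)
```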

  14. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

    We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule-Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers

  15. The spatial resolution of epidemic peaks.

    Directory of Open Access Journals (Sweden)

    Harriet L Mills

    2014-04-01

    Full Text Available The emergence of novel respiratory pathogens can challenge the capacity of key health care resources, such as intensive care units, that are constrained to serve only specific geographical populations. An ability to predict the magnitude and timing of peak incidence at the scale of a single large population would help to accurately assess the value of interventions designed to reduce that peak. However, current disease-dynamic theory does not provide a clear understanding of the relationship between: epidemic trajectories at the scale of interest (e.g. city); population mobility; and higher-resolution spatial effects (e.g. transmission within small neighbourhoods). Here, we used a spatially-explicit stochastic meta-population model of arbitrary spatial resolution to determine the effect of resolution on model-derived epidemic trajectories. We simulated an influenza-like pathogen spreading across theoretical and actual population densities and varied our assumptions about mobility using Latin-Hypercube sampling. Even though, by design, cumulative attack rates were the same for all resolutions and mobilities, peak incidences were different. Clear thresholds existed for all tested populations, such that models with resolutions lower than the threshold substantially overestimated population-wide peak incidence. The effect of resolution was most important in populations which were of lower density and lower mobility. With the expectation of accurate spatial incidence datasets in the near future, our objective was to provide a framework for how to use these data correctly in a spatial meta-population model. Our results suggest that there is a fundamental spatial resolution for any pathogen-population pair. If underlying interactions between pathogens and spatially heterogeneous populations are represented at this resolution or higher, accurate predictions of peak incidence for city-scale epidemics are feasible.

  16. The correlation function for density perturbations in an expanding universe. III The three-point and predictions of the four-point and higher order correlation functions

    Science.gov (United States)

    Mcclelland, J.; Silk, J.

    1978-01-01

    Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.

  17. Theoretical limit of spatial resolution in diffuse optical tomography using a perturbation model

    International Nuclear Information System (INIS)

    Konovalov, A B; Vlasov, V V

    2014-01-01

    We have assessed the limit of spatial resolution of time-domain diffuse optical tomography (DOT) based on a perturbation reconstruction model. From the viewpoint of the structure reconstruction accuracy, three different approaches to solving the inverse DOT problem are compared. The first approach involves reconstruction of diffuse tomograms from straight lines, the second – from average curvilinear trajectories of photons and the third – from total banana-shaped distributions of photon trajectories. In order to obtain estimates of resolution, we have derived analytical expressions for the point spread function and modulation transfer function, as well as have performed a numerical experiment on reconstruction of rectangular scattering objects with circular absorbing inhomogeneities. It is shown that in passing from reconstruction from straight lines to reconstruction using distributions of photon trajectories we can improve resolution by almost an order of magnitude and exceed the accuracy of reconstruction of multi-step algorithms used in DOT. (optical tomography)

  18. Accelerated high-resolution photoacoustic tomography via compressed sensing

    Science.gov (United States)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.

  19. A REST Service for Triangulation of Point Sets Using Oriented Matroids

    Directory of Open Access Journals (Sweden)

    José Antonio Valero Medina

    2014-05-01

    Full Text Available This paper describes the implementation of a prototype REST service for triangulation of point sets collected by mobile GPS receivers. The first objective of this paper is to test functionalities of an application, which exploits mobile devices’ capabilities to get data associated with their spatial location. A triangulation of a set of points provides a mechanism through which it is possible to produce an accurate representation of spatial data. Such triangulation may be used for representing surfaces by Triangulated Irregular Networks (TINs), and for decomposing complex two-dimensional spatial objects into simpler geometries. The second objective of this paper is to promote the use of oriented matroids for finding alternative solutions to spatial data processing and analysis tasks. This study focused on the particular case of the calculation of triangulations based on oriented matroids. The prototype described in this paper used a wrapper to integrate and expose several tools previously implemented in C++.
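
    For comparison with the oriented-matroid approach, a conventional triangulation of a set of collected GPS points can be obtained in a few lines with SciPy's Delaunay wrapper; the coordinates below are made up, and the service described in the paper instead wraps C++ oriented-matroid tools behind a REST interface.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical planar coordinates of points collected by mobile GPS receivers
points = np.array([[0.0, 0.0], [4.0, 0.5], [2.0, 3.0],
                   [5.5, 2.5], [1.0, 5.0], [4.5, 4.8]])

tri = Delaunay(points)
print("triangles (vertex indices):")
print(tri.simplices)        # each row is one triangle of the resulting TIN
```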

  20. Determination of the impact of RGB points cloud attribute quality on color-based segmentation process

    Directory of Open Access Journals (Sweden)

    Bartłomiej Kraszewski

    2015-06-01

    Full Text Available The article presents the results of research on the effect that the radiometric quality of point cloud RGB attributes has on color-based segmentation. In the research, a point cloud with a resolution of 5 mm, received from a FARO Photon 120 scanner, described a fragment of an office room, and color images were taken by various digital cameras. The images were acquired by an SLR Nikon D3X and an SLR Canon D200 integrated with the laser scanner, a compact camera Panasonic TZ-30 and a mobile phone digital camera. Color information from the images was spatially related to the point cloud in FARO Scene software. The color-based segmentation of the testing data was performed with the use of a developed application named “RGB Segmentation”. The application was based on the public Point Cloud Library (PCL) and allowed to extract subsets of points fulfilling the criteria of segmentation from the source point cloud using the region growing method. Using the developed application, the segmentation of four tested point clouds containing different RGB attributes from various images was performed. Evaluation of the segmentation process was performed based on a comparison of segments acquired using the developed application and extracted manually by an operator. The following items were compared: the number of obtained segments, the number of correctly identified objects and the correctness of the segmentation process. The best correctness of segmentation and most identified objects were obtained using the data with RGB attributes from the Nikon D3X images. Based on the results it was found that the quality of the RGB attributes of the point cloud had an impact only on the number of identified objects. In the case of the correctness of the segmentation, as well as its error, no apparent relationship between the quality of color information and the result of the process was found. Keywords: terrestrial laser scanning, color-based segmentation, RGB attribute, region growing method, digital images, point cloud
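
    A generic sketch of the color-based region-growing idea, written as a plain NumPy/SciPy reimplementation for illustration (it is not the PCL code or the “RGB Segmentation” application, and the radius, color tolerance and synthetic cloud are assumptions): points are added to a segment when a spatial neighbour is close enough in RGB.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_growing_rgb(xyz, rgb, radius=0.05, color_tol=20.0):
    """Label points by growing regions over spatial neighbours with similar colour."""
    tree = cKDTree(xyz)
    labels = np.full(len(xyz), -1, dtype=int)
    current = 0
    for seed in range(len(xyz)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(xyz[idx], r=radius):
                if labels[nb] == -1 and np.linalg.norm(rgb[nb] - rgb[idx]) < color_tol:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    return labels

# Synthetic stand-in for a scanner point cloud with RGB attributes (two colour regions)
rng = np.random.default_rng(2)
xyz = rng.random((2000, 3))
rgb = np.where(xyz[:, :1] < 0.5, [200.0, 60.0, 60.0], [60.0, 60.0, 200.0])
rgb = rgb + rng.normal(0, 3, (2000, 3))
labels = region_growing_rgb(xyz, rgb, radius=0.08, color_tol=25.0)
print("segments found:", labels.max() + 1)
```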

  1. Critical points for finite Fibonacci chains of point delta-interactions and orthogonal polynomials

    International Nuclear Information System (INIS)

    De Prunele, E

    2011-01-01

    For a one-dimensional Schroedinger operator with a finite number n of point delta-interactions with a common intensity, the parameters are the intensity, the n - 1 intercenter distances and the mass. Critical points are points in the parameters space of the Hamiltonian where one bound state appears or disappears. The study of critical points for Hamiltonians with point delta-interactions arranged along a Fibonacci chain is shown to be closely related to the study of the so-called Fibonacci operator, a discrete one-dimensional Schroedinger-type operator, which occurs in the context of tight binding Hamiltonians. These critical points are the zeros of orthogonal polynomials previously studied in the context of special diatomic linear chains with elastic nearest-neighbor interaction. Properties of the zeros (location, asymptotic behavior, gaps, ...) are investigated. The perturbation series from the solvable periodic case is determined. The measure which yields orthogonality is investigated numerically from the zeros. It is shown that the transmission coefficient at zero energy can be expressed in terms of the orthogonal polynomials and their associated polynomials. In particular, it is shown that when the number of point delta-interactions is equal to a Fibonacci number minus 1, i.e. when the intervals between point delta-interactions form a palindrome, all the Fibonacci chains at critical points are completely transparent at zero energy. (paper)

  2. ["Point by point" approach to structure-function correlation of glaucoma on the ganglion cell complex in the posterior pole].

    Science.gov (United States)

    Zeitoun, M

    2017-01-01

    To try to establish a "point by point" relationship between the local thickness of the retinal ganglion cell complex and its sensitivity. In total, 104 glaucomatous eyes of 89 patients with a confirmed 24-2 visual field were measured by superimposing the visual field, using imaging software, on the Wide 40° by 30° measurements of the retinal ganglion cell complex obtained from the Topcon© 3D 2000 OCT, after upward adjustment, inversion and scaling. Visual fields were classified into two groups according to the extent of the disease: 58 mild to moderate (MD up to -12 dB), and 46 severe (MD beyond -12 dB). The 6 mm by 6 mm central region, equipped with a normative database, was studied, corresponding to 16 points in the visual field. These points were individually matched one by one to the local ganglion cell complex, which was classified into 2 groups depending on whether it was greater or less than 70 microns. The normative database confirmed the pathological nature of the thin areas, with a significance of 95 to 99%. Displacement of central retinal ganglion cells was compensated for. Of 1664 points (16 central points for 104 eyes), 283 points were found to be "borderline" and excluded. Of the 1381 analyzed points, 727 points were classified as "over 70 microns" and 654 points "under 70 microns". (1) For all stages combined, 85.8% of the 727 points which were greater than 70 microns had a deviation between -3 and +3 dB: areas above 70 microns had no observable loss of light sensitivity. (2) In total, 92.5% of the 428 points having a gap ranging from -6 to -35 dB were located on ganglion cell complex areas below 70 microns: functional visual loss was identified in thin areas, which were less than 70 microns. (3) Areas which were less than 70 microns, that is 654 points, had quite variable sensitivity and can be divided into three groups: the first with preserved sensitivity, another with obliterated sensitivity, and an intermediate group connecting

  3. Pairwise contact energy statistical potentials can help to find probability of point mutations.

    Science.gov (United States)

    Saravanan, K M; Suvaithenamudhan, S; Parthasarathy, S; Selvaraj, S

    2017-01-01

    To adopt a particular fold, a protein requires several interactions between its amino acid residues. The energetic contribution of these residue-residue interactions can be approximated by extracting statistical potentials from known high resolution structures. Several methods based on statistical potentials extracted from unrelated proteins are found to make a better prediction of the probability of point mutations. We postulate that statistical potentials extracted from known structures of similar folds with varying sequence identity can be a powerful tool to examine the probability of point mutation. Keeping this in mind, we have derived pairwise residue and atomic contact energy potentials for the different functional families that adopt the (α/β)8 TIM-barrel fold. We carried out computational point mutations at various conserved residue positions in the yeast triose phosphate isomerase enzyme, for which experimental results are already reported. We have also performed molecular dynamics simulations on a subset of point mutants to make a comparative study. The difference in pairwise residue and atomic contact energy between the wildtype and various point mutations reveals the probability of mutations at a particular position. Interestingly, we found that our computational prediction agrees with the experimental studies of Silverman et al. (Proc Natl Acad Sci 2001;98:3092-3097) and performs better prediction than I-Mutant and the Cologne University Protein Stability Analysis Tool. The present work thus suggests that deriving pairwise contact energy potentials and performing molecular dynamics simulations of functionally important folds could help us to predict the probability of point mutations, which may ultimately reduce the time and cost of mutation experiments. Proteins 2016; 85:54-64. © 2016 Wiley Periodicals, Inc.

  4. An ArcGIS approach to include tectonic structures in point data regionalization.

    Science.gov (United States)

    Darsow, Andreas; Schafmeister, Maria-Theresia; Hofmann, Thilo

    2009-01-01

    Point data derived from drilling logs must often be regionalized. However, aquifers may show discontinuous surface structures, such as the offset of an aquitard caused by tectonic faults. One main challenge has been to incorporate these structures into the regionalization process of point data. We combined ordinary kriging and inverse distance weighted (IDW) interpolation to account for neotectonic structures in the regionalization process. The study area chosen to test this approach is the largest porous aquifer in Austria. It consists of three basins formed by neotectonic events and delimited by steep faults with a vertical offset of the aquitard up to 70 m within very short distances. First, ordinary kriging was used to incorporate the characteristic spatial variability of the aquitard location by means of a variogram. The tectonic faults could be included into the regionalization process by using breaklines with buffer zones. All data points inside the buffer were deleted. Last, IDW was performed, resulting in an aquitard map representing the discontinuous surface structures. This approach enables one to account for such surfaces using the standard software package ArcGIS; therefore, it could be adopted in many practical applications.
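
    A minimal sketch of the buffer idea using Shapely and plain inverse-distance weighting (synthetic borehole data and a made-up fault trace; the actual study combined ordinary kriging with IDW in ArcGIS): drill-log points falling inside a buffer around the fault trace are dropped before interpolation so that the offset is not smoothed across the fault.

```python
import numpy as np
from shapely.geometry import LineString, Point

# Hypothetical drill-log points (x, y, aquitard elevation) and a fault trace along x = 5
pts = np.array([[0, 0, 250.], [2, 1, 248.], [4, 0, 251.],
                [6, 1, 181.], [8, 0, 179.], [10, 1, 182.]])
fault = LineString([(5, -2), (5, 3)])
buffer_zone = fault.buffer(1.5)                      # buffer with 1.5 km half-width

# Drop points inside the buffer before interpolating
keep = np.array([not buffer_zone.contains(Point(px, py)) for px, py, _ in pts])
x, y, z = pts[keep].T

def idw(xi, yi, power=2.0):
    """Inverse-distance-weighted estimate of the aquitard elevation at (xi, yi)."""
    d = np.hypot(xi - x, yi - y)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * z) / np.sum(w)

print(idw(3.0, 0.5), idw(7.0, 0.5))   # estimates on either side of the fault
```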

  5. Introducing Nine-Point Circle to Junior High School Students

    Science.gov (United States)

    Fiangga, S.; Azizah, M. A. N.; Rini, R. N. K.; Hidayanti, A. N.

    2018-01-01

    The concept of circles is an ancient concept that has appeared since Ancient Egypt, and it has made many significant contributions to the development of mathematics up to the present. Nevertheless, the concept of circles hides many mysterious features, yet to be uncovered, that have applications in mathematics. One of these mysterious features is the Nine-Point Circle. This Nine-point circle is also known as Euler’s circle, the six-point circle, Feuerbach’s circle, the twelve-point circle, and many others. Because of these different names, there has been misunderstanding among mathematicians about the Nine-Point Circle’s history. Besides, the discussion of the Nine-Point Circle can be used as initial material to explain elementary geometry topics at the junior high school level in the curriculum of 2013. Therefore, this concept needs to be delivered to the students as a geometry introduction. A possible form of integrating the historical aspect of the Nine-point circle is suggested in this paper, as well as its importance in the curriculum of 2013.

  6. Current singularities at finitely compressible three-dimensional magnetic null points

    International Nuclear Information System (INIS)

    Pontin, D.I.; Craig, I.J.D.

    2005-01-01

    The formation of current singularities at line-tied two- and three-dimensional (2D and 3D, respectively) magnetic null points in a nonresistive magnetohydrodynamic environment is explored. It is shown that, despite the different separatrix structures of 2D and 3D null points, current singularities may be initiated in a formally equivalent manner. This is true no matter whether the collapse is triggered by flux imbalance within closed, line-tied null points or driven by externally imposed velocity fields in open, incompressible geometries. A Lagrangian numerical code is used to investigate the finite amplitude perturbations that lead to singular current sheets in collapsing 2D and 3D null points. The form of the singular current distribution is analyzed as a function of the spatial anisotropy of the null point, and the effects of finite gas pressure are quantified. It is pointed out that the pressure force, while never stopping the formation of the singularity, significantly alters the morphology of the current distribution as well as dramatically weakening its strength. The impact of these findings on 2D and 3D magnetic reconnection models is discussed

  7. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    Science.gov (United States)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom python scripts to model and analyze dynamics of coastal topography (Figure 1) and we outline development of a coastal analysis toolbox. The simulations focus on the particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools and demonstrate the power of workflows built with FOSS and highlight their interoperability. Figure 1. Isosurfaces representing evolution of the shoreline and a z=4.5m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.

  8. Pointing Verification Method for Spaceborne Lidars

    Directory of Open Access Journals (Sweden)

    Axel Amediek

    2017-01-01

    Full Text Available High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on the comparison of ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model and allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated by using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the used digital elevation model (DEM can be obtained.

  9. Professional SharePoint 2010 Development

    CERN Document Server

    Rizzo, Tom; Fried, Jeff

    2010-01-01

    Learn to leverage the features of the newest version of SharePoint, in this update to the bestseller. More than simply a portal, SharePoint is Microsoft's popular content management solution for building intranets and Web sites or hosting wikis and blogs. Offering broad coverage on all aspects of development for the SharePoint platform, this comprehensive book shows you exactly what SharePoint does, how to build solutions, and what features are accessible within SharePoint. Written by one of the most recognized names in SharePoint development, Professional SharePoint 2010 Development offers an

  10. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow flowing surface water network as well as nitrogen emission to the air from the warm oxygen deficient waters are certainly partly responsible, but also wetlands along the river banks could play an important role as nutrient sinks.

  11. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminate neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing......

  12. Learning power point 2000 easily

    Energy Technology Data Exchange (ETDEWEB)

    Mon, In Su; Je, Jung Suk

    2000-05-15

    This book introduces PowerPoint 2000, describing what PowerPoint is, what you can do with PowerPoint 2000, and whether it is possible to install PowerPoint 2000 on your computer. It covers running PowerPoint and the basics of PowerPoint, such as creating a new presentation, writing text, using text boxes, changing font size, color and shape, becoming a power user, inserting WordArt and creating a new file. It also deals with figures, charts, graphs, making multimedia files, presentations, and PowerPoint know-how for teachers and company workers.

  13. [Guidelines for the management of point-of care testing nonconformities according to the EN ISO 22870].

    Science.gov (United States)

    Houlbert, C; Annaix, V; Szymanowicz, A; Vassault, A; Guimont, M C; Pernet, P

    2012-02-01

    In this paper, guidelines are proposed to fulfill the requirements of the EN ISO 22870 standard regarding the management of point-of-care testing (POCT) nonconformities. In the first part, the main nonconformities that may affect POCT are given, and the means for their resolution and the control of adverse events are proposed. In the second part, we propose recommendations in case of unavailability of a point-of-care testing device, from the occurrence of the adverse event to the restarting of the device.

  14. California State Waters Map Series-Offshore of Point Reyes, California

    Science.gov (United States)

    Watt, Janet T.; Dartnell, Peter; Golden, Nadine E.; Greene, H. Gary; Erdey, Mercedes D.; Cochrane, Guy R.; Johnson, Samuel Y.; Hartwell, Stephen R.; Kvitek, Rikk G.; Manson, Michael W.; Endris, Charles A.; Dieter, Bryan E.; Sliter, Ray W.; Krigsman, Lisa M.; Lowe, Erik; Chinn, John L.; Watt, Janet T.; Cochran, Susan A.

    2015-01-01

    This publication about the Offshore of Point Reyes map area includes ten map sheets that contain explanatory text, in addition to this descriptive pamphlet and a data catalog of geographic information system (GIS) files. Sheets 1, 2, and 3 combine data from four different sonar surveys to generate comprehensive high-resolution bathymetry and acoustic-backscatter coverage of the map area. These data reveal a range of physiographic features (highlighted in the perspective views on sheet 4) such as the flat, sediment-covered seafloor in Drakes Bay, as well as abundant “scour depressions” on the Bodega Head–Tomales Point shelf (see sheet 9) and local, tectonically controlled bedrock uplifts. To validate geological and biological interpretations of the sonar data shown in sheets 1, 2, and 3, the U.S. Geological Survey towed a camera sled over specific offshore locations, collecting both video and photographic imagery; these “ground-truth” surveying data are summarized on sheet 6. Sheet 5 is a “seafloor character” map, which classifies the seafloor on the basis of depth, slope, rugosity (ruggedness), and backscatter intensity and which is further informed by the ground-truth-survey imagery. Sheet 7 is a map of “potential habitats,” which are delineated on the basis of substrate type, geomorphology, seafloor process, or other attributes that may provide a habitat for a specific species or assemblage of organisms. Sheet 8 compiles representative seismic-reflection profiles from the map area, providing information on the subsurface stratigraphy and structure of the map area. Sheet 9 shows the distribution and thickness of young sediment (deposited over the last about 21,000 years, during the most recent sea-level rise) in both the map area and the larger Salt Point to Drakes Bay region, interpreted on the basis of the seismic-reflection data, and it identifies the Offshore of Point Reyes map area as lying within the Bodega Head–Tomales Point shelf, Point

  15. ANALYSIS, THEMATIC MAPS AND DATA MINING FROM POINT CLOUD TO ONTOLOGY FOR SOFTWARE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    R. Nespeca

    2016-06-01

    Full Text Available The primary purpose of the survey for the restoration of Cultural Heritage is the interpretation of the state of building preservation. The advantages of the remote sensing systems that generate dense point clouds (range-based or image-based) are therefore not limited to the acquired data alone. The paper shows that it is possible to extrapolate very useful diagnostic information using spatial annotation and algorithms already implemented in open-source software. Generally, the drawing of degradation maps is the result of manual work and is thus dependent on the subjectivity of the operator. This paper describes a method for extracting and visualizing information obtained by quantitative, repeatable and verifiable mathematical procedures. The case study is a part of the east facade of the Eglise collégiale Saint-Maurice, also called Notre Dame des Grâces, in Caromb, in southern France. The work was conducted on the matrix of information contained in the point cloud in ASCII format. The first result is the extrapolation of new geometric descriptors. First, we create digital maps of the calculated quantities. Subsequently, we move to semi-quantitative analyses that transform the new data into useful information. We have written algorithms for accurate selection, for the segmentation of the point cloud, and for the automatic calculation of the real surface area and volume. Furthermore, we have created graphs of the spatial distribution of the descriptors. This work shows that, by working on the data during processing, the point cloud can be transformed into an enriched database whose use, management and mining are easy, fast and effective for everyone involved in the restoration process.
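
    As a concrete illustration of what such an enrichment step can look like, the following sketch computes two simple per-point descriptors (local point density and roughness with respect to the best-fit neighbourhood plane) from an ASCII point cloud and writes them back as extra scalar fields. It is only an assumed, minimal Python/NumPy/SciPy example; the file name, search radius and descriptor choices are illustrative and are not taken from the paper.

        # Minimal sketch (not the authors' code): per-point geometric descriptors
        # from an ASCII "x y z" point cloud, written back as extra columns.
        import numpy as np
        from scipy.spatial import cKDTree

        points = np.loadtxt("facade_cloud.asc")[:, :3]   # hypothetical file name
        tree = cKDTree(points)
        radius = 0.05                                    # search radius in metres (assumption)

        density = np.empty(len(points))
        roughness = np.empty(len(points))
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, r=radius)
            nbrs = points[idx]
            density[i] = len(idx)
            # Roughness: distance of the point to the best-fit plane of its neighbours.
            centroid = nbrs.mean(axis=0)
            _, _, vt = np.linalg.svd(nbrs - centroid, full_matrices=False)
            roughness[i] = abs(np.dot(p - centroid, vt[-1]))

        # The enriched cloud can then feed thematic maps or threshold-based segmentation.
        np.savetxt("facade_cloud_enriched.asc",
                   np.column_stack([points, density, roughness]))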

  16. Stochastic transformation of points in polygons according to the Voronoi tessellation: microstructural description.

    Science.gov (United States)

    Di Vito, Alessia; Fanfoni, Massimo; Tomellini, Massimo

    2010-12-01

    Starting from a stochastic two-dimensional process, we studied the transformation of points into disks and squares following a protocol according to which, at any step, the island size increases proportionally to the corresponding Voronoi tessera. Two interaction mechanisms among islands have been dealt with: coalescence and impingement. We studied the evolution of the island density and of the island size distribution functions as a function of the island collision mechanism, for both Poissonian and correlated spatial distributions of points. The island size distribution functions have been found to be invariant with the fraction of transformed phase for a given stochastic process. The n(Θ) curve describing the island decay has been found to be independent of the shape (except at high degrees of correlation) and of the interaction mechanism.

  17. The Lagrangian Points

    Science.gov (United States)

    Linton, J. Oliver

    2017-01-01

    There are five unique points in a star/planet system where a satellite can be placed whose orbital period is equal to that of the planet. Simple methods for calculating the positions of these points, or at least justifying their existence, are developed.
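
    To make the idea of justifying these points' existence concrete, the collinear points can be located numerically from the standard circular restricted three-body balance of gravitational and centrifugal accelerations in the rotating frame. The sketch below is an assumed Python/SciPy illustration, not taken from the article; the Earth/Sun mass ratio used is approximate.

        # Illustrative sketch: locate the collinear Lagrangian points L1-L3 by
        # root-finding on the textbook rotating-frame acceleration balance.
        from scipy.optimize import brentq

        def net_acceleration(x, mu):
            """Net radial acceleration along the star-planet line, barycentric units.
            Star of mass (1-mu) sits at -mu, planet of mass mu at 1-mu, separation 1."""
            r1 = x + mu          # signed distance from the star
            r2 = x - (1.0 - mu)  # signed distance from the planet
            return x - (1.0 - mu) * r1 / abs(r1)**3 - mu * r2 / abs(r2)**3

        mu = 3.0e-6   # approximate Earth/Sun mass ratio (assumption)
        eps = 1e-9
        L1 = brentq(net_acceleration, -mu + eps, 1.0 - mu - eps, args=(mu,))
        L2 = brentq(net_acceleration, 1.0 - mu + eps, 2.0, args=(mu,))
        L3 = brentq(net_acceleration, -2.0, -mu - eps, args=(mu,))
        print(f"L1 = {L1:.6f}, L2 = {L2:.6f}, L3 = {L3:.6f}  (units of the separation)")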

  18. Analysis of Multicomponent Adsorption Close to a Dew Point

    DEFF Research Database (Denmark)

    Shapiro, Alexander; Stenby, Erling Halfdan

    1998-01-01

    We develop the potential theory of multicomponent adsorption close to a dew point. The approach is based on an asymptotic adsorption equation (AAE) which is valid in the vicinity of the dew point. By this equation the thickness of the liquid film is expressed through thermodynamic characteristics... and the direct calculations, even if the mixture is not close to a dew point. Key Words: adsorption; potential theory; multicomponent; dew point.

  19. Frame Filtering and Skipping for Point Cloud Data Video Transmission

    Directory of Open Access Journals (Sweden)

    Carlos Moreno

    2017-01-01

    Full Text Available Sensors for collecting 3D spatial data from the real world are becoming more important. They are a prime research topic and have applications in consumer markets such as medicine, entertainment, and robotics. However, a primary concern with collecting these data is the vast amount of information being generated, which must be processed before being transmitted. To address the issue, we propose the use of filtering methods and frame skipping. To collect the 3D spatial data, called point clouds, we used the Microsoft Kinect sensor. In addition, we utilized the Point Cloud Library to process and filter the data generated by the Kinect. Two different computers were used: a client which collects, filters, and transmits the point clouds; and a server that receives and visualizes the point clouds. The client also checks for similarity between consecutive frames, skipping those that reach a similarity threshold. In order to compare the filtering methods and test the effectiveness of the frame-skipping technique, quality of service (QoS) metrics such as frame rate and filtering percentage were introduced. These metrics indicate how well a certain combination of filtering method and frame skipping accomplishes the goal of transmitting point clouds from one location to another. We found that the pass-through filter in conjunction with frame skipping provides the best relative QoS. However, the results also show that there is still too much data for a satisfactory QoS. For a real-time system to provide reasonable end-to-end quality, dynamic compression and progressive transmission need to be utilized.
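
    The two client-side ideas described above (a pass-through filter on the depth axis and a similarity-based frame skip) can be sketched as follows. The original system used the Point Cloud Library in C++; this is only an assumed NumPy illustration, and the depth window, voxel size and similarity threshold are placeholders.

        # Sketch of a pass-through filter and a frame-skipping check (assumed values).
        import numpy as np

        def pass_through(cloud: np.ndarray, axis: int = 2,
                         lo: float = 0.5, hi: float = 3.0) -> np.ndarray:
            """Keep points whose coordinate along `axis` lies in [lo, hi] (metres)."""
            mask = (cloud[:, axis] >= lo) & (cloud[:, axis] <= hi)
            return cloud[mask]

        def frames_similar(a: np.ndarray, b: np.ndarray,
                           voxel: float = 0.05, threshold: float = 0.9) -> bool:
            """Crude similarity test: Jaccard overlap of occupied voxels."""
            va = {tuple(v) for v in np.floor(a / voxel).astype(int)}
            vb = {tuple(v) for v in np.floor(b / voxel).astype(int)}
            if not va or not vb:
                return False
            return len(va & vb) / len(va | vb) >= threshold

        last_sent = None
        def process_frame(cloud: np.ndarray):
            """Filter a frame and decide whether to transmit it or skip it."""
            global last_sent
            filtered = pass_through(cloud)
            if last_sent is not None and frames_similar(filtered, last_sent):
                return None       # skip: too similar to the last transmitted frame
            last_sent = filtered
            return filtered       # transmit this frame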

  20. Constraints from conformal symmetry on the three point scalar correlator in inflation

    International Nuclear Information System (INIS)

    Kundu, Nilay; Shukla, Ashish; Trivedi, Sandip P.

    2015-01-01

    Using symmetry considerations, we derive Ward identities which relate the three point function of scalar perturbations produced during inflation to the scalar four point function, in a particular limit. The derivation assumes approximate conformal invariance, and the conditions for the slow roll approximation, but is otherwise model independent. The Ward identities allow us to deduce that the three point function must be suppressed in general, being of the same order of magnitude as in the slow roll model. They also fix the three point function in terms of the four point function, up to one constant which we argue is generically suppressed. Our approach is based on analyzing the wave function of the universe, and the Ward identities arise by imposing the requirements of spatial and time reparametrization invariance on it.

  1. SharePoint 2010 For Dummies

    CERN Document Server

    Williams, Vanessa L

    2012-01-01

    Here's the bestselling guide on SharePoint 2010, updated to cover Office 365. SharePoint Portal Server is an essential part of the enterprise infrastructure for many businesses. The Office 365 version includes significantly enhanced cloud capabilities. This second edition of the bestselling guide to SharePoint covers getting a SharePoint site up and running, branded, populated with content, and more. It explains ongoing site management and offers plenty of advice for administrators who want to leverage SharePoint and Office 365 in various ways. Many businesses today rely on SharePoint Portal Ser

  2. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
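
    A minimal sketch of the pilot-point placement step described above (combining the three information sources into a score map and greedily picking well-separated high-score cells) might look as follows. The weights, normalization and separation rule are assumptions for illustration, not the authors' actual implementation.

        # Assumed illustration of score-map construction and pilot-point selection.
        import numpy as np

        def score_map(facies_uncertainty, sensitivity, data_mismatch, weights=(1, 1, 1)):
            """Each input is a 2D array over the grid; higher score = better pilot site."""
            def norm(a):
                a = np.asarray(a, dtype=float)
                rng = a.max() - a.min()
                return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
            w1, w2, w3 = weights
            return w1 * norm(facies_uncertainty) + w2 * norm(sensitivity) + w3 * norm(data_mismatch)

        def pick_pilot_points(score, n_points=10, min_sep=5):
            """Greedily pick the n highest-scoring cells at least `min_sep` cells apart."""
            picked = []
            order = np.dstack(np.unravel_index(np.argsort(score, axis=None)[::-1],
                                               score.shape))[0]
            for ij in order:
                if all(np.hypot(*(ij - p)) >= min_sep for p in picked):
                    picked.append(ij)
                if len(picked) == n_points:
                    break
            return np.array(picked)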

  3. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  4. Professional SharePoint 2010 Development

    CERN Document Server

    Rizzo, Tom; Fried, Jeff; Swider, Paul J; Hillier, Scot; Schaefer, Kenneth

    2012-01-01

    Updated guidance on how to take advantage of the newest features of SharePoint programmability More than simply a portal, SharePoint is Microsoft's popular content management solution for building intranets and websites or hosting wikis and blogs. Offering broad coverage on all aspects of development for the SharePoint platform, this comprehensive book shows you exactly what SharePoint does, how to build solutions, and what features are accessible within SharePoint. Written by a team of SharePoint experts, this new edition offers an extensive selection of field-tested best practices that shows

  5. Triple-Frequency GPS Precise Point Positioning Ambiguity Resolution Using Dual-Frequency Based IGS Precise Clock Products

    Directory of Open Access Journals (Sweden)

    Fei Liu

    2017-01-01

    Full Text Available With the availability of the third civil signal in the Global Positioning System, triple-frequency Precise Point Positioning ambiguity resolution methods have drawn increasing attention due to significantly reduced convergence time. However, the corresponding triple-frequency based precise clock products are not widely available or adopted by applications. Currently, most precise products are generated based on the ionosphere-free combination of dual-frequency L1/L2 signals, which, however, are not consistent with the triple-frequency ionosphere-free carrier-phase measurements, resulting in inaccurate positioning and unstable float ambiguities. In this study, a GPS triple-frequency PPP ambiguity resolution method is developed using the widely used dual-frequency based clock products. In this method, the interfrequency clock biases between the triple-frequency and dual-frequency ionosphere-free carrier-phase measurements are first estimated and then applied to the triple-frequency ionosphere-free carrier-phase measurements to obtain stable float ambiguities. After this, the integer property of the wide-lane L2/L5 and wide-lane L1/L2 ambiguities is recovered by estimating the satellite fractional-cycle biases. A test using a sparse network is conducted to verify the effectiveness of the method. The results show that ambiguity resolution can be achieved in minutes or even tens of seconds, and the positioning accuracy is at the decimeter level.

  6. The goal of ape pointing.

    Science.gov (United States)

    Halina, Marta; Liebal, Katja; Tomasello, Michael

    2018-01-01

    Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in "please look at that"); another is that they point mainly as an action request (such as "can you give that to me?"). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing.

  7. The resolution of point sources of light as analyzed by quantum detection theory

    Science.gov (United States)

    Helstrom, C. W.

    1972-01-01

    The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.

  8. Multispectral Image Feature Points

    Directory of Open Access Journals (Sweden)

    Cristhian Aguilera

    2012-09-01

    Full Text Available This paper presents a novel feature point descriptor for the multispectral image case: Far-Infrared and Visible Spectrum images. It allows matching interest points on images of the same scene but acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like scale-space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from multispectral images are matched by finding nearest couples using the information from the descriptor. The provided experimental results and comparisons with similar methods show both the validity of the proposed approach and the improvements it offers with respect to the current state-of-the-art.
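
    The final matching step (finding nearest descriptor couples) can be sketched as a nearest-neighbour search with a ratio test. The snippet below is an assumed illustration only: the descriptor extraction itself is not reproduced, and the 0.8 ratio threshold is not taken from the paper.

        # Assumed sketch of descriptor matching between visible and far-infrared images.
        import numpy as np

        def match_descriptors(desc_vis: np.ndarray, desc_ir: np.ndarray, ratio: float = 0.8):
            """desc_vis: (N, D), desc_ir: (M, D). Returns a list of (i_vis, j_ir) pairs."""
            matches = []
            for i, d in enumerate(desc_vis):
                dists = np.linalg.norm(desc_ir - d, axis=1)   # distances to all IR descriptors
                order = np.argsort(dists)
                best, second = order[0], order[1]
                if dists[best] < ratio * dists[second]:       # Lowe-style ratio test
                    matches.append((i, int(best)))
            return matches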

  9. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    NARCIS (Netherlands)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to

  10. Enhancement of spatial resolution of terahertz imaging systems based on terajet generation by dielectric cube

    Directory of Open Access Journals (Sweden)

    Hai Huy Nguyen Pham

    2017-05-01

    Full Text Available The terahertz (THz, 0.1–10 THz) region has been attracting tremendous research interest owing to its potential in practical applications such as biomedicine, material inspection, and nondestructive imaging. Those applications require enhancing the spatial resolution at a specific frequency of interest. A variety of resolution-enhancement techniques have been proposed, such as near-field scanning probes, surface plasmons, and aspheric lenses. Here, we demonstrate for the first time that a mesoscale dielectric cube can be exploited as a novel resolution enhancer by simply placing it at the focused imaging point of a continuous wave THz imaging system. The operating principle of this enhancer is based on the generation, by the dielectric cuboid, of the so-called terajet, a photonic jet in the THz region. A subwavelength hotspot is obtained by placing a Teflon cube, with a 1.46 refractive index, at the imaging point of the imaging system, regardless of the numerical aperture (NA). The generated terajet at 125 GHz is experimentally characterized, using our unique THz-wave visualization system. The full width at half maximum (FWHM) of the hotspot obtained by placing the enhancer at the focal point of a mirror with a measured NA of 0.55 is approximately 0.55λ, which is even better than the FWHM obtained by a conventional focusing device with the ideal maximum numerical aperture (NA = 1) in air. Nondestructive subwavelength-resolution imaging demonstrations of a Suica integrated circuit card, which is used as a common fare card for trains in Japan, and an aluminum plate with 0.63λ trenches are presented. The amplitude and phase images obtained with the enhancer at 125 GHz can clearly resolve both the air-trenches on the aluminum plate and the card’s inner electronic circuitry, whereas the images obtained without the enhancer are blurred because of insufficient resolution. An increase of the image contrast by a factor of 4.4 was also obtained using

  11. The timing of control signals underlying fast point-to-point arm movements.

    Science.gov (United States)

    Ghafouri, M; Feldman, A G

    2001-04-01

    It is known that proprioceptive feedback induces muscle activation when the facilitation of appropriate motoneurons exceeds their threshold. In the suprathreshold range, the muscle-reflex system produces torques depending on the position and velocity of the joint segment(s) that the muscle spans. The static component of the torque-position relationship is referred to as the invariant characteristic (IC). According to the equilibrium-point (EP) hypothesis, control systems produce movements by changing the activation thresholds and thus shifting the IC of the appropriate muscles in joint space. This control process upsets the balance between muscle and external torques at the initial limb configuration and, to regain the balance, the limb is forced to establish a new configuration or, if the movement is prevented, a new level of static torques. Taken together, the joint angles and the muscle torques generated at an equilibrium configuration define a single variable called the EP. Thus by shifting the IC, control systems reset the EP. Muscle activation and movement emerge following the EP resetting because of the natural physical tendency of the system to reach equilibrium. Empirical and simulation studies support the notion that the control IC shifts and the resulting EP shifts underlying fast point-to-point arm movements are gradual rather than step-like. However, controversies exist about the duration of these shifts. Some studies suggest that the IC shifts cease with the movement offset. Other studies propose that the IC shifts end early in comparison to the movement duration (approximately, at peak velocity). The purpose of this study was to evaluate the duration of the IC shifts underlying fast point-to-point arm movements. Subjects made fast (hand peak velocity about 1.3 m/s) planar arm movements toward different targets while grasping a handle. Hand forces applied to the handle and shoulder/elbow torques were, respectively, measured from a force sensor placed

  12. Microcurrent Point Stimulation Applied to Lower Back Acupuncture Points for the Treatment of Nonspecific Neck Pain.

    Science.gov (United States)

    Armstrong, Kelly; Gokal, Raman; Chevalier, Antoine; Todorsky, William; Lim, Mike

    2017-04-01

    Although acupuncture and microcurrent are widely used for chronic pain, there remains considerable controversy as to their therapeutic value for neck pain. We aimed to determine the effect size of microcurrent applied to lower back acupuncture points to assess the impact on neck pain. This was a cohort analysis of treatment outcomes pre- and post-microcurrent stimulation, involving 34 patients with a history of nonspecific chronic neck pain. Consenting patients were enrolled from a group of therapists attending educational seminars and were asked to report pain levels pre- and post-treatment and 48 hours after a single MPS application. Direct current microcurrent point stimulation (MPS) applied to standardized lower back acupuncture protocol points was used. Evaluations entailed a baseline pain assessment using a visual analog scale (VAS), which was repeated twice after therapy, once immediately post-electrotherapy and again after a 48-h follow-up period. All 34 patients received a single MPS session. Results were analyzed using paired t tests. Results and Outcomes: Pain intensity showed an initial statistically significant reduction of 68% [3.9050 points; 95% CI (2.9480, 3.9050); p = 0.0001] in mean neck pain levels after standard protocol treatment, when compared to initial pain levels. There was a further statistically significant reduction of 35% in mean neck pain levels at 48 h when compared to pain levels immediately after standard protocol treatment [0.5588 points; 95% CI (0.2001, 0.9176); p = 0.03], for a total average pain relief of 80%. The positive results in this study could have applications for those patients impacted by chronic neck pain.

  13. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    Science.gov (United States)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

    points (i) at a maximum distance (ii) around each core-point. Under this condition, seed points are said to be density-reachable by a core point delimiting a cluster around it. A chain of intermediate seed-points can connect contiguous clusters allowing clusters of arbitrary shape to be defined. The novelty of the proposed approach consists in the implementation of the DBSCAN 3D-module, where the xyz-coordinates identify each point and the density of points within a sphere is considered. This allows detecting volumetric features with a higher accuracy, depending only on actual sampling resolution. The approach is truly 3D and exploits all TLS measurements without the need of interpolation or data reduction. Using this method, enhanced geomorphological activity during the summer of 2015 in respect to the previous two years was observed. We attribute this result to the exceptionally high temperatures of that summer, which we deem responsible for accelerating the melting process at the rock glacier front and probably also increasing creep velocities. References: - Tonini, M. and Abellan, A. (2014). Rockfall detection from terrestrial LiDAR point clouds: A clustering approach using R. Journal of Spatial Information Sciences. Number 8, pp95-110 - Hennig, C. Package fpc: Flexible procedures for clustering. https://cran.r-project.org/web/packages/fpc/index.html, 2015. Accessed 2016-01-12.
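
    The clustering step itself is easy to reproduce in outline. The authors used the R fpc package; the sketch below shows an equivalent 3D run with scikit-learn's DBSCAN on the xyz coordinates of detected change points. The file name, eps and min_samples values are assumptions tied to the scan resolution, not the study's actual parameters.

        # Assumed sketch of 3D density-based clustering on TLS change points.
        import numpy as np
        from sklearn.cluster import DBSCAN

        xyz = np.loadtxt("change_points.xyz")[:, :3]   # hypothetical file of x y z changes

        labels = DBSCAN(eps=0.15, min_samples=10).fit_predict(xyz)   # eps in metres

        # Label -1 marks noise; every other label is one volumetric change feature.
        for k in sorted(set(labels) - {-1}):
            cluster = xyz[labels == k]
            print(f"cluster {k}: {len(cluster)} points, "
                  f"extent {cluster.max(axis=0) - cluster.min(axis=0)}")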

  14. Kite aerial photography for low-cost, ultra-high spatial resolution multi-spectral mapping of intertidal landscapes.

    Directory of Open Access Journals (Sweden)

    Mitch Bryson

    Full Text Available Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.

  15. Kite aerial photography for low-cost, ultra-high spatial resolution multi-spectral mapping of intertidal landscapes.

    Science.gov (United States)

    Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J; Bongiorno, Daniel

    2013-01-01

    Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.

  16. SharePoint 2010 Field Guide

    CERN Document Server

    Mann, Steven; Gazmuri, Pablo; Caravajal, Steve; Wheeler, Christina

    2012-01-01

    Hands-on solutions for common SharePoint 2010 challenges. Aimed at the more than 100 million licensed SharePoint 2010 users, this indispensable field guide addresses an abundance of common SharePoint 2010 problems and offers proven solutions. A team of authors encourages you to customize SharePoint beyond the out-of-the-box functionality so that you can build more complex solutions to these challenges. You'll discover intricate details and specific full-scale solutions that you can then implement to your own SharePoint 2010 solutions. Tackles a variety of SharePoint 2010 problems ranging from si

  17. Professional SharePoint 2010 Administration

    CERN Document Server

    Klindt, Todd; Caravajal, Steve

    2010-01-01

    Thorough coverage of the improvements and changes to SharePoint 2010. SharePoint 2010 boasts a variety of incredible new features that will challenge even the most experienced administrator who is upgrading from SharePoint 2007. Written by a team of SharePoint experts, this book takes aim at showing you how to make these new features work right for you. Offering an in-depth look at SharePoint 2010, the authors focus on how SharePoint functionality has changed from its earliest version to its newest, and they provide you with detailed coverage of all the new features and capabilities.

  18. I See Your Point: Infants under 12 Months Understand that Pointing Is Communicative

    Science.gov (United States)

    Krehm, Madelaine; Onishi, Kristine H.; Vouloumanos, Athena

    2014-01-01

    Do young infants understand that pointing gestures allow the pointer to change the information state of a recipient? We used a third-party experimental scenario to examine whether 9- and 11-month-olds understand that a pointer's pointing gesture can inform a recipient about a target object. When the pointer pointed to a target, infants…

  19. Quantitative structure-property relationships for prediction of boiling point, vapor pressure, and melting point.

    Science.gov (United States)

    Dearden, John C

    2003-08-01

    Boiling point, vapor pressure, and melting point are important physicochemical properties in the modeling of the distribution and fate of chemicals in the environment. However, such data often are not available, and therefore must be estimated. Over the years, many attempts have been made to calculate boiling points, vapor pressures, and melting points by using quantitative structure-property relationships, and this review examines and discusses the work published in this area, and concentrates particularly on recent studies. A number of software programs are commercially available for the calculation of boiling point, vapor pressure, and melting point, and these have been tested for their predictive ability with a test set of 100 organic chemicals.
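
    The core QSPR idea (regressing a measured property against calculated molecular descriptors) can be sketched in a few lines. The descriptor columns below are hypothetical and the fit is deliberately tiny; published models use large curated data sets and far richer descriptor sets.

        # Toy sketch of a QSPR fit: boiling point (K) regressed on a few assumed
        # molecular descriptors (molecular weight, a connectivity index placeholder,
        # H-bond donor count). Not a validated model.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        X = np.array([[46.07, 1.53, 1],    # ethanol
                      [74.12, 2.54, 1],    # 1-butanol
                      [78.11, 2.83, 0],    # benzene
                      [100.2, 3.41, 0],    # heptane
                      [60.10, 2.27, 1]])   # 1-propanol
        bp_K = np.array([351.4, 390.9, 353.2, 371.5, 370.3])   # measured boiling points / K

        model = LinearRegression().fit(X, bp_K)
        print("fitted coefficients:", model.coef_, "intercept:", model.intercept_)
        print("fitted vs. measured:", model.predict(X), bp_K)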

  20. Professional SharePoint 2013 administration

    CERN Document Server

    Young, Shane; Klindt, Todd

    2013-01-01

    SharePoint admin author gurus return to prepare you for working with the new features of SharePoint 2013! The new iteration of SharePoint boasts exciting new features. However, any new version also comes with its fair share of challenges, and that's where this book comes in. The team of SharePoint admin gurus returns to present a fully updated resource that prepares you for making all the new SharePoint 2013 features work right. They cover all of the administration components of SharePoint 2013 in detail, and present a clear understanding of how they affect the role of the adminis

  1. Point-Source Contributions to the Water Quality of an Urban Stream

    Science.gov (United States)

    Little, S. F. B.; Young, M.; Lowry, C.

    2014-12-01

    Scajaquada Creek, which runs through the heart of the city of Buffalo, is a prime example of the ways in which human intervention and local geomorphology can impact water quality and urban hydrology. Beginning in the 1920s, the Creek has been partially channelized and connected to Buffalo's combined sewer system (CSS). At Forest Lawn Cemetery, where this study takes place, Scajaquada Creek emerges from a 3.5-mile tunnel built to route stream flow under the city. Collocated with the tunnel outlet is a discharge point for Buffalo's CSS, combined sewer outlet (CSO) #53. It is at this point that runoff and sanitary sewage discharge regularly during rain events. Initially, this study endeavored to create a spatial and temporal picture for this portion of the Creek, monitoring such parameters as conductivity, dissolved oxygen, pH, temperature, and turbidity, in addition to measuring Escherichia coli (E. coli) concentrations. As expected, these factors responded directly to seasonality, local geomorphology, and distance from the point source (CSO #53), displaying an overall linear response. However, the addition of nitrate and phosphate testing to the study revealed an entirely separate signal from that previously observed. Concentrations of these parameters did not respond to location in the same manner as E. coli. Instead of decreasing with distance from the CSO, a distinct periodicity was observed, correlating with a series of outflow pipes lining the stream banks. It is hypothesized that nitrate and phosphate occurring in this stretch of Scajaquada Creek originate not from the CSO, but from fertilizers used to maintain the lawns within the subwatershed. These results provide evidence of the complexity related to water quality issues in urban streams as a result of point- and nonpoint-source hydrologic inputs.

  2. Gradient-free determination of isoelectric points of proteins on chip.

    Science.gov (United States)

    Łapińska, Urszula; Saar, Kadi L; Yates, Emma V; Herling, Therese W; Müller, Thomas; Challa, Pavan K; Dobson, Christopher M; Knowles, Tuomas P J

    2017-08-30

    The isoelectric point (pI) of a protein is a key characteristic that influences its overall electrostatic behaviour. The majority of conventional methods for the determination of the isoelectric point of a molecule rely on the use of spatial gradients in pH, although significant practical challenges are associated with such techniques, notably the difficulty in generating a stable and well controlled pH gradient. Here, we introduce a gradient-free approach, exploiting a microfluidic platform which allows us to perform rapid pH change on chip and probe the electrophoretic mobility of species in a controlled field. In particular, in this approach, the pH of the electrolyte solution is modulated in time rather than in space, as is the case for conventional determinations of the isoelectric point. To demonstrate the general applicability of this platform, we have measured the isoelectric points of a representative set of seven proteins (bovine serum albumin, β-lactoglobulin, ribonuclease A, ovalbumin, human transferrin, ubiquitin and myoglobin) in microlitre sample volumes. The ability to conduct measurements in free solution thus provides the basis for the rapid determination of isoelectric points of proteins under a wide variety of solution conditions and in small volumes.
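
    The read-out implied by this approach (electrophoretic mobility measured at a series of pH values, with the pI taken where the mobility changes sign) can be sketched as a simple interpolation. The pH and mobility values below are made-up placeholders, not data from the paper.

        # Assumed illustration: estimate the pI from a mobility-vs-pH series by
        # linear interpolation of the zero crossing.
        import numpy as np

        pH = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])
        mobility = np.array([2.1, 1.3, 0.6, -0.2, -0.9, -1.5])   # arbitrary units

        i = np.where(np.diff(np.sign(mobility)) != 0)[0][0]       # first sign change
        pI = pH[i] - mobility[i] * (pH[i + 1] - pH[i]) / (mobility[i + 1] - mobility[i])
        print(f"estimated isoelectric point: pH {pI:.2f}")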

  3. A Marked Point Process Framework for Extracellular Electrical Potentials

    Directory of Open Access Journals (Sweden)

    Carlos A. Loza

    2017-12-01

    Full Text Available Neuromodulations are an important component of extracellular electrical potentials (EEP), such as the Electroencephalogram (EEG), Electrocorticogram (ECoG) and Local Field Potentials (LFP). This spatially and temporally organized multi-frequency transient (phasic) activity reflects the multiscale spatiotemporal synchronization of neuronal populations in response to external stimuli or internal physiological processes. We propose a novel generative statistical model of a single EEP channel, where the collected signal is regarded as the noisy addition of reoccurring, multi-frequency phasic events over time. One of the main advantages of the proposed framework is the exceptional temporal resolution in the time location of the EEP phasic events, e.g., up to the sampling period utilized in the data collection. Therefore, this allows for the first time a description of neuromodulation in EEPs as a Marked Point Process (MPP), represented by their amplitude, center frequency, duration, and time of occurrence. The generative model for the multi-frequency phasic events exploits sparseness and involves a shift-invariant implementation of the clustering technique known as k-means. The cost function incorporates a robust estimation component based on correntropy to mitigate the outliers caused by the inherent noise in the EEP. Lastly, the background EEP activity is explicitly modeled as the non-sparse component of the collected signal to further improve the delineation of the multi-frequency phasic events in time. The framework is validated using two publicly available datasets: the DREAMS sleep spindles database and one of the Brain-Computer Interface (BCI) competition datasets. The results achieve benchmark performance and provide novel quantitative descriptions based on power, event rates and timing in order to assess behavioral correlates beyond the classical power spectrum-based analysis. This opens the possibility for a unifying point process framework of

  4. LOWERING ICECUBE'S ENERGY THRESHOLD FOR POINT SOURCE SEARCHES IN THE SOUTHERN SKY

    Energy Technology Data Exchange (ETDEWEB)

    Aartsen, M. G. [Department of Physics, University of Adelaide, Adelaide, 5005 (Australia); Abraham, K. [Physik-department, Technische Universität München, D-85748 Garching (Germany); Ackermann, M. [DESY, D-15735 Zeuthen (Germany); Adams, J. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch (New Zealand); Aguilar, J. A.; Ansseau, I. [Université Libre de Bruxelles, Science Faculty CP230, B-1050 Brussels (Belgium); Ahlers, M. [Department of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison, WI 53706 (United States); Ahrens, M. [Oskar Klein Centre and Department of Physics, Stockholm University, SE-10691 Stockholm (Sweden); Altmann, D.; Anton, G. [Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, D-91058 Erlangen (Germany); Andeen, K. [Department of Physics, Marquette University, Milwaukee, WI, 53201 (United States); Anderson, T.; Arlen, T. C. [Department of Physics, Pennsylvania State University, University Park, PA 16802 (United States); Archinger, M.; Baum, V. [Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz (Germany); Arguelles, C. [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Auffenberg, J. [III. Physikalisches Institut, RWTH Aachen University, D-52056 Aachen (Germany); Bai, X. [Physics Department, South Dakota School of Mines and Technology, Rapid City, SD 57701 (United States); Barwick, S. W. [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Bay, R., E-mail: jacob.feintzeig@gmail.com, E-mail: naoko@icecube.wisc.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States); Collaboration: IceCube Collaboration; and others

    2016-06-20

    Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interacting inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8 σ deviation from the hypothesis that the data consists only of atmospheric background.

  5. Bubble-point and dew-point equation for binary refrigerant mixture R22-R142b

    Energy Technology Data Exchange (ETDEWEB)

    Liancheng Tan; Zhongyou Zhao; Yonghong Duan (Xi' an Jiaotong Univ., Xi' an (China). Dept. of Power Machinery Engineering)

    1992-01-01

    A bubble-point and dew-point equation (in terms either of temperature or of pressure) is suggested for the refrigerant mixture R22-R142b, which is regarded as one of the alternatives to R12. This equation has been examined against experimental data. A modified Rackett equation for the calculation of the bubble-point volume is also proposed. Compared with the experimental data, the rms errors in the calculated values of the bubble-point temperature, the dew-point temperature, and the bubble-point volume are 1.093%, 0.947%, and 1.120%, respectively. The calculation covers a wide range of temperatures and pressures, even near the critical point. It is shown how the equations can be extrapolated to calculate other binary refrigerant mixtures. (author)

  6. Solutions to second order non-homogeneous multi-point BVPs using a fixed-point theorem

    Directory of Open Access Journals (Sweden)

    Yuji Liu

    2008-07-01

    Full Text Available In this article, we study five non-homogeneous multi-point boundary-value problems (BVPs) of second order differential equations with the one-dimensional p-Laplacian. These problems have a common equation (in different function domains) and different boundary conditions. We find conditions that guarantee the existence of at least three positive solutions. The results obtained generalize several known ones and are illustrated by examples. It is also shown that the approach for getting three positive solutions by using multi-fixed-point theorems can be extended to nonhomogeneous BVPs. The emphasis is on the nonhomogeneous boundary conditions and the nonlinear term involving the first order derivative of the unknown. Some open problems are also proposed.
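
    For readers unfamiliar with the terminology, a representative problem of this general type can be written as follows (illustrative only; the article studies five related but not necessarily identical problems):

        \[
          \bigl(\phi_p(x'(t))\bigr)' + f\bigl(t, x(t), x'(t)\bigr) = 0, \qquad t \in (0,1),
        \]
        \[
          x(0) - \sum_{i=1}^{m} \alpha_i\, x(\xi_i) = \lambda_1, \qquad
          x(1) - \sum_{i=1}^{m} \beta_i\, x(\xi_i) = \lambda_2,
        \]

    where $\phi_p(s) = |s|^{p-2}s$ (with $p > 1$) is the one-dimensional p-Laplacian, $0 < \xi_1 < \dots < \xi_m < 1$ are the interior boundary points, and the constants $\lambda_1, \lambda_2$ carry the non-homogeneous boundary data.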

  7. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    Science.gov (United States)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is commonly applied to aerial images or their derivatives through onboard GPS (Global Positioning System) geotagging, or by tying the models to GCPs (Ground Control Points) acquired in the field. Currently, UAS (Unmanned Aerial System) derivatives are limited to meter levels of accuracy when their generation is not aided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only for instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point algorithm (ICP). It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a `skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent across both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud, which can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. Ground survey is done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations of
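
    In outline, the registration step can be reproduced with a basic point-to-point ICP (nearest-neighbour correspondences plus a closed-form rigid fit), applied to the skeleton clouds and then propagated to the full UAS cloud. The sketch below is an assumed NumPy/SciPy illustration, not CloudCompare's implementation; the file names and iteration count are placeholders.

        # Minimal point-to-point ICP sketch: align a skeleton UAS cloud to the LiDAR
        # reference, then apply the recovered rigid transform to the full UAS cloud.
        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, ref, iterations=30):
            tree = cKDTree(ref)
            R_total, t_total = np.eye(3), np.zeros(3)
            current = src.copy()
            for _ in range(iterations):
                _, idx = tree.query(current)             # closest reference point
                R, t = best_rigid_transform(current, ref[idx])
                current = current @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total

        skeleton_uas = np.loadtxt("skeleton_uas.xyz")      # hypothetical file names
        skeleton_lidar = np.loadtxt("skeleton_lidar.xyz")
        R, t = icp(skeleton_uas, skeleton_lidar)

        full_uas = np.loadtxt("uas_full_cloud.xyz")
        np.savetxt("uas_georeferenced.xyz", full_uas @ R.T + t)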

  8. Renormalization group fixed points of foliated gravity-matter systems

    Energy Technology Data Exchange (ETDEWEB)

    Biemans, Jorn [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP),Radboud University Nijmegen,Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands); Platania, Alessia [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP),Radboud University Nijmegen,Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands); Department of Physics and Astronomy, University of Catania,Via S. Sofia 63, 95123 Catania (Italy); INFN, Catania section,Via S. Sofia 64, 95123, Catania (Italy); INAF, Catania Astrophysical Observatory,Via S. Sofia 78, 95123, Catania (Italy); Saueressig, Frank [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP),Radboud University Nijmegen,Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2017-05-17

    We employ the Arnowitt-Deser-Misner formalism to study the renormalization group flow of gravity minimally coupled to an arbitrary number of scalar, vector, and Dirac fields. The decomposition of the gravitational degrees of freedom into a lapse function, shift vector, and spatial metric equips spacetime with a preferred (Euclidean) “time”-direction. In this work, we provide a detailed derivation of the renormalization group flow of Newton’s constant and the cosmological constant on a flat Friedmann-Robertson-Walker background. Adding matter fields, it is shown that their contribution to the flow is the same as in the covariant formulation and can be captured by two parameters d_g, d_λ. We classify the resulting fixed point structure as a function of these parameters, finding that the existence of non-Gaussian renormalization group fixed points is rather generic. In particular the matter content of the standard model and its most common extensions gives rise to one non-Gaussian fixed point with real critical exponents suitable for Asymptotic Safety. Moreover, we find non-Gaussian fixed points for any number of scalar matter fields, making the scenario attractive for cosmological model building.

  9. Bright point study. [of solar corona

    Science.gov (United States)

    Tang, F.; Harvey, K.; Bruner, M.; Kent, B.; Antonucci, E.

    1982-01-01

    Transition region and coronal observations of bright points by instruments aboard the Solar Maximum Mission and high resolution photospheric magnetograph observations on September 11, 1980 are presented. A total of 31 bipolar ephemeral regions were found in the photosphere from birth in 9.3 hours of combined magnetograph observations from three observatories. Two of the three ephemeral regions present in the field of view of the Ultraviolet Spectrometer-Polarimeter were observed in the C IV 1548 line. The unobserved ephemeral region was determined to be the shortest-lived (2.5 hr) and lowest in magnetic flux density (13G) of the three regions. The Flat Crystal Spectrometer observed only low level signals in the O VIII 18.969 A line, which were not statistically significant enough to be positively identified with any of the 16 ephemeral regions detected in the photosphere. In addition, the data indicate that at any given time there was no one-to-one correspondence between observable bright points and photospheric ephemeral regions, while more ephemeral regions were observed than their counterparts in the transition region and the corona.

  10. Use of digital image analysis to estimate fluid permeability of porous materials: Application of two-point correlation functions

    International Nuclear Information System (INIS)

    Berryman, J.G.; Blair, S.C.

    1986-01-01

    Scanning electron microscope images of cross sections of several porous specimens have been digitized and analyzed using image processing techniques. The porosity and specific surface area may be estimated directly from measured two-point spatial correlation functions. The measured values of porosity and image specific surface were combined with known values of electrical formation factors to estimate fluid permeability using one version of the Kozeny-Carman empirical relation. For glass bead samples with measured permeability values in the range of a few darcies, our estimates agree well (±10-20%) with the measurements. For samples of Ironton-Galesville sandstone with a permeability in the range of hundreds of millidarcies, our best results agree with the laboratory measurements again within about 20%. For Berea sandstone with still lower permeability (tens of millidarcies), our predictions from the images agree within 10-30%. Best results for the sandstones were obtained by using the porosities obtained at magnifications of about 100× (since less resolution and better statistics are required) and the image specific surface obtained at magnifications of about 500× (since greater resolution is required)
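
    The estimation chain described above is straightforward to sketch: binarize the image, compute the two-point correlation function S2(r) of the pore phase, read the porosity from S2(0) and the specific surface from the slope at the origin (s = -4 dS2/dr for isotropic media), and feed both, together with a measured formation factor, into a Kozeny-Carman style relation. The code below is an assumed illustration; the image file, pixel size and periodic-boundary approximation are placeholders rather than the authors' exact procedure.

        # Assumed sketch: porosity and specific surface from a binarised pore image
        # via the radial two-point correlation function S2(r).
        import numpy as np

        img = np.load("sem_pore_mask.npy").astype(float)   # 1 = pore, 0 = grain (hypothetical)
        pixel_size = 0.5e-6                                # metres per pixel (assumption)

        phi = img.mean()                                   # porosity = S2(0)

        # Autocorrelation of the pore indicator via FFT (periodic approximation).
        F = np.fft.rfftn(img)
        corr = np.fft.irfftn(F * np.conj(F), s=img.shape) / img.size

        # Radially average S2(r) over the first few lags.
        ny, nx = img.shape
        y, x = np.ogrid[:ny, :nx]
        r = np.hypot(np.minimum(y, ny - y), np.minimum(x, nx - x))
        S2 = np.array([corr[(r >= k) & (r < k + 1)].mean() for k in range(10)])

        slope = (S2[1] - S2[0]) / pixel_size               # crude dS2/dr near r = 0
        specific_surface = -4.0 * slope                    # per metre, isotropic relation

        print(f"porosity = {phi:.3f}, specific surface = {specific_surface:.3e} 1/m")
        # With an independently measured formation factor, these two quantities feed
        # a Kozeny-Carman style permeability estimate as described above.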

  11. Hysteresis of Soil Point Water Retention Functions Determined by Neutron Radiography

    Science.gov (United States)

    Perfect, E.; Kang, M.; Bilheux, H.; Willis, K. J.; Horita, J.; Warren, J.; Cheng, C.

    2010-12-01

    Soil point water retention functions are needed for modeling flow and transport in partially-saturated porous media. Such functions are usually determined by inverse modeling of average water retention data measured experimentally on columns of finite length. However, the resulting functions are subject to the appropriateness of the chosen model, as well as the initial and boundary condition assumptions employed. Soil point water retention functions are rarely measured directly and when they are the focus is invariably on the main drying branch. Previous direct measurement methods include time domain reflectometry and gamma beam attenuation. Here we report direct measurements of the main wetting and drying branches of the point water retention function using neutron radiography. The measurements were performed on a coarse sand (Flint #13) packed into 2.6 cm diameter x 4 cm long aluminum cylinders at the NIST BT-2 (50 μm resolution) and ORNL-HFIR CG1D (70 μm resolution) imaging beamlines. The sand columns were saturated with water and then drained and rewetted under quasi-equilibrium conditions using a hanging water column setup. 2048 x 2048 pixel images of the transmitted flux of neutrons through the column were acquired at each imposed suction (~10-15 suction values per experiment). Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert’s law in conjunction with beam hardening and geometric corrections. The pixel rows were averaged and combined with information on the known distribution of suctions within the column to give 2048 point drying and wetting functions for each experiment. The point functions exhibited pronounced hysteresis and varied with column height, possibly due to differences in porosity caused by the packing procedure employed. Predicted point functions, extracted from the hanging water column volumetric data using the TrueCell inverse modeling procedure, showed very good agreement with the range of point
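
    The conversion from transmitted neutron intensity to volumetric water content rests on Beer-Lambert's law, as noted above. The following is an assumed minimal illustration that ignores the beam-hardening and geometric corrections the authors applied; the file names, attenuation coefficient and path length are placeholders.

        # Assumed sketch: water thickness and volumetric water content from wet and
        # dry radiographs via Beer-Lambert's law.
        import numpy as np

        I_wet = np.load("radiograph_wet.npy")     # transmitted intensity, wet column (hypothetical)
        I_dry = np.load("radiograph_dry.npy")     # transmitted intensity, dry column (hypothetical)
        mu_w = 3.5e2                              # water attenuation coefficient, 1/m (assumption)
        column_thickness = 0.026                  # beam path through the column, m (assumption)

        # Beer-Lambert: I_wet = I_dry * exp(-mu_w * t_w)  =>  water thickness t_w
        t_w = -np.log(I_wet / I_dry) / mu_w

        # Volumetric water content = water thickness / path length through the sample.
        theta = t_w / column_thickness
        print("mean volumetric water content:", float(theta.mean()))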

  12. Effect of saddle-point anisotropy on point-defect drift-diffusion into straight dislocations

    International Nuclear Information System (INIS)

    Skinner, B.C.; Woo, C.H.

    1983-02-01

    Effects on point-defect drift-diffusion in the strain fields of edge or screw dislocations, due to the anisotropy of the point defect in its saddle-point configuration, are investigated. Expressions for sink strength and bias that include the saddle-point shape effect are derived, both in the absence and presence of an externally applied stress. These are found to depend on intrinsic parameters such as the relaxation volume and the saddle-point shape of the point defects, and extrinsic parameters such as temperature and the magnitude and direction of the externally applied stress with respect to the line direction and Burgers vector direction of the dislocation. The theory is applied to fcc copper and bcc iron. It is found that screw dislocations are biased sinks and that the stress-induced bias differential for the edge dislocations depends much more on the line direction than the Burgers vector direction. Comparison with the stress-induced bias differential due to the usual SIPA effect is made. It is found that the present effect causes a bias differential that is more than an order of magnitude larger

  13. Automated Extraction of 3D Trees from Mobile LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Y. Yu

    2014-06-01

    Full Text Available This paper presents an automated algorithm for extracting 3D trees directly from 3D mobile light detection and ranging (LiDAR) data. To reduce both computational and spatial complexities, ground points are first filtered out from a raw 3D point cloud via block-based elevation filtering. Off-ground points are then grouped into clusters representing individual objects through Euclidean distance clustering and voxel-based normalized cut segmentation. Finally, a model-driven method is proposed to achieve the extraction of 3D trees based on a pairwise 3D shape descriptor. The proposed algorithm is tested using a set of mobile LiDAR point clouds acquired by a RIEGL VMX-450 system. The results demonstrate the feasibility and effectiveness of the proposed algorithm.

  14. Handbook of floating-point arithmetic

    CERN Document Server

    Muller, Jean-Michel; de Dinechin, Florent; Jeannerod, Claude-Pierre; Joldes, Mioara; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Torres, Serge

    2018-01-01

    This handbook is a definitive guide to the effective use of modern floating-point arithmetic, which has considerably evolved, from the frequently inconsistent floating-point number systems of early computing to the recent IEEE 754-2008 standard. Most of computational mathematics depends on floating-point numbers, and understanding their various implementations will allow readers to develop programs specifically tailored for the standard’s technical features. Algorithms for floating-point arithmetic are presented throughout the book and illustrated where possible by example programs which show how these techniques appear in actual coding and design. The volume itself breaks its core topic into four parts: the basic concepts and history of floating-point arithmetic; methods of analyzing floating-point algorithms and optimizing them; implementations of IEEE 754-2008 in hardware and software; and useful extensions to the standard floating-point system, such as interval arithmetic, double- and triple-word arithm...
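
    As a small taste of the behaviour such a handbook analyses, the snippet below (plain Python, whose floats are IEEE 754 binary64) shows a decimal-looking comparison failing under rounding and a naive summation losing a term that a compensated summation keeps. It is an independent illustration, not an excerpt from the book.

        # Rounding in binary64 and the value of compensated summation.
        import math

        print(0.1 + 0.2 == 0.3)            # False: neither 0.1 nor 0.2 is representable exactly
        print(f"{0.1 + 0.2:.17g}")         # 0.30000000000000004

        values = [1e16, 1.0, -1e16]        # naive left-to-right summation loses the 1.0
        naive = 0.0
        for v in values:
            naive += v
        print(naive, math.fsum(values))    # 0.0 vs. 1.0 (fsum is correctly rounded)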

  15. Magnetic Reconnection at a Three-dimensional Solar Null Point

    DEFF Research Database (Denmark)

    Frederiksen, Jacob Trier; Baumann, Gisela; Galsgaard, Klaus

    2012-01-01

    Using a specific solar null point reconnection case studied by Masson et al (2009; ApJ 700, 559) we investigate the dependence of the reconnection rate on boundary driving speed, numerical resolution, type of resistivity (constant or numerical), and assumed stratification (constant density or sol...

  16. From AWE-GEN to AWE-GEN-2d: a high spatial and temporal resolution weather generator

    Science.gov (United States)

    Peleg, Nadav; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2016-04-01

    A new weather generator, AWE-GEN-2d (Advanced WEather GENerator for 2-Dimension grid), is developed following the philosophy of combining physical and stochastic approaches to simulate meteorological variables at high spatial and temporal resolution (e.g. 2 km x 2 km and 5 min for precipitation and cloud cover, and 100 m x 100 m and 1 h for the other variables: temperature, solar radiation, vapor pressure, atmospheric pressure and near-surface wind). The model is suitable for investigating the impacts of climate variability and of the temporal and spatial resolution of forcing in hydrological, ecological, agricultural and geomorphological studies. With appropriate parameterization the model can also be used in the context of climate change. Here we present the technical structure of AWE-GEN-2d, which is a substantial evolution of four preceding models: (i) the hourly point-scale Advanced WEather GENerator (AWE-GEN) presented by Fatichi et al. (2011, Adv. Water Resour.), (ii) the Space-Time Realizations of Areal Precipitation (STREAP) model introduced by Paschalis et al. (2013, Water Resour. Res.), (iii) the High-Resolution Synoptically conditioned Weather Generator developed by Peleg and Morin (2014, Water Resour. Res.), and (iv) the Wind-field Interpolation by Non-Divergent Schemes presented by Burlando et al. (2007, Boundary-Layer Meteorol.). AWE-GEN-2d is relatively parsimonious in terms of computational demand and allows many stochastic realizations of current and projected climates to be generated efficiently. An example of model application and testing is presented with reference to a case study in the Wallis region, an area of complex orography in the Swiss Alps.

  17. High spatial resolution whole-body MR angiography featuring parallel imaging: initial experience

    International Nuclear Information System (INIS)

    Quick, H.H.; Vogt, F.M.; Madewald, S.; Herborn, C.U.; Bosk, S.; Goehde, S.; Debatin, J.F.; Ladd, M.E.

    2004-01-01

    Materials and methods: whole-body multi-station MRA was performed with a rolling table platform (AngioSURF) on 5 volunteers in two imaging series: 1) standard imaging protocol, 2) modified high-resolution protocol employing PAT using the generalized autocalibrating partially parallel acquisitions (GRAPPA) algorithm with an acceleration factor of 3. For an intra-individual comparison of the two MR examinations, the arterial vasculature was divided into 30 segments. Signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) were calculated for all 30 arterial segments of each subject. Vessel segment depiction was qualitatively assessed applying a 5-point scale to each of the segments. Image reconstruction times were recorded for the standard as well as the PAT protocol. Results: compared to the standard protocol, PAT allowed for increased spatial resolution through a 3-fold reduction in mean voxel size for each of the 5 stations. Mean SNR and CNR values over all specified vessel segments decreased by a factor of 1.58 and 1.56, respectively. Despite the reduced SNR and CNR, the depiction of all specified vessel segments increased in PAT images, reflecting the increased spatial resolution. Qualitative comparison of standard and PAT images showed an increase in vessel segment conspicuity with more detailed depiction of intramuscular arterial branches in all volunteers. The time for image data reconstruction of all 5 stations was significantly increased from about 10 minutes to 40 minutes when using the PAT acquisition. (orig.) [de

  18. Pro SharePoint 2013 administration

    CERN Document Server

    Garrett, Robert

    2013-01-01

    Pro SharePoint 2013 Administration is a practical guide to SharePoint 2013 for intermediate to advanced SharePoint administrators and power users, covering the out-of-the-box feature set and capabilities of Microsoft's collaboration and business productivity platform. SharePoint 2013 is an incredibly complex product, with many moving parts, new features, best practices, and 'gotchas.' Author Rob Garrett distills SharePoint's portfolio of features, capabilities, and utilities into an in-depth professional guide-with no fluff and copious advice-that is designed from scratch to be the manual Micr

  19. Nanotexturing of surfaces to reduce melting point.

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Ernest J.; Zubia, David (University of Texas at El Paso, El Paso, TX); Mireles, Jose (Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Mexico); Marquez, Noel (University of Texas at El Paso, El Paso, TX); Quinones, Stella (University of Texas at El Paso, El Paso, TX)

    2011-11-01

    This investigation examined the use of nano-patterned structures on Silicon-on-Insulator (SOI) material to reduce the bulk material melting point (1414 °C). It has been found that sharp-tipped and other similar structures have a propensity to move to the lower energy states of spherical structures and as a result exhibit lower melting points than the bulk material. Such a reduction of the melting point would offer a number of interesting opportunities for bonding in microsystems packaging applications. Nano-patterning process capabilities were developed to create the required structures for the investigation. One of the technical challenges of the project was understanding and creating the specialized conditions required to observe the melting and reshaping phenomena. Through systematic experimentation and review of the literature these conditions were determined and used to conduct phase change experiments. Melting temperatures as low as 1030 °C were observed.

  20. Imaging study on acupuncture points

    Science.gov (United States)

    Yan, X. H.; Zhang, X. Y.; Liu, C. L.; Dang, R. S.; Ando, M.; Sugiyama, H.; Chen, H. S.; Ding, G. H.

    2009-09-01

    The topographic structures of acupuncture points were investigated using the synchrotron-radiation-based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that an accumulation of micro-vessels exists in the acupuncture point regions. Images taken in the surrounding tissue outside the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.

  1. Imaging study on acupuncture points

    International Nuclear Information System (INIS)

    Yan, X H; Zhang, X Y; Liu, C L; Dang, R S; Ando, M; Sugiyama, H; Chen, H S; Ding, G H

    2009-01-01

    The topographic structures of acupuncture points were investigated using the synchrotron-radiation-based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that an accumulation of micro-vessels exists in the acupuncture point regions. Images taken in the surrounding tissue outside the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.

  2. The End of Points

    Science.gov (United States)

    Feldman, Jo

    2018-01-01

    Have teachers become too dependent on points? This article explores educators' dependency on their points systems, and the ways that points can distract teachers from really analyzing students' capabilities and achievements. Feldman argues that using a more subjective grading system can help illuminate crucial information about students and what…

  3. Sensitivity of surface roughness parameters to changes in the density of scanning points in multi-scale AFM studies. Application to a biomaterial surface

    International Nuclear Information System (INIS)

    Mendez-Vilas, A.; Bruque, J.M.; Gonzalez-Martin, M.L.

    2007-01-01

    In the field of biomaterial surfaces, the ability of the atomic force microscope (AFM) to access the surface structure at unprecedented spatial (vertical and lateral) resolution is helping to build a better understanding of how topography affects the overall interaction of biological cells with the material surface. Since cells of widely differing sizes are in contact with the biomaterial surface, the surface structure must be quantified over a correspondingly wide range of dimensional scales. With the advent of the AFM, this can be done routinely in the lab. In this work, we show that even though such a scale-dependent study is clearly needed, AFM maps of the biomaterial surface taken at different scanning lengths are not completely consistent when they are acquired at the same scanning resolution, as is usually done: AFM images of different scanning areas have different point-to-point physical distances. We show that this effect influences the quantification of the average (Ra) and rms (Rq) roughness parameters determined at different length scales. This is the first time this inconsistency has been reported, and it should be taken into account when roughness is measured in this way. Since the differences will generally be in the range of nanometres, this is especially relevant for processes involving the interaction of the biomaterial surface with small biocolloids such as bacteria, whereas the effect should not pose a problem for larger animal cells
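
    The sketch below is a hypothetical illustration (not from the paper) of the two quantities involved: the average (Ra) and rms (Rq) roughness of a height map, and the point-to-point distance implied by keeping the same number of scanning points per line while the scan length changes. The 512-point scanning resolution and the scan lengths are assumptions.

    import numpy as np

    def roughness(z):
        """Average (Ra) and root-mean-square (Rq) roughness of a height map z."""
        dz = z - z.mean()
        return np.abs(dz).mean(), np.sqrt((dz ** 2).mean())

    pixels_per_line = 512                    # same scanning resolution for every image
    for scan_length_um in (1.0, 5.0, 20.0):  # different scanning areas
        spacing_nm = scan_length_um * 1000.0 / pixels_per_line
        print(f"{scan_length_um:5.1f} um scan -> point-to-point distance {spacing_nm:7.2f} nm")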

  4. Pointo - a Low Cost Solution to Point Cloud Processing

    Science.gov (United States)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, is becoming an everyday task. 3D point clouds are usually captured either with very expensive tools such as 3D laser scanners or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as a very large package containing a variety of methods and tools. The result is software that is expensive to acquire and also difficult to use, the difficulty being caused by the complicated user interfaces required to accommodate a long list of features. The aim of such complex software is to provide a powerful tool for a specific group of specialists, but most of these features are not needed by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are often compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to bear the high acquisition costs of this software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, it focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at once. This simple, user-oriented design improves the user experience and allows us to optimize our methods to create efficient software. We introduce the Pointo family as a series of connected programs that provide easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family to provide a

  5. Beginning SharePoint Designer 2010

    CERN Document Server

    Windischman, Woodrow W; Rehmani, Asif

    2010-01-01

    Teaching Web designers, developers, and IT professionals how to use the new version of SharePoint Designer. Covering both the design and business applications of SharePoint Designer, this complete Wrox guide brings readers thoroughly up to speed on how to use SharePoint Designer in an enterprise. You'll learn to create and modify web pages, use CSS editing tools to modify themes, use Data View to create interactivity with SharePoint and other data, and much more. Coverage includes integration points with Visual Studio, Visio, and InfoPath. Shows web designers, developers, and IT professionals

  6. SharePoint 2013 for dummies

    CERN Document Server

    Withee, Ken

    2013-01-01

    The bestselling guide on running SharePoint, now updated to cover all the new features of SharePoint 2013 SharePoint Portal Server is an essential part of the enterprise infrastructure for many businesses. Building on the success of previous versions of SharePoint For Dummies, this new edition covers all the latest features of SharePoint 2013 and provides you with an easy-to-understand resource for making the most of all that this version has to offer. You'll learn how to get a site up and running, branded, and populated with content, workflow, and management. In addition, t

  7. Statistical model based iterative reconstruction (MBIR) in clinical CT systems. Part II. Experimental assessment of spatial resolution performance

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ke; Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Garrett, John; Ge, Yongshuai [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States)

    2014-07-15

    Purpose: Statistical model based iterative reconstruction (MBIR) methods have been introduced to clinical CT systems and are being used in some clinical diagnostic applications. The purpose of this paper is to experimentally assess the unique spatial resolution characteristics of this nonlinear reconstruction method and identify its potential impact on the detectabilities and the associated radiation dose levels for specific imaging tasks. Methods: The thoracic section of a pediatric phantom was repeatedly scanned 50 or 100 times using a 64-slice clinical CT scanner at four different dose levels [CTDIvol = 4, 8, 12, 16 (mGy)]. Both filtered backprojection (FBP) and MBIR (Veo®, GE Healthcare, Waukesha, WI) were used for image reconstruction and results were compared with one another. Eight test objects in the phantom with contrast levels ranging from 13 to 1710 HU were used to assess spatial resolution. The axial spatial resolution was quantified with the point spread function (PSF), while the z resolution was quantified with the slice sensitivity profile. Both were measured locally on the test objects and in the image domain. The dependence of spatial resolution on contrast and dose levels was studied. The study also features a systematic investigation of the potential trade-off between spatial resolution and locally defined noise and their joint impact on the overall image quality, which was quantified by the image domain-based channelized Hotelling observer (CHO) detectability index d′. Results: (1) The axial spatial resolution of MBIR depends on both radiation dose level and image contrast level, whereas it is supposedly independent of these two factors in FBP. The axial spatial resolution of MBIR always improved with an increasing radiation dose level and/or contrast level. (2) The axial spatial resolution of MBIR became equivalent to that of FBP at some transitional contrast level, above which MBIR demonstrated superior spatial resolution than
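
    For orientation only, the following sketch shows one common way to turn a measured 1-D PSF profile into a FWHM figure like those discussed above: locate the half-maximum crossings and refine them by linear interpolation. It is an assumed, generic implementation, not the one used in the paper, and the pixel size and synthetic profile are placeholders.

    import numpy as np

    def fwhm(profile, pixel_size_mm):
        """FWHM of a background-subtracted 1-D profile with a single interior peak."""
        y = profile - profile.min()
        half = 0.5 * y.max()
        above = np.where(y >= half)[0]
        left, right = above[0], above[-1]
        # refine both half-maximum crossings by linear interpolation
        l = left - (y[left] - half) / (y[left] - y[left - 1])
        r = right + (y[right] - half) / (y[right] - y[right + 1])
        return (r - l) * pixel_size_mm

    x = np.arange(-20, 21)
    psf = np.exp(-0.5 * (x / 3.0) ** 2)        # synthetic Gaussian PSF, sigma = 3 pixels
    print(fwhm(psf, pixel_size_mm=0.5))        # ~3.5 mm (2.355 * sigma * pixel size)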

  8. The study and analysis of point-to-point vibration isolation and its utility to seismic base isolator

    International Nuclear Information System (INIS)

    Mehboob, M.; Qureshi, A.S.

    2001-01-01

    This paper presents a systematic approach to piecewise vibration isolation, generally termed a point-to-point vibration isolation system, and its broad-spectrum utility for economic seismic base isolation. Transfer curves for Coulomb-damped (i.e. softening-damper) flexible mountings are presented, and the approach is shown to work equally well for both rigidly and elastically coupled damping. It is clearly shown that very close solutions are readily obtainable for both the slipping and sticking phases of the motion, which eliminates the conventional conceptual approximations based on linearization of the damping. A super-structure mounted on such isolation systems will not be endangered by this new concept. (author)

  9. Pro SharePoint 2010 Search

    CERN Document Server

    Noble, J; Bakman-Mikalski, Dan

    2011-01-01

    Pro SharePoint 2010 Search gives you expert advice on planning, deploying and customizing searches in SharePoint 2010. Drawing on the authors' extensive experience of working with real-world SharePoint deployments, this book teaches everything you'll need to know to create well-designed SharePoint solutions that always keep the end-user's experience in mind. Increase your search efficiency with SharePoint 2010's search functionality: extend the search user interface using third-party tools, and utilize analytics to improve relevancy. This practical hands-on book is a must-have resource for any

  10. Imaging Cajal's neuronal avalanche: how wide-field optical imaging of the point-spread advanced the understanding of neocortical structure-function relationship.

    Science.gov (United States)

    Frostig, Ron D; Chen-Bee, Cynthia H; Johnson, Brett A; Jacobs, Nathan S

    2017-07-01

    This review brings together a collection of studies that specifically use wide-field high-resolution mesoscopic level imaging techniques (intrinsic signal optical imaging; voltage-sensitive dye optical imaging) to image the cortical point spread (PS): the total spread of cortical activation comprising a large neuronal ensemble evoked by spatially restricted (point) stimulation of the sensory periphery (e.g., whisker, pure tone, point visual stimulation). The collective imaging findings, combined with supporting anatomical and electrophysiological findings, revealed some key aspects about the PS including its very large (radius of several mm) and relatively symmetrical spatial extent capable of crossing cytoarchitectural borders and trespassing into other cortical areas; its relationship with underlying evoked subthreshold activity and underlying anatomical system of long-range horizontal projections within gray matter, both also crossing borders; its contextual modulation and plasticity; the ability of its relative spatiotemporal profile to remain invariant to major changes in stimulation parameters; its potential role as a building block for integrative cortical activity; and its ubiquitous presence across various cortical areas and across mammalian species. Together, these findings advance our understanding about the neocortex at the mesoscopic level by underscoring that the cortical PS constitutes a fundamental motif of neocortical structure-function relationship.

  11. Point defects in lithium fluoride films for micro-radiography, X-ray microscopy and photonic applications

    Energy Technology Data Exchange (ETDEWEB)

    Bonfigli, F.; Flora, F.; Marolo, T.; Montereali, R.M.; Baldacchini, G. [ENEA, UTS Tecnologie Fisiche Avanzate, C.R. Frascati, Via E. Fermi, 45, 00044 Frascati (Rome) (Italy); Faenov, A.Ya.; Pikuz, T.A. [MISDC of VNIIFTRI Mendeleevo, Moscow region, 141570 (Russian Federation); Nichelatti, E. [ENEA, UTS Tecnologie Fisiche Avanzate, C.R. Casaccia, Via Anguillarese, 301, 00060 Santa Maria di Galeria (Rome) (Italy); Reale, L. [Universita dell' Aquila e INFN, Dip. di Fisica, Coppito, L' Aquila (Italy)

    2005-01-01

    Point defects in lithium fluoride (LiF) have recently attracted renewed attention due to the exciting results obtained in the realisation of miniaturised optical devices. Among light-emitting materials, LiF is of particular interest because it is almost non-hygroscopic and can host, even at room temperature, stable color centers (CCs) that emit light in the visible and in the near-infrared spectral range under optical excitation. The increasing demand for low-dimensionality photonic devices imposes the use of advanced irradiation methods for producing luminescent structures with high spatial resolution. An innovative irradiation technique to produce luminescent CCs in LiF crystals and films by using an extreme ultra-violet and soft X-ray laser-plasma source will be presented. This technique is capable of inducing colored patterns with submicrometric spatial resolution on large areas in a short exposure time compared with other irradiation methods. Luminescent regular arrays produced by this irradiation technique will be shown. Recently, the idea of using a LiF film as an image detector for X-ray microscopy and micro-radiography based on optically stimulated luminescence from CCs has been developed. (copyright 2005 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  12. Methods for registration laser scanner point clouds in forest stands

    International Nuclear Information System (INIS)

    Bienert, A.; Pech, K.; Maas, H.-G.

    2011-01-01

    Laser scanning is a fast and efficient 3-D measurement technique to capture surface points describing the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning is attracting growing interest for forestry applications. These two recording platforms show large differences in resolution, recording area and scan viewing direction. Using both datasets in a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods are presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In the first step, tree detection is performed in both datasets in an automatic manner. In the second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided into a coarse and a fine registration. After the coarse registration, the fine registration is done in an iterative manner (ICP) using the point clouds themselves. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill the forestry requirements [de
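
    As an illustration of the final fine-registration step mentioned above, here is a minimal point-to-point ICP loop with an SVD-based rigid transform. It is a generic sketch written under the assumption that a coarse registration has already been applied; it is not the authors' implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(source, target, iterations=30):
        """Iteratively align a coarsely registered source cloud to the target cloud."""
        tree = cKDTree(target)
        aligned = source.copy()
        for _ in range(iterations):
            _, idx = tree.query(aligned)                    # closest-point correspondences
            R, t = best_rigid_transform(aligned, target[idx])
            aligned = aligned @ R.T + t
        return aligned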

  13. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....

  14. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....

  15. Design and fabrication of a diffractive beam splitter for dual-wavelength and concurrent irradiation of process points.

    Science.gov (United States)

    Amako, Jun; Shinozaki, Yu

    2016-07-11

    We report on a dual-wavelength diffractive beam splitter designed for use in parallel laser processing. This novel optical element generates two beam arrays of different wavelengths and allows their overlap at the process points on a workpiece. To design the deep surface-relief profile of a splitter using a simulated annealing algorithm, we introduce a heuristic but practical scheme to determine the maximum depth and the number of quantization levels. The designed corrugations were fabricated in a photoresist by maskless grayscale exposure using a high-resolution spatial light modulator. We characterized the photoresist splitter, thereby validating the proposed beam-splitting concept.

  16. Inhibition effect of calcium hydroxide point and chlorhexidine point on root canal bacteria of necrosis teeth

    Directory of Open Access Journals (Sweden)

    Andry Leonard Je

    2006-03-01

    Full Text Available Calcium hydroxide points and chlorhexidine points are new drugs for eliminating bacteria in the root canal. The points slowly release calcium hydroxide and chlorhexidine into the root canal in a controlled way. The purpose of the study was to determine the effectiveness of the calcium hydroxide point (calcium hydroxide plus point) and the chlorhexidine point in eliminating root canal bacteria of necrotic teeth. In this study 14 subjects were divided into 2 groups. The first group was treated with calcium hydroxide points and the second with chlorhexidine points. The bacteriological samples were measured with spectrophotometry. Paired t-test analysis (before versus after) showed a significant difference in both the first and the second group. The independent t-test, which compared the effectiveness of the two groups, did not show a significant difference. Although there was no significant difference in the statistical test, the second group eliminated more bacteria than the first group. The present findings indicate that the use of the chlorhexidine point was better than the calcium hydroxide point over a seven-day period. The conclusion is that the chlorhexidine point and the calcium hydroxide point, used as root canal medicaments, effectively eliminate root canal bacteria of necrotic teeth.

  17. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    Science.gov (United States)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  18. Single cell analysis of G1 check points-the relationship between the restriction point and phosphorylation of pRb

    International Nuclear Information System (INIS)

    Martinsson, Hanna-Stina; Starborg, Maria; Erlandsson, Fredrik; Zetterberg, Anders

    2005-01-01

    Single cell analysis allows high resolution investigation of temporal relationships between transition events in G1. It has been suggested that phosphorylation of the retinoblastoma tumor suppressor protein (pRb) is the molecular mechanism behind passage through the restriction point (R). We performed a detailed single cell study of the temporal relationship between R and pRb phosphorylation in human fibroblasts using time-lapse video-microscopy combined with immunocytochemistry. Four principally different criteria for pRb phosphorylation were used, namely (i) phosphorylation of residues Ser795 and Ser780, (ii) degree of pRb association with the nuclear structure, a property that is closely related to pRb phosphorylation status, (iii) release of the transcription factor E2F-1 from pRb, and (iv) accumulation of cyclin E, which is dependent on phosphorylation of pRb. The analyses of individual cells revealed that passage through R preceded phosphorylation of pRb, which occurs in a gradually increasing proportion of cells in late G1. Our data clearly suggest that pRb phosphorylation is not the molecular mechanism behind the passage through R. The restriction point and phosphorylation of pRb thus seem to represent two separate checkpoints in G1

  19. New England observed and predicted median July stream/river temperature points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted median July stream/river temperatures in New England based on a spatial statistical network...

  20. New England observed and predicted median August stream/river temperature points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted median August stream/river temperatures in New England based on a spatial statistical network...

  1. Automated Verification of Spatial Resolution in Remotely Sensed Imagery

    Science.gov (United States)

    Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald

    2011-01-01

    Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data
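
    To make the estimators named above concrete, the sketch below computes an MTF and a relative edge response from a single 1-D profile sampled across a uniform, high-contrast rising edge. It is a simplified stand-in for what a tool like the SRVT might do, with 1-pixel sampling, a coarse edge location and a synthetic edge as assumptions; it is not the SRVT code itself.

    import numpy as np

    def mtf_and_rer(edge_profile):
        """MTF (normalized to 1 at zero frequency) and relative edge response of a rising edge."""
        esf = (edge_profile - edge_profile.min()) / np.ptp(edge_profile)  # edge spread function
        lsf = np.gradient(esf)                                            # line spread function
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        center = np.argmin(np.abs(esf - 0.5))        # coarse edge location, in pixels
        x = np.arange(len(esf)) - center
        rer = np.interp(0.5, x, esf) - np.interp(-0.5, x, esf)
        return mtf, rer

    edge = 100 + 900 / (1 + np.exp(-np.linspace(-8, 8, 65)))   # synthetic blurred edge
    mtf, rer = mtf_and_rer(edge)
    print(rer, mtf[:5])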

  2. Scaling point/plot measurements of greenhouse gas fluxes, balances and intensities to whole-farms and landscapes

    NARCIS (Netherlands)

    Rosenstock, T.S.; Rufino, Mariana; Chirinda, N.; Bussel, van L.G.J.; Reidsma, P.; Butterbach-Bahl, K.

    2015-01-01

    Measurements of nutrient stocks and greenhouse gas (GHG) fluxes are typically collected at very local scales (<1 to 30 m2) and then extrapolated to estimate impacts at larger spatial extents (farms, landscapes, or even countries). Translating point measurements to higher levels of aggregation is

  3. Influence of temporally variable groundwater flow conditions on point measurements and contaminant mass flux estimations

    DEFF Research Database (Denmark)

    Rein, Arno; Bauer, S; Dietrich, P

    2009-01-01

    Monitoring of contaminant concentrations, e.g. for the estimation of mass discharge or contaminant degradation rates, is often based on point measurements at observation wells. In addition to the problem that point measurements may not be spatially representative, a further complication may ari...

  4. Temporal-spatial distribution of non-point source pollution in a drinking water source reservoir watershed based on SWAT

    Directory of Open Access Journals (Sweden)

    M. Wang

    2015-05-01

    Full Text Available The conservation of drinking water source reservoirs is closely linked to regional economic development and people's livelihoods. Research on the non-point pollution characteristics of such a reservoir's watershed is crucial for reservoir security. The Tang Pu Reservoir watershed was selected as the study area, and a non-point pollution model of the Tang Pu Reservoir was established based on the SWAT (Soil and Water Assessment Tool) model. The model was adjusted to analyse the temporal-spatial distribution patterns of total nitrogen (TN) and total phosphorus (TP). The results showed that the losses of TN and TP in the reservoir watershed were related to precipitation in the flood season, and the annual changes showed an "M" shape. The contributions of TN and TP losses accounted for 84.5% and 85.3% in high-flow years and 70.3% and 69.7% in low-flow years, respectively. The contributions in normal-flow years were 62.9% and 63.3%, respectively. The TN and TP mainly arise from Wangtan town, Gulai town, and Wangyuan town, among others. In addition, the sources of TN and TP were found to be spatially consistent.

  5. Do acupuncture points exist?

    International Nuclear Information System (INIS)

    Yan Xiaohui; Zhang Xinyi; Liu Chenglin; Dang, Ruishan; Huang Yuying; He Wei; Ding Guanghong

    2009-01-01

    We used synchrotron x-ray fluorescence analysis to probe the distribution of four chemical elements in and around acupuncture points, two located in the forearm and two in the lower leg. Three of the four acupuncture points showed significantly elevated concentrations of elements Ca, Fe, Cu and Zn in relation to levels in the surrounding tissue, with similar elevation ratios for Cu and Fe. The mapped distribution of these elements implies that each acupuncture point seems to be elliptical with the long axis along the meridian. (note)

  6. Do acupuncture points exist?

    Energy Technology Data Exchange (ETDEWEB)

    Yan Xiaohui; Zhang Xinyi [Department of Physics, Surface Physics Laboratory (State Key Laboratory), and Synchrotron Radiation Research Center of Fudan University, Shanghai 200433 (China); Liu Chenglin [Physics Department of Yancheng Teachers' College, Yancheng 224002 (China); Dang, Ruishan [Second Military Medical University, Shanghai 200433 (China); Huang Yuying; He Wei [Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100039 (China); Ding Guanghong [Shanghai Research Center of Acupuncture and Meridian, Pudong, Shanghai 201203 (China)

    2009-05-07

    We used synchrotron x-ray fluorescence analysis to probe the distribution of four chemical elements in and around acupuncture points, two located in the forearm and two in the lower leg. Three of the four acupuncture points showed significantly elevated concentrations of elements Ca, Fe, Cu and Zn in relation to levels in the surrounding tissue, with similar elevation ratios for Cu and Fe. The mapped distribution of these elements implies that each acupuncture point seems to be elliptical with the long axis along the meridian. (note)

  7. Multi-Point Measurements to Characterize Radiation Belt Electron Precipitation Loss

    Science.gov (United States)

    Blum, L. W.

    2017-12-01

    Multipoint measurements in the inner magnetosphere allow the spatial and temporal evolution of various particle populations and wave modes to be disentangled. To better characterize and quantify radiation belt precipitation loss, we utilize multi-point measurements both to study precipitating electrons directly and to study the potential drivers of this loss process. Magnetically conjugate CubeSat and balloon measurements are combined to estimate the temporal and spatial characteristics of dusk-side precipitation features and quantify loss due to these events. To then understand the drivers of precipitation events, and what determines their spatial structure, we utilize measurements from the dual Van Allen Probes to estimate spatial and temporal scales of various wave modes in the inner magnetosphere, and compare these to precipitation characteristics. The structure, timing, and spatial extent of waves are compared to those of MeV electron precipitation during a few individual events to determine when and where EMIC waves cause radiation belt electron precipitation. Magnetically conjugate measurements provide observational support of the theoretical picture of duskside interaction of EMIC waves and MeV electrons leading to radiation belt loss. Finally, understanding the drivers controlling the spatial scales of wave activity in the inner magnetosphere is critical for uncovering the underlying physics behind the wave generation as well as for better predicting where and when waves will be present. Again using multipoint measurements from the Van Allen Probes, we estimate the spatial and temporal extents and evolution of plasma structures and their gradients in the inner magnetosphere, to better understand the drivers of magnetospheric wave characteristic scales. In particular, we focus on EMIC waves and the plasma parameters important for their growth, namely cold plasma density and cool and warm ion density, anisotropy, and composition.

  8. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.

  9. An Innovative Metric to Evaluate Satellite Precipitation's Spatial Distribution

    Science.gov (United States)

    Liu, H.; Chu, W.; Gao, X.; Sorooshian, S.

    2011-12-01

    Thanks to their ability to cover mountains, where ground measurement instruments cannot reach, satellites provide a good means of estimating precipitation over mountainous regions. In regions with complex terrain, accurate information on the high-resolution spatial distribution of precipitation is critical for many important issues, such as flood/landslide warning, reservoir operation, and water system planning. Therefore, in order to be useful in many practical applications, satellite precipitation products should characterize this spatial distribution well. However, most existing validation metrics, which are based on point/grid comparison using simple statistics, cannot effectively measure a satellite's skill in capturing the spatial patterns of precipitation fields. This deficiency results from the fact that point/grid-wise comparison does not take into account the spatial coherence of precipitation fields. Furthermore, another weakness of many metrics is that they can barely provide information on why satellite products perform well or poorly. Motivated by our recent findings of consistent spatial patterns in the precipitation field over the western U.S., we developed a new metric utilizing EOF analysis and Shannon entropy. The metric is derived in two steps: 1) capture the dominant spatial patterns of precipitation fields from both satellite products and reference data through EOF analysis, and 2) compute the similarities between the corresponding dominant patterns using a mutual information measure defined with Shannon entropy. Instead of individual points/grids, the new metric treats the entire precipitation field simultaneously, naturally taking advantage of spatial dependence. Since the dominant spatial patterns are shaped by physical processes, the new metric can shed light on why a satellite product can or cannot capture the spatial patterns. For demonstration, an experiment was carried out to evaluate a satellite
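
    The two steps just described can be sketched roughly as follows (a hypothetical illustration, not the authors' implementation): leading EOFs obtained from the SVD of the space-time anomaly matrix, and a histogram-based Shannon mutual information score between corresponding satellite and reference patterns. The grid size, number of modes, bin count and synthetic data are placeholder assumptions.

    import numpy as np

    def leading_eofs(fields, n_modes=3):
        """fields: (time, space) matrix; returns the n_modes leading spatial patterns."""
        anomalies = fields - fields.mean(axis=0)
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        return vt[:n_modes]

    def mutual_information(a, b, bins=32):
        """Shannon mutual information (bits) between two flattened spatial patterns."""
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(0)
    reference = rng.random((120, 400))                   # 120 fields on a flattened 20 x 20 grid
    satellite = reference + 0.1 * rng.random((120, 400))
    score = np.mean([mutual_information(s, r)
                     for s, r in zip(leading_eofs(satellite), leading_eofs(reference))])
    print(score)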

  10. Publication point indicators

    DEFF Research Database (Denmark)

    Elleby, Anita; Ingwersen, Peter

    2010-01-01

    The paper presents comparative analyses of two publication point systems, the Norwegian one and the in-house system of the interdisciplinary Danish Institute for International Studies (DIIS), used as cases in the study for publications published in 2006, and compares central citation-based indicators...... with novel publication point indicators (PPIs) that are formalized and exemplified. Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the citation delay and citedness for the different document types published by DIIS...... for all document types. Statistically significant correlations were only found between WoS and GS and between the two publication point systems, respectively. The study demonstrates how the nCPPI can be applied to institutions as evaluation tools supplementary to JCI in various combinations...

  11. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus be used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography only weakly affects wave propagation at these long periods, so the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  12. Witnesses to the truth: Mark's point of view

    African Journals Online (AJOL)

    2016-08-12

    Aug 12, 2016 ... given to the role, function and rhetorical impact of point of view. It is argued that ... his point of view.6. Because the voice from heaven addressed Jesus directly, ..... τῶν Φαρισαίων) see that Jesus is sitting with the tax collectors.

  13. Miniature x-ray point source for alignment and calibration of x-ray optics

    International Nuclear Information System (INIS)

    Price, R.H.; Boyle, M.J.; Glaros, S.S.

    1977-01-01

    A miniature x-ray point source of high brightness similar to that of Rovinsky, et al. is described. One version of the x-ray source is used to align the x-ray optics on the Argus and Shiva laser systems. A second version is used to determine the spatial and spectral transmission functions of the x-ray optics. The spatial and spectral characteristics of the x-ray emission from the x-ray point source are described. The physical constraints including size, intensity and thermal limitations, and useful lifetime are discussed. The alignment and calibration techniques for various x-ray optics and detector combinations are described

  14. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    Science.gov (United States)

    Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.

    2002-01-01

    The hard-X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point source analyses from comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible options for the numerous compact sources that are observed.

  15. PowerPoint 2013 bible

    CERN Document Server

    Wempen, Faithe

    2013-01-01

    Master PowerPoint and improve your presentation skills with one book! In today's business climate, you need to know PowerPoint inside and out, and that's not all. You also need to be able to make a presentation that makes an impact. From using sophisticated transitions and animation in your PowerPoint presentations to interfacing in person with your audience, this information-packed book helps you succeed. Start creating professional-quality slides that captivate audiences and discover essential tips and techniques for making first-rate presentations, whether you're at a podium or

  16. A Survivable Wavelength Division Multiplexing Passive Optical Network with Both Point-to-Point Service and Broadcast Service Delivery

    Science.gov (United States)

    Ma, Xuejiao; Gan, Chaoqin; Deng, Shiqi; Huang, Yan

    2011-11-01

    A survivable wavelength division multiplexing passive optical network enabling both point-to-point service and broadcast service is presented and demonstrated. This architecture provides an automatic traffic recovery against feeder and distribution fiber link failure, respectively. In addition, it also simplifies the protection design for multiple services transmission in wavelength division multiplexing passive optical networks.

  17. Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning

    Science.gov (United States)

    Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.

    2017-12-01

    Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.
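
    As a rough sketch of the kind of network this description suggests (an assumption on our part, not the authors' architecture), here is a minimal SRCNN-style model in PyTorch that maps an already-upsampled low-resolution multispectral image to an enhanced one; the band count and layer sizes are placeholders.

    import torch
    import torch.nn as nn

    class MultispectralSR(nn.Module):
        def __init__(self, bands: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(bands, 64, kernel_size=9, padding=4),   # patch extraction
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, kernel_size=1),                 # non-linear mapping
                nn.ReLU(inplace=True),
                nn.Conv2d(32, bands, kernel_size=5, padding=2),   # reconstruction
            )

        def forward(self, x):
            # x: bicubically upsampled low-resolution image, shape (batch, bands, H, W)
            return self.net(x)

    model = MultispectralSR(bands=4)
    print(model(torch.rand(1, 4, 128, 128)).shape)    # torch.Size([1, 4, 128, 128])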

  18. Magnonic triply-degenerate nodal points

    Science.gov (United States)

    Owerre, S. A.

    2017-12-01

    We generalize the concept of triply-degenerate nodal points to non-collinear antiferromagnets. Here, we introduce this concept to insulating quantum antiferromagnets on the decorated honeycomb lattice, with spin-1 bosonic quasiparticle excitations known as magnons. We demonstrate the existence of magnonic surface states with constant energy contours that form pairs of magnonic arcs connecting the surface projection of the magnonic triple nodal points. The quasiparticle excitations near the triple nodal points represent three-component bosons beyond that of magnonic Dirac, Weyl, and nodal-line cases. They can be regarded as a direct reflection of the intrinsic spin carried by magnons. Furthermore, we show that the magnonic triple nodal points can split into magnonic Weyl points, as the system transits from a non-collinear spin structure to a non-coplanar one with a non-zero scalar spin chirality. Our results not only apply to insulating antiferromagnets, but also provide a platform to seek for triple nodal points in metallic antiferromagnets.

  19. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    OpenAIRE

    Kotasidis Fotis A.; Kotasidis Fotis A.; Angelis Georgios I.; Anton-Rodriguez Jose; Matthews Julian C.; Reader Andrew J.; Reader Andrew J.; Zaidi Habib; Zaidi Habib; Zaidi Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction usuall...

  20. Magic Pointing for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbegi, Diako; Pederson, Thomas

    2015-01-01

    In this paper, we propose a combination of head and eye movements for touchlessly controlling the "mouse pointer" on eyewear devices, exploiting the speed of eye pointing and accuracy of head pointing. The method is a wearable computer-targeted variation of the original MAGIC pointing approach...... which combined gaze tracking with a classical mouse device. The result of our experiment shows that the combination of eye and head movements is faster than head pointing for far targets and more accurate than eye pointing....

  1. Computing bubble-points of CO2

    NARCIS (Netherlands)

    Ramdin, M.; Balaji, S.P.; Vicent Luna, J.M.; Torres-Knoop, A; Chen, Q.; Dubbeldam, D.; Calero, S; de Loos, T.W.; Vlugt, T.J.H.

    2016-01-01

    Computing bubble-points of multicomponent mixtures using Monte Carlo simulations is a non-trivial task. A new method is used to compute gas compositions from a known temperature, bubble-point pressure, and liquid composition. Monte Carlo simulations are used to calculate the bubble-points of

  2. Mobile Laser Scanning along Dieppe coastal cliffs: reliability of the acquired point clouds applied to rockfall assessments

    Science.gov (United States)

    Michoud, Clément; Carrea, Dario; Augereau, Emmanuel; Cancouët, Romain; Costa, Stéphane; Davidson, Robert; Delacourt, Chirstophe; Derron, Marc-Henri; Jaboyedoff, Michel; Letortu, Pauline; Maquaire, Olivier

    2013-04-01

    Dieppe coastal cliffs, in Normandy, France, are mainly formed by sub-horizontal deposits of chalk and flintstone. The cliffs are largely destabilized by intense weathering and erosion by the Channel; small and large rockfalls are regularly observed and contribute to retrogressive cliff processes. During autumn 2012, cliff and intertidal topographies were acquired with a Terrestrial Laser Scanner (TLS) and a Mobile Laser Scanner (MLS), coupled with seafloor bathymetries realized with a multibeam echosounder (MBES). MLS is a recent development of laser scanning based on the same theoretical principles as aerial LiDAR, but using smaller, cheaper, portable devices. The MLS system, which is composed of an accurate dynamic positioning and orientation (INS) device and a long-range LiDAR, is mounted on a marine vessel; it is then possible to quickly acquire, in motion, georeferenced LiDAR point clouds with a resolution of about 15 cm. For example, it takes about 1 h to scan a shoreline 2 km long. MLS is becoming a promising technique supporting erosion and rockfall assessments along the shores of lakes, fjords or seas. In this study, the MLS system used to acquire the cliffs and intertidal areas of the Cap d'Ailly was composed of the INS Applanix POS-MV 320 V4 and the LiDAR Optech ILRIS LR. On the same day, three MLS scans with large overlaps (J1, J2 and J3) were performed at ranges from 600 m at 4 knots (low tide) up to 200 m at 2.2 knots (up tide) with a calm sea at 2.5 Beaufort (small wavelets). Mean scan resolutions range from 26 cm for the far scan (J1) to about 8.1 cm for the close scan (J3). Moreover, one TLS point cloud of this test site was acquired with a mean resolution of about 2.3 cm, using a Riegl LMS Z390i. In order to quantify the reliability of the methodology, comparisons between scans were performed with the software Polyworks™, calculating the shortest distances between the points of one cloud and the interpolated surface of the reference point cloud. A Mat

  3. ACCESS TO LEP POINT 5, CESSY (FRANCE)

    CERN Multimedia

    1999-01-01

    At the public environmental impact enquiry for the LHC project, the municipal authorities at Cessy suggested creating a new approach road to the civil engineering site (Point5) to ensure that materials deliveries by road are kept well away from housing along the Route de la Plaine.Following this recommendation, a track called the Chemin du Milieu has been upgraded into a road, and has been made available for the sole use of construction firms involved in building work at Point 5.The 'Dragados-Seli' consortium will be in charge of site surveillance for the new approach road.With effect from 29 March 1999, the present entrance will be closed to civil engineering firms and reserved for LEP installations maintenance services under the SL Division site managers, Mr. R. Spigato (tel. 160374) and P. Rey (tel. 160375).For obvious security reasons, those needing to use Point 5 are requested to keep the gate locked, even while they are on site, so as to prevent unauthorised persons from gaining access to the civil engi...

  4. Spatial resolution requirements for digital radiology

    International Nuclear Information System (INIS)

    Seeley, G.W.; Dallas, W.J.; Guillian, J.; Ovitt, T.; Standen, J.

    1990-01-01

    This paper describes research to define the spatial resolution needed to maintain diagnostic accuracy in digital systems. Posteroanterior images from 30 normal and 30 abnormal studies of patients with various stages of interstitial disease were digitized at 5 lp/mm with 12 bits of gray level and then processed in a computer to reduce spatial resolution from 5.0 to 2.5, 1.875, and 1.25 lp/mm. A Kodak laser writer, using a LUT devised to ensure that the copies had densities equal to those measured from the original images, was used to write the images back to film. These film images were then shown to radiologists (one resolution level per radiologist). They were asked to give their diagnosis and certainty for each image (receiver operating characteristic [ROC] paradigm) and also to rate each image on overall spatial and contrast resolution as well as the visibility of seven diagnostically important structures

  5. Fermat's point from five perspectives

    Science.gov (United States)

    Park, Jungeun; Flores, Alfinio

    2015-04-01

    The Fermat point of a triangle is the point that minimizes the sum of the distances from that point to the three vertices. Five approaches to studying the Fermat point of a triangle are presented in this article. First, students use a mechanical device with masses, strings and pulleys to study the Fermat point as the one that minimizes the potential energy of the system. Second, students use soap films between parallel planes connecting three pegs; the tension on the film is minimal when the sum of distances is minimal. Third, students use an empirical approach, measuring distances in an interactive GeoGebra page. Fourth, students use Euclidean geometry arguments for two proofs based on the Torricelli configuration, and one using Viviani's Theorem. Fifth, the kinematic method is used to gain additional insight into the size of the angles between the segments joining the Fermat point with the vertices.
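
    Beyond the GeoGebra-based empirical approach, the Fermat point can also be located numerically; the sketch below uses Weiszfeld's geometric-median iteration applied to the three vertices, which is an assumption of this illustration rather than one of the five approaches in the article (triangles with an angle of 120° or more, where the minimizer is that vertex, are not treated specially).

      # Sketch: locating the Fermat point numerically with Weiszfeld's iteration
      # (the geometric-median algorithm applied to the three vertices).
      import numpy as np

      def fermat_point(vertices, iterations=200):
          pts = np.asarray(vertices, dtype=float)      # shape (3, 2)
          p = pts.mean(axis=0)                         # start at the centroid
          for _ in range(iterations):
              d = np.linalg.norm(pts - p, axis=1)
              if np.any(d < 1e-12):                    # landed on a vertex
                  break
              w = 1.0 / d
              p = (pts * w[:, None]).sum(axis=0) / w.sum()
          return p

      print(fermat_point([(0, 0), (1, 0), (0.3, 0.9)]))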

  6. Realization of the Temperature Scale in the Range from 234.3 K (Hg Triple Point) to 1084.62°C (Cu Freezing Point) in Croatia

    Science.gov (United States)

    Zvizdic, Davor; Veliki, Tomislav; Grgec Bermanec, Lovorka

    2008-06-01

    This article describes the realization of the International Temperature Scale in the range from 234.3 K (mercury triple point) to 1084.62°C (copper freezing point) at the Laboratory for Process Measurement (LPM), Faculty of Mechanical Engineering and Naval Architecture (FSB), University of Zagreb. The system for the realization of the ITS-90 consists of the sealed fixed-point cells (mercury triple point, water triple point and gallium melting point) and the apparatus designed for the optimal realization of open fixed-point cells which include the gallium melting point, tin freezing point, zinc freezing point, aluminum freezing point, and copper freezing point. The maintenance of the open fixed-point cells is described, including the system for filling the cells with pure argon and for maintaining the pressure during the realization.

  7. A study of spatial resolution in pollution exposure modelling

    Directory of Open Access Journals (Sweden)

    Gustafsson Susanna

    2007-06-01

    Full Text Available Abstract Background This study is part of several ongoing projects concerning epidemiological research into the effects on health of exposure to air pollutants in the region of Scania, southern Sweden. The aim is to investigate the optimal spatial resolution, with respect to temporal resolution, for a pollutant database of NOx values which will be used mainly for epidemiological studies with durations of days, weeks or longer periods. The fact that a pollutant database has a fixed spatial resolution makes the choice critical for the future use of the database. Results The results from the study showed that the agreement between the modelled concentrations of the reference grid with high spatial resolution (100 m), denoted the fine grid, and the coarser grids (200, 400, 800 and 1600 m) improved with increasing spatial resolution. When the pollutant values were aggregated in time (from hours to days and weeks) the disagreement between the fine grid and the coarser grids was significantly reduced. The results also illustrate a considerable difference in optimal spatial resolution depending on the characteristics of the study area (rural or urban). To estimate the accuracy of the modelled values, comparisons were made with measured NOx values. The mean difference between the modelled and the measured values was 0.6 μg/m3, and the standard deviation of the daily difference was 5.9 μg/m3. Conclusion The choice of spatial resolution should not considerably deteriorate the accuracy of the modelled NOx values. Considering the comparison between modelled and measured values, we estimate that an error due to coarse resolution greater than 1 μg/m3 is inadvisable if a time resolution of one day is used. Based on the study of different spatial resolutions, we conclude that for urban areas a spatial resolution of 200–400 m is suitable, and for rural areas the spatial resolution could be coarser (about 1600 m). This implies that we should develop a pollutant

  8. Scanning SQUID susceptometers with sub-micron spatial resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kirtley, John R., E-mail: jkirtley@stanford.edu; Rosenberg, Aaron J.; Palmstrom, Johanna C.; Holland, Connor M.; Moler, Kathryn A. [Department of Applied Physics, Stanford University, Stanford, California 94305-4045 (United States); Paulius, Lisa [Department of Physics, Western Michigan University, Kalamazoo, Michigan 49008-5252 (United States); Spanton, Eric M. [Department of Physics, Stanford University, Stanford, California 94305-4045 (United States); Schiessl, Daniel [Attocube Systems AG, Königinstraße 11A, 80539 Munich (Germany); Jermain, Colin L.; Gibbons, Jonathan [Department of Physics, Cornell University, Cornell, Ithaca, New York 14853 (United States); Fung, Y.-K.K.; Gibson, Gerald W. [IBM Research Division, T. J. Watson Research Center, Yorktown Heights, New York 10598 (United States); Huber, Martin E. [Department of Physics, University of Colorado Denver, Denver, Colorado 80217-3364 (United States); Ralph, Daniel C. [Department of Physics, Cornell University, Cornell, Ithaca, New York 14853 (United States); Kavli Institute at Cornell, Ithaca, New York 14853 (United States); Ketchen, Mark B. [OcteVue, Hadley, Massachusetts 01035 (United States)

    2016-09-15

    Superconducting QUantum Interference Device (SQUID) microscopy has excellent magnetic field sensitivity, but suffers from modest spatial resolution when compared with other scanning probes. This spatial resolution is determined by both the size of the field sensitive area and the spacing between this area and the sample surface. In this paper we describe scanning SQUID susceptometers that achieve sub-micron spatial resolution while retaining a white noise floor flux sensitivity of ≈2 μΦ₀/Hz^(1/2). This high spatial resolution is accomplished by deep sub-micron feature sizes, well shielded pickup loops fabricated using a planarized process, and a deep etch step that minimizes the spacing between the sample surface and the SQUID pickup loop. We describe the design, modeling, fabrication, and testing of these sensors. Although sub-micron spatial resolution has been achieved previously in scanning SQUID sensors, our sensors not only achieve high spatial resolution but also have integrated modulation coils for flux feedback, integrated field coils for susceptibility measurements, and batch processing. They are therefore a generally applicable tool for imaging sample magnetization, currents, and susceptibilities with higher spatial resolution than previous susceptometers.

  9. Scanning SQUID susceptometers with sub-micron spatial resolution

    International Nuclear Information System (INIS)

    Kirtley, John R.; Rosenberg, Aaron J.; Palmstrom, Johanna C.; Holland, Connor M.; Moler, Kathryn A.; Paulius, Lisa; Spanton, Eric M.; Schiessl, Daniel; Jermain, Colin L.; Gibbons, Jonathan; Fung, Y.-K.K.; Gibson, Gerald W.; Huber, Martin E.; Ralph, Daniel C.; Ketchen, Mark B.

    2016-01-01

    Superconducting QUantum Interference Device (SQUID) microscopy has excellent magnetic field sensitivity, but suffers from modest spatial resolution when compared with other scanning probes. This spatial resolution is determined by both the size of the field sensitive area and the spacing between this area and the sample surface. In this paper we describe scanning SQUID susceptometers that achieve sub-micron spatial resolution while retaining a white noise floor flux sensitivity of ≈2 μΦ₀/Hz^(1/2). This high spatial resolution is accomplished by deep sub-micron feature sizes, well shielded pickup loops fabricated using a planarized process, and a deep etch step that minimizes the spacing between the sample surface and the SQUID pickup loop. We describe the design, modeling, fabrication, and testing of these sensors. Although sub-micron spatial resolution has been achieved previously in scanning SQUID sensors, our sensors not only achieve high spatial resolution but also have integrated modulation coils for flux feedback, integrated field coils for susceptibility measurements, and batch processing. They are therefore a generally applicable tool for imaging sample magnetization, currents, and susceptibilities with higher spatial resolution than previous susceptometers.

  10. The effects of spatial sampling choices on MR temperature measurements.

    Science.gov (United States)

    Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L

    2011-02-01

    The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle) with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.

  11. Calculation of the spatial resolution in two-photon absorption spectroscopy applied to plasma diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Lechuga, M. [Departamento de Física Teórica, Atómica y Óptica, Universidad de Valladolid, 47011-Valladolid (Spain); Laser Processing Group, Instituto de Óptica “Daza de Valdés,” CSIC, 28006-Madrid (Spain); Fuentes, L. M. [Departamento de Física Aplicada, Universidad de Valladolid, 47011-Valladolid (Spain); Grützmacher, K.; Pérez, C., E-mail: concha@opt.uva.es; Rosa, M. I. de la [Departamento de Física Teórica, Atómica y Óptica, Universidad de Valladolid, 47011-Valladolid (Spain)

    2014-10-07

    We report a detailed characterization of the spatial resolution provided by two-photon absorption spectroscopy suited for plasma diagnosis via the 1S-2S transition of atomic hydrogen for optogalvanic detection and laser induced fluorescence (LIF). A precise knowledge of the spatial resolution is crucial for a correct interpretation of measurements, if the plasma parameters to be analysed undergo strong spatial variations. The present study is based on a novel approach which provides a reliable and realistic determination of the spatial resolution. Measured irradiance distributions of the laser beam waists in the overlap volume, provided by a high-resolution UV camera, are employed to resolve coupled rate equations accounting for two-photon excitation, fluorescence decay and ionization. The resulting three-dimensional yield distributions reveal in detail the spatial resolution for optogalvanic and LIF detection and related saturation due to depletion. Two-photon absorption profiles broader than the Fourier transform-limited laser bandwidth are also incorporated in the calculations. The approach allows an accurate analysis of the spatial resolution present in recent and future measurements.

  12. Calculation of the spatial resolution in two-photon absorption spectroscopy applied to plasma diagnosis

    International Nuclear Information System (INIS)

    Garcia-Lechuga, M.; Fuentes, L. M.; Grützmacher, K.; Pérez, C.; Rosa, M. I. de la

    2014-01-01

    We report a detailed characterization of the spatial resolution provided by two-photon absorption spectroscopy suited for plasma diagnosis via the 1S-2S transition of atomic hydrogen for optogalvanic detection and laser induced fluorescence (LIF). A precise knowledge of the spatial resolution is crucial for a correct interpretation of measurements, if the plasma parameters to be analysed undergo strong spatial variations. The present study is based on a novel approach which provides a reliable and realistic determination of the spatial resolution. Measured irradiance distributions of the laser beam waists in the overlap volume, provided by a high-resolution UV camera, are employed to resolve coupled rate equations accounting for two-photon excitation, fluorescence decay and ionization. The resulting three-dimensional yield distributions reveal in detail the spatial resolution for optogalvanic and LIF detection and related saturation due to depletion. Two-photon absorption profiles broader than the Fourier transform-limited laser bandwidth are also incorporated in the calculations. The approach allows an accurate analysis of the spatial resolution present in recent and future measurements.

  13. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  14. Point card compatible automatic vending machine for canned drink; Point card taio kan jido hanbaiki

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-01-10

    A point card compatible automatic vending machine for canned drinks has been developed, which provides drink manufacturers with a powerful tool to acquire selling sites and attract consumers. Since the machine is equipped with a device to handle point cards, regular customers have increased and sales have picked up. A point card issuing device is also installed, and the new machine issues a point card whenever a customer requests one. The drink manufacturers rate the vending machine highly because it will contribute to the diffusion of the point card system and because sales promotion campaigns may be conducted through the vending machine, for instance by exchanging a fully marked card for a giveaway on the spot. In the future, a bill validator (paper money identifier) will be integrated even into small machines to promote the diffusion of point card compatible machines. (translated by NEDO)

  15. DISCRETE FIXED POINT THEOREMS AND THEIR APPLICATION TO NASH EQUILIBRIUM

    OpenAIRE

    Sato, Junichi; Kawasaki, Hidefumi

    2007-01-01

    Fixed point theorems are powerful tools not only in mathematics but also in economics. In some economic problems we need not real-valued but integer-valued equilibria. However, classical fixed point theorems guarantee only real-valued equilibria, so we need discrete fixed point theorems in order to obtain discrete equilibria. In this paper, we first provide discrete fixed point theorems, then apply them to a non-cooperative game and prove the existence of a Nash equilibrium in pure strategies.
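
    As a minimal illustration of pure-strategy (hence integer-valued) equilibria in a finite game, the sketch below exhaustively checks every strategy pair of a two-player game for profitable deviations; the payoff matrices are made up and the code is not tied to the discrete fixed point theorems of the paper.

      # Sketch: exhaustive search for pure-strategy Nash equilibria of a finite
      # two-player game. The payoff matrices are made up for illustration.
      import numpy as np

      def pure_nash_equilibria(A, B):
          """A[i, j], B[i, j]: payoffs of players 1 and 2 for strategy pair (i, j)."""
          equilibria = []
          for i in range(A.shape[0]):
              for j in range(A.shape[1]):
                  best_for_1 = A[i, j] >= A[:, j].max()   # player 1 cannot deviate profitably
                  best_for_2 = B[i, j] >= B[i, :].max()   # player 2 cannot deviate profitably
                  if best_for_1 and best_for_2:
                      equilibria.append((i, j))
          return equilibria

      # Prisoner's-dilemma style example (0 = cooperate, 1 = defect):
      # (defect, defect) is the unique pure-strategy equilibrium.
      A = np.array([[3, 0], [5, 1]])
      B = np.array([[3, 5], [0, 1]])
      print(pure_nash_equilibria(A, B))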

  16. PowerPoint 2007 for Dummies

    CERN Document Server

    Lowe, Doug

    2007-01-01

    New and inexperienced PowerPoint users will discover how to use the latest enhancements to PowerPoint 2007 quickly and efficiently so that they can produce unique and informative presentations PowerPoint continues to be the world's most popular presentation software This updated For Dummies guide shows users different ways to create powerful and effective slideshow presentations that incorporate data from other applications in the form of charts, clip art, sound, and video Shares the key features of PowerPoint 2007 including creating and editing slides, working with hyperlinks and action butt

  17. POINT CLOUD DERIVED FROM VIDEO FRAMES: ACCURACY ASSESSMENT IN RELATION TO TERRESTRIAL LASER SCANNING AND DIGITAL CAMERA DATA

    Directory of Open Access Journals (Sweden)

    P. Delis

    2017-02-01

    Full Text Available The use of image sequences in the form of video frames recorded on data storage is very useful, especially when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed-focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired using a high-resolution camera, the NIKON D800. The research has shown that the highest accuracies are obtained for point clouds generated from video frames to which high-pass filtering and histogram equalization had been applied. Studies have shown that to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85 % or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
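
    The per-frame pre-processing described above (histogram equalization and sharpening) can be sketched as follows; the unsharp-mask parameters, the every-15th-frame selection and the file name are assumptions, since the abstract does not specify the exact filter or the frame-selection equation.

      # Sketch: per-frame pre-processing (histogram equalisation + sharpening)
      # before dense matching. Filter parameters and file name are assumptions.
      import cv2

      def preprocess_frame(frame_bgr):
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          equalised = cv2.equalizeHist(gray)                             # histogram equalisation
          blurred = cv2.GaussianBlur(equalised, (0, 0), sigmaX=3)
          sharpened = cv2.addWeighted(equalised, 1.5, blurred, -0.5, 0)  # unsharp mask
          return sharpened

      # Example: read a video, keep every 15th frame (a stand-in for the
      # overlap-based frame selection), and pre-process it.
      cap = cv2.VideoCapture("sequence.mp4")   # hypothetical file name
      index, kept = 0, []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          if index % 15 == 0:
              kept.append(preprocess_frame(frame))
          index += 1
      cap.release()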

  18. Method to minimize the low-frequency neutral-point voltage oscillations with time-offset injection for neutral-point-clamped inverters

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede

    2013-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...

  19. Indexing Moving Points

    DEFF Research Database (Denmark)

    Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff

    2003-01-01

    We propose three indexing schemes for storing a set S of N points in the plane, each moving along a linear trajectory, so that any query of the following form can be answered quickly: Given a rectangle R and a real value t, report all K points of S that lie inside R at time t. We first present an...
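
    The query semantics can be made concrete with the brute-force sketch below; the paper's contribution is the efficient indexing, which this linear scan deliberately does not attempt.

      # Sketch: the query semantics only, by brute force. The paper's indexing
      # schemes answer this query without scanning every point.
      def report_points_in_rectangle(points, rectangle, t):
          """points: list of (x0, y0, vx, vy) linear trajectories; position at
          time t is (x0 + vx*t, y0 + vy*t). rectangle: (xmin, xmax, ymin, ymax)."""
          xmin, xmax, ymin, ymax = rectangle
          hits = []
          for (x0, y0, vx, vy) in points:
              x, y = x0 + vx * t, y0 + vy * t
              if xmin <= x <= xmax and ymin <= y <= ymax:
                  hits.append((x, y))
          return hits

      print(report_points_in_rectangle([(0, 0, 1, 0.5)], (0.5, 2.0, 0.0, 1.0), t=1.0))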

  20. Critical Points of Contact

    DEFF Research Database (Denmark)

    Jensen, Ole B.; Wind, Simon; Lanng, Ditte Bendix

    2012-01-01

    In this brief article, we shall illustrate the application of the analytical and interventionist concept of ‘Critical Points of Contact’ (CPC) through a number of urban design studios. The notion of CPC has been developed over a span of the last three to four years and is reported in more detail...... elsewhere (Jensen & Morelli 2011). In this article, we will only discuss the conceptual and theoretical framing superficially, since our real interest is to show and discuss the concept's application value to spatial design in a number of urban design studios. The 'data' or the projects presented are seven...... in urban design at Aalborg University, where urban design consists of both an analytical and an interventionist field of operation. Furthermore, the content of the CPC concept links to research in mobilities, the network city, and urban design. These are among the core pillars of both the masters programme...

  1. Beginning SharePoint 2010 Development

    CERN Document Server

    Fox, Steve

    2010-01-01

    Discover how to take advantage of the many new features in SharePoint 2010. SharePoint provides content management (enterprise content management, Web content management, records management, and more), workflow, and social media features, and the new version boasts enhanced capabilities. This introductory-level book walks you through the process of learning, developing, and deploying SharePoint 2010 solutions. You'll leverage your existing skills and tools to grasp the fundamental programming concepts and practices of SharePoint 2010. The author clearly explains how to develop your first appli

  2. The power of PowerPoint.

    Science.gov (United States)

    Niamtu, J

    2001-08-01

    Carousel slide presentations have been used for academic and clinical presentations since the late 1950s. However, advances in computer technology have caused a paradigm shift, and digital presentations are quickly becoming standard for clinical presentations. The advantages of digital presentations include cost savings; portability; easy updating capability; Internet access; multimedia functions, such as animation, pictures, video, and sound; and customization to augment audience interest and attention. Microsoft PowerPoint has emerged as the most popular digital presentation software and is currently used by many practitioners with and without significant computer expertise. The user-friendly platform of PowerPoint enables even the novice presenter to incorporate digital presentations into his or her profession. PowerPoint offers many advanced options that, with a minimal investment of time, can be used to create more interactive and professional presentations for lectures, patient education, and marketing. Examples of advanced PowerPoint applications are presented in a stepwise manner to unveil the full power of PowerPoint. By incorporating these techniques, medical practitioners can easily personalize, customize, and enhance their PowerPoint presentations. Complications, pitfalls, and caveats are discussed to detour and prevent misadventures in digital presentations. Relevant Web sites are listed to further update, customize, and communicate PowerPoint techniques.

  3. Dew point vs bubble point : a misunderstood constraint on gravity drainage processes

    Energy Technology Data Exchange (ETDEWEB)

    Nenninger, J. [N-Solv Corp., Calgary, AB (Canada); Gunnewiek, L. [Hatch Ltd., Mississauga, ON (Canada)

    2009-07-01

    This study demonstrated that gravity drainage processes that use blended fluids such as solvents have an inherently unstable material balance due to differences between dew point and bubble point compositions. The instability can lead to the accumulation of volatile components within the chamber, and impair mass and heat transfer processes. Case studies were used to demonstrate the large temperature gradients within the vapour chamber caused by temperature differences between the bubble point and dew point for blended fluids. A review of published data showed that many experiments on in-situ processes do not account for unstable material balances caused by a lack of steam trap control. A study of temperature profiles during steam assisted gravity drainage (SAGD) studies showed significant temperature depressions caused by methane accumulations at the outside perimeter of the steam chamber. It was demonstrated that the condensation of large volumes of purified solvents provided an efficient mechanism for the removal of methane from the chamber. It was concluded that gravity drainage processes can be optimized by using pure propane during the injection process. 22 refs., 1 tab., 18 figs.

  4. Geodetic Control Points - Multi-State Control Point Database

    Data.gov (United States)

    NSGIC State | GIS Inventory — The Multi-State Control Point Database (MCPD) is a database of geodetic and mapping control covering Idaho and Montana. The control points were submitted by registered land...

  5. Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud

    Science.gov (United States)

    Yao, C.; Zhang, X.; Liu, H.

    2017-09-01

    The application of LiDAR data in forestry initially focused on mapping forest communities, primarily for large-scale forest management and planning. With smaller-footprint, higher-sampling-density LiDAR data now available, detecting individual overstory trees, estimating crown parameters and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking palm trees as an example. The section-based method detects objects through profiles taken along different directions, basically along the X-axis or Y-axis, and improves the utilization of spatial information to generate accurate results. Firstly, the tree points are separated from man-made-object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Then key points are calculated and extracted to locate individual trees, and specific tree parameters related to species information, such as crown height, crown radius and cross point, are estimated. Finally, with these parameters certain tree species can be identified. Compared to species information measured on the ground, the proportion of correctly identified trees on all plots reaches up to 90.65 %. The identification results in this research demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables trees to be classified into different classes.
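
    The CHM step can be sketched in a few lines; the arrays below stand in for rasters that would normally be read from files, and the negative-value clamping is an assumption of this illustration.

      # Sketch: Crown Height Model as the difference between the digital surface
      # model and the digital terrain model. Arrays stand in for rasters that
      # would normally be read from files (e.g. with rasterio).
      import numpy as np

      def crown_height_model(dsm, dtm, nodata=np.nan):
          chm = dsm - dtm
          chm[chm < 0] = 0.0                            # clamp small negative artefacts
          chm[np.isnan(dsm) | np.isnan(dtm)] = nodata   # propagate missing cells
          return chm

      dsm = np.array([[105.2, 110.7], [103.1, 104.0]])
      dtm = np.array([[100.0, 100.5], [103.3, 101.0]])
      print(crown_height_model(dsm, dtm))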

  6. Solving discrete zero point problems

    NARCIS (Netherlands)

    van der Laan, G.; Talman, A.J.J.; Yang, Z.F.

    2004-01-01

    In this paper an algorithm is proposed to find a discrete zero point of a function on the collection of integral points in the n-dimensional Euclidean space IRn. Starting with a given integral point, the algorithm generates a finite sequence of adjacent integral simplices of varying dimension and

  7. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Jianhua Ni

    2016-08-01

    Full Text Available The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between the road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing correlations between different categories of hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonableness of existing urban healthcare facility distribution and optimize the location of new healthcare facilities.

  8. Micro Coronal Bright Points Observed in the Quiet Magnetic Network by SOHO/EIT

    Science.gov (United States)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.

    1997-01-01

    When one looks at SOHO/EIT Fe XII images of quiet regions, one can see the conventional coronal bright points (> 10 arcsec in diameter), but one will also notice many smaller faint enhancements in brightness (Figure 1). Do these micro coronal bright points belong to the same family as the conventional bright points? To investigate this question we compared SOHO/EIT Fe XII images with Kitt Peak magnetograms to determine whether the micro bright points are in the magnetic network and mark magnetic bipoles within the network. To identify the coronal bright points, we applied a picture frame filter to the Fe XII images; this brings out the Fe XII network and bright points (Figure 2) and allows us to study the bright points down to the resolution limit of the SOHO/EIT instrument. This picture frame filter is a square smoothing function (half a network cell wide) with a central square (a quarter of a network cell wide) removed, so that a bright point's intensity does not affect its own background. This smoothing function is applied to the full disk image. Then we divide the original image by the smoothed image to obtain our filtered image. A bright point is defined as any contiguous set of pixels (including diagonally) which have enhancements of 30% or more above the background; a micro bright point is any bright point 16 pixels or smaller in size. We then analyzed the bright points that were fully within quiet regions (0.6 x 0.6 solar radius) centered on disk center on six different days.
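
    A possible implementation of the described picture frame filter is sketched below; the window sizes in pixels are assumptions (they depend on the EIT plate scale and the adopted network-cell size), but the structure follows the description: a half-cell-wide square mean with a quarter-cell-wide central square removed, division of the original image by that background, and a 30% enhancement threshold.

      # Sketch of the described picture-frame filter. Window sizes in pixels are
      # assumptions; the background is the mean over an outer square with the
      # inner square removed, and bright points are pixels >= 30% above it.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def picture_frame_filter(image, outer=15, inner=7):
          outer_sum = uniform_filter(image, size=outer, mode="nearest") * outer**2
          inner_sum = uniform_filter(image, size=inner, mode="nearest") * inner**2
          background = (outer_sum - inner_sum) / (outer**2 - inner**2)
          return image / background

      image = np.random.rand(256, 256) + 1.0       # synthetic, strictly positive stand-in
      filtered = picture_frame_filter(image)
      bright_point_mask = filtered >= 1.3          # 30% enhancement threshold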

  9. Accounting for professionalism: an innovative point system to assess resident professionalism

    Directory of Open Access Journals (Sweden)

    Gary L. Malakoff

    2014-04-01

    Full Text Available Background: Professionalism is a core competency for residency required by the Accreditation Council for Graduate Medical Education. We sought a means to objectively assess professionalism among internal medicine and transitional year residents. Innovation: We established a point system to document unprofessional behaviors demonstrated by internal medicine and transitional year residents, along with opportunities to redeem such negative points by deliberate positive professional acts. The intent of the policy is to assist residents in becoming aware of what constitutes unprofessional behavior and to provide opportunities for remediation by accruing positive points. A committee of core faculty and department leadership, including the program director and clinic nurse manager, determines the professionalism points assigned. Negative points may be assigned for tardiness to mandatory or volunteered-for events without a valid excuse, late evaluations or other paperwork required by the department, non-attendance at meetings prepaid by the department, and inappropriate use of personal days or leave. Examples of actions through which positive points can be gained to erase negative points include delivery of a mentored pre-conference talk, noon conference, medical student case/shelf review session, or a written reflection. Results: Between 2009 and 2012, 83 residents have trained in our program. Seventeen categorical internal medicine and two transitional year residents have been assigned points. A total of 55 negative points have been assigned and 19 points have been remediated. There appears to be a trend of fewer negative points and more positive points being assigned over each of the past three academic years. Conclusion: Commitment to personal professional behavior is a lifelong process that residents must embrace during their training. A professionalism policy, which employs a point system, has been instituted in our programs and may be a novel tool to

  10. Dew Point

    OpenAIRE

    Goldsmith, Shelly

    1999-01-01

    Dew Point was a solo exhibition originating at PriceWaterhouseCoopers Headquarters Gallery, London, UK and toured to the Centre de Documentacio i Museu Textil, Terrassa, Spain and Gallery Aoyama, Tokyo, Japan.

  11. On the co-alignment of solar telescopes. A new approach to solar pointing

    International Nuclear Information System (INIS)

    Staiger, J

    2013-01-01

    Helioseismological measurements require long observing times and thus may be adversely affected by lateral image drifts as caused by pointing instabilities. At the Vacuum Tower Telescope VTT, Tenerife we have recorded drift values of up to 5'' per hour under unstable thermal conditions (dome opening, strong day-to-day thermal gradients). Typically drifts of 0.5'' – 1.0'' per hour may be encountered under more favorable conditions. Past experience has shown that most high-resolution solar telescopes may be affected by this problem to some degree. This inherent shortcoming of solar pointing is caused by the fact that the guiding loop can be closed only within the guiding beam but not within the telescope's main beam. We have developed a new approach to this problem. We correlate continuum brightness patterns observed from within the telescope main beam with patterns originating from a full disk telescope. We show that brightness patterns of sufficient size are unique with respect to solar location at any instant of time and may serve as a location identifier. We make use of the fact that averaged location information of solar structures is invariant with respect to telescope resolution. We have carried out tests at the VTT together with SDO. We have used SDO as a full disk reference. We were able to reduce lateral image drifts by an order of magnitude.

  12. SharePoint 2010 Six-in-One

    CERN Document Server

    Geier, Chris; Bertram, Becky

    2011-01-01

    A team of SharePoint authorities addresses the six most essential areas of SharePoint 2010. SharePoint enables Web sites to host shared workspaces and is a leading solution for Enterprise Content Management. This book serves as one-stop shopping for concise coverage on six key areas that you need to know in order to get up and running with SharePoint 2010 quickly. After an introduction to the new features of SharePoint 2010, the author team of SharePoint experts walk you through branding and customization, workflow, business connectivity services, social networking and tools, the search functi

  13. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition

    Directory of Open Access Journals (Sweden)

    Abdulla Al-Rawabdeh

    2016-01-01

    Full Text Available Landslides often cause economic losses, property damage, and loss of lives. Monitoring landslides using high spatial and temporal resolution imagery and the ability to quickly identify landslide regions are the basis for emergency disaster management. This study presents a comprehensive system that uses unmanned aerial vehicles (UAVs) and Semi-Global dense Matching (SGM) techniques to identify and extract landslide scarp data. The selected study area is located along a major highway in a mountainous region in Jordan, and contains creeping landslides induced by heavy rainfall. Field observations across the slope body and a deformation analysis along the highway and existing gabions indicate that the slope is active and that scarp features across the slope will continue to open and develop new tension crack features, leading to the downward movement of rocks. The identification of landslide scarps in this study was performed via a dense 3D point cloud of topographic information generated from high-resolution images captured using a low-cost UAV and a target-based camera calibration procedure for a low-cost large-field-of-view camera. An automated approach was used to accurately detect and extract the landslide head scarps based on geomorphological factors: the ratios of normalized eigenvalues (i.e., λ1 ≥ λ2 ≥ λ3) derived using principal component analysis, topographic surface roughness index values, and local-neighborhood slope measurements from the 3D image-based point cloud. Validation of the results was performed using root mean square error analysis and a confusion (error) matrix between manually digitized landslide scarps and the automated approaches. The experimental results using the fully automated 3D point-based analysis algorithms show that these approaches can effectively distinguish landslide scarps. The proposed algorithms can accurately identify and extract landslide scarps with centimeter-scale accuracy. In addition, the combination
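
    One ingredient of the automated criteria, per-point normalized eigenvalues from principal component analysis of local neighbourhoods, can be sketched as below; the neighbourhood size is an assumption and the actual scarp-classification thresholds of the study are not reproduced.

      # Sketch: per-point normalised eigenvalues from PCA on local neighbourhoods,
      # one ingredient of the scarp-detection criteria. Neighbourhood size k is an
      # assumption; classification thresholds are not reproduced here.
      import numpy as np
      from scipy.spatial import cKDTree

      def local_eigenvalues(points, k=20):
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k)
          eigvals = np.empty((len(points), 3))
          for i, neighbours in enumerate(idx):
              nbr = points[neighbours]
              cov = np.cov(nbr - nbr.mean(axis=0), rowvar=False)
              w = np.sort(np.linalg.eigvalsh(cov))[::-1]    # lambda1 >= lambda2 >= lambda3
              eigvals[i] = w / w.sum()                      # normalise
          return eigvals

      cloud = np.random.rand(500, 3)                        # synthetic stand-in point cloud
      lam = local_eigenvalues(cloud)
      print(lam[:3])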

  14. Considerations for Achieving Cross-Platform Point Cloud Data Fusion across Different Dryland Ecosystem Structural States.

    Science.gov (United States)

    Swetnam, Tyson L; Gillan, Jeffrey K; Sankey, Temuulen T; McClaran, Mitchel P; Nichols, Mary H; Heilman, Philip; McVay, Jason

    2017-01-01

    Remotely sensing recent growth, herbivory, or disturbance of herbaceous and woody vegetation in dryland ecosystems requires high spatial resolution and multi-temporal depth. Three dimensional (3D) remote sensing technologies like lidar, and techniques like structure from motion (SfM) photogrammetry, each have strengths and weaknesses at detecting vegetation volume and extent, given the instrument's ground sample distance and ease of acquisition. Yet, a combination of platforms and techniques might provide solutions that overcome the weakness of a single platform. To explore the potential for combining platforms, we compared detection bias amongst two 3D remote sensing techniques (lidar and SfM) using three different platforms [ground-based, small unmanned aerial systems (sUAS), and manned aircraft]. We found aerial lidar to be more accurate for characterizing the bare earth (ground) in dense herbaceous vegetation than either terrestrial lidar or aerial SfM photogrammetry. Conversely, the manned aerial lidar did not detect grass and fine woody vegetation while the terrestrial lidar and high resolution near-distance (ground and sUAS) SfM photogrammetry detected these and were accurate. UAS SfM photogrammetry at lower spatial resolution under-estimated maximum heights in grass and shrubs. UAS and handheld SfM photogrammetry in near-distance high resolution collections had similar accuracy to terrestrial lidar for vegetation, but difficulty at measuring bare earth elevation beneath dense herbaceous cover. Combining point cloud data and derivatives (i.e., meshes and rasters) from two or more platforms allowed for more accurate measurement of herbaceous and woody vegetation (height and canopy cover) than any single technique alone. Availability and costs of manned aircraft lidar collection preclude high frequency repeatability but this is less limiting for terrestrial lidar, sUAS and handheld SfM. The post-processing of SfM photogrammetry data became the limiting factor

  15. Pointing stability of Hinode and requirements for the next Solar mission Solar-C

    Science.gov (United States)

    Katsukawa, Y.; Masada, Y.; Shimizu, T.; Sakai, S.; Ichimoto, K.

    2017-11-01

    It is essential to achieve fine pointing stability in a space mission aiming for high-resolution observations. For the future Japanese solar mission SOLAR-C, a successor of the HINODE (SOLAR-B) mission, we set targets of angular resolution better than 0.1 arcsec in visible light and better than 0.2 - 0.5 arcsec in EUV and X-rays. These resolutions are twice to five times better than those of the corresponding instruments onboard HINODE. To identify critical items for achieving the pointing stability requirements of SOLAR-C, we assessed the in-flight pointing stability of HINODE, which achieved the highest pointing stability among Japanese space missions. We found that one of the critical items that has to be improved for SOLAR-C is the attitude stability near the upper limit of the frequency range of the attitude control system. A stability of 0.1 arcsec (3σ) is required for the EUV and X-ray telescopes of SOLAR-C, while the HINODE performance is slightly worse than this requirement. The visible light telescope of HINODE is equipped with an image stabilization system inside the telescope, which achieved a stability of 0.03 arcsec (3σ) by suppressing the attitude jitter in the frequency range below 10 Hz. For further improvement, it will be necessary to suppress disturbances induced by resonances of the telescope structures and by momentum wheels and mechanical gyros in the frequency range above 100 Hz.

  16. Pair of Exceptional Points in a Microdisk Cavity under an Extremely Weak Deformation

    Science.gov (United States)

    Yi, Chang-Hwan; Kullig, Julius; Wiersig, Jan

    2018-03-01

    One of the interesting features of open quantum and wave systems is the non-Hermitian degeneracy called an exceptional point, where not only energy levels but also the corresponding eigenstates coalesce. We demonstrate that such a degeneracy can appear in optical microdisk cavities by deforming the boundary extremely weakly. This surprising finding is explained by a semiclassical theory of dynamical tunneling. It is shown that the exceptional points come in nearly degenerated pairs, originating from the different symmetry classes of modes. A spatially local chirality of modes at the exceptional point is related to vortex structures of the Poynting vector.
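
    The eigenvalue coalescence that defines an exceptional point can be illustrated with a generic two-level non-Hermitian toy model (not the microdisk calculation of the paper), as in the sketch below.

      # Sketch: a generic two-level non-Hermitian toy model showing eigenvalue
      # coalescence at an exceptional point. This is an illustration of the
      # concept, not the microdisk calculation of the paper.
      import numpy as np

      def eigenvalues(g, gamma=1.0):
          # H = [[ i*gamma, g ],
          #      [ g, -i*gamma ]] has eigenvalues +/- sqrt(g**2 - gamma**2)
          H = np.array([[1j * gamma, g], [g, -1j * gamma]])
          return np.linalg.eigvals(H)

      for g in (0.5, 1.0, 1.5):   # below, at, and above the exceptional point g = gamma
          print(g, np.round(eigenvalues(g), 4))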

  17. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
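
    The general recipe behind such a generator, drawing a random parameter field whose mean and standard deviation match target 1-point statistics and whose auto-correlation matches target 2-point statistics, can be sketched with a Cholesky factor of the target covariance; the exponential correlation form below is an assumption and the code is not the paper's rupture-model generator.

      # Sketch: drawing a 1-D random field of a source parameter with prescribed
      # 1-point statistics (mean, standard deviation) and 2-point statistics
      # (an assumed exponential auto-correlation), via a Cholesky factor of the
      # target covariance matrix.
      import numpy as np

      def correlated_field(n, dx, corr_len, mean, std, rng):
          x = np.arange(n) * dx
          corr = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # target 2-point statistics
          L = np.linalg.cholesky(corr + 1e-10 * np.eye(n))             # small jitter for stability
          return mean + std * (L @ rng.standard_normal(n))

      rng = np.random.default_rng(0)
      slip = correlated_field(n=200, dx=0.5, corr_len=5.0, mean=1.0, std=0.3, rng=rng)
      print(slip.mean(), slip.std())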

  18. A new PET detector concept for compact preclinical high-resolution hybrid MR-PET

    Science.gov (United States)

    Berneking, Arne; Gola, Alberto; Ferri, Alessandro; Finster, Felix; Rucatti, Daniele; Paternoster, Giovanni; Jon Shah, N.; Piemonte, Claudio; Lerche, Christoph

    2018-04-01

    This work presents a new PET detector concept for compact preclinical hybrid MR-PET. The detector concept is based on a Linearly-Graded SiPM produced with current FBK RGB-HD technology. A 7.75 mm x 7.75 mm sensor chip is coupled with optical grease to a black-coated, 8 mm x 8 mm, 3 mm thick monolithic LYSO crystal. The readout is obtained from four readout channels with linear encoding based on integrated resistors and the Center of Gravity approach. To characterize the new detector concept, the spatial and energy resolutions were measured. For this purpose, the measurement setup directed a collimated beam at 25 different points perpendicular to the monolithic scintillator crystal. Starting at the center point of the crystal at 0 mm / 0 mm and sampling a grid with a pitch of 1.75 mm, all significant points of the detector were covered by the collimated beam. The measured intrinsic spatial resolution (FWHM) was 0.74 ± 0.01 mm in the x- and 0.69 ± 0.01 mm in the y-direction at the center of the detector. At the same point, the measured energy resolution (FWHM) was 13.01 ± 0.05 %. The mean intrinsic spatial resolution (FWHM) over the whole detector was 0.80 ± 0.28 mm in the x- and 0.72 ± 0.19 mm in the y-direction. The energy resolution (FWHM) of the detector was between 13 and 17.3 %, with an average energy resolution of 15.7 ± 1.0 %. Due to the reduced thickness, the sensitivity of this gamma detector is low, but still higher than that of pixelated designs of the same thickness, owing to the monolithic crystal. Combining compact design, high spatial resolution, and high sensitivity, the detector concept is particularly suitable for applications where the scanner bore size is limited and high resolution is required, as is the case in small-animal hybrid MR-PET.
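
    The Center of Gravity position estimate from four readout channels can be sketched as below; the corner assignment and the scaling to millimetres are assumptions for illustration and do not reproduce the linearly graded encoding of the actual sensor.

      # Sketch: centre-of-gravity (Anger-type) position estimate from four
      # readout channels. Corner assignment and millimetre scaling are assumed.
      def centre_of_gravity(a, b, c, d, half_width_mm=3.875):
          """a, b, c, d: signals from the four corners
          (a: -x/-y, b: +x/-y, c: -x/+y, d: +x/+y)."""
          total = a + b + c + d
          x = half_width_mm * ((b + d) - (a + c)) / total
          y = half_width_mm * ((c + d) - (a + b)) / total
          energy = total          # the summed signal scales with deposited energy
          return x, y, energy

      print(centre_of_gravity(0.2, 0.3, 0.2, 0.3))   # event shifted towards +x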

  19. Geomorphic tipping points: convenient metaphor or fundamental landscape property?

    Science.gov (United States)

    Lane, Stuart

    2016-04-01

    In 2000 Malcolm Gladwell published a book that has done much to publicise Tipping Points in society but also in academia. His arguments, re-expressed in a geomorphic sense, have three core elements: (1) a "Law of the Few", where rapid change results from the effects of a relatively restricted number of critical elements, ones that are able to rapidly connect systems together, that are particularly sensitive to an external force, or that are spatially organised in a particular way; (2) a "Stickiness", where an element of the landscape is able to assimilate characteristics which make it progressively more applicable to the "Law of the Few"; and (3), given (1) and (2), a history and a geography that mean that the same force can have dramatically different effects, according to where and when it occurs. Expressed in this way, it is not clear that Tipping Points bring much to our understanding in geomorphology that existing concepts (e.g. landscape sensitivity and recovery; cusp-catastrophe theory; non-linear dynamical systems) do not already provide. It may also be all too easy to describe change in geomorphology as involving a Tipping Point: we know that geomorphic processes often involve a non-linear response above a certain critical threshold; we know that landscapes can, after Denys Brunsden, be thought of as involving long periods of boredom ("stability") interspersed with brief moments of terror ("change"); but these are not, after Gladwell, sufficient for the term Tipping Point to apply. Following from these issues, this talk will address three themes. First, it will question, through reference to specific examples, notably in high Alpine systems, the extent to which the Tipping Point analogy is truly a property of the world in which we live. Second, it will explore how 'tipping points' become assigned metaphorically, sometimes evolving to the point that they themselves gain agency, that is, shaping the way we interpret landscape rather than vice versa. Third, I

  20. SharePoint 2007 Collaboration For Dummies

    CERN Document Server

    Harvey, Greg

    2009-01-01

    If you're looking for a way to help your teams access what they need to know, work together, and get the job done, SharePoint can do just that. SharePoint 2007 Collaboration For Dummies shows you the easiest way to set up and customize SharePoint, manage your data, interact using SharePoint blogs and wikis, integrate Office programs, and make your office more productive. You'll learn what SharePoint can do and how to make it work for your business, understand the technical terms, and enable your people to collaborate on documents and spreadsheets. You'll even discover how to get SharePoint hel

  1. Attention flexibly trades off across points in time.

    Science.gov (United States)

    Denison, Rachel N; Heeger, David J; Carrasco, Marisa

    2017-08-01

    Sensory signals continuously enter the brain, raising the question of how perceptual systems handle this constant flow of input. Attention to an anticipated point in time can prioritize visual information at that time. However, how we voluntarily attend across time when there are successive task-relevant stimuli has been barely investigated. We developed a novel experimental protocol that allowed us to assess, for the first time, both the benefits and costs of voluntary temporal attention when perceiving a short sequence of two or three visual targets with predictable timing. We found that when humans directed attention to a cued point in time, their ability to perceive orientation was better at that time but also worse earlier and later. These perceptual tradeoffs across time are analogous to those found across space for spatial attention. We concluded that voluntary attention is limited, and selective, across time.

  2. Professional SharePoint 2013 development

    CERN Document Server

    Alirezaei, Reza; Ranlett, Matt; Hillier, Scot; Wilson, Brian; Fried, Jeff; Swider, Paul

    2013-01-01

    Thorough coverage of development in SharePoint 2013. A team of well-known Microsoft MVPs joins forces in this fully updated resource, providing you with in-depth coverage of development tools in the latest iteration of the immensely popular SharePoint. From building solutions to building custom workflow and content management applications, this book shares field-tested best practices on all aspects of SharePoint 2013 development. Offers a thorough look at Windows Azure and SharePoint 2013. Includes new chapters on Application Life Cycle Management, developing apps in ShareP

  3. Validation of novel calibration scheme with traceable point-like (22)Na sources on six types of PET scanners.

    Science.gov (United States)

    Hasegawa, Tomoyuki; Oda, Keiichi; Wada, Yasuhiro; Sasaki, Toshiaki; Sato, Yasushi; Yamada, Takahiro; Matsumoto, Mikio; Murayama, Hideo; Kikuchi, Kei; Miyatake, Hiroki; Abe, Yutaka; Miwa, Kenta; Akimoto, Kenta; Wagatsuma, Kei

    2013-05-01

    To improve the reliability and convenience of the calibration procedure of positron emission tomography (PET) scanners, we have been developing a novel calibration path based on traceable point-like sources. When using (22)Na sources, special care should be taken to avoid the effects of 1.275-MeV γ rays accompanying β+ decays. The purpose of this study is to validate this new calibration scheme with traceable point-like (22)Na sources on various types of PET scanners. Traceable point-like (22)Na sources with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons were used. The tested PET scanners included a clinical whole-body PET scanner, four types of clinical PET/CT scanners from different manufacturers, and a small-animal PET scanner. The region of interest (ROI) diameter dependence of ROI values was represented with a fitting function, which was assumed to consist of a recovery part due to spatial resolution and a quadratic background part originating from the scattered γ rays. The observed ROI radius dependence was well represented with the assumed fitting function (R² > 0.994). The calibration factors determined using the point-like sources were consistent with those obtained by the standard cross-calibration method within an uncertainty of ±4 %, which was reasonable considering the uncertainty in the standard cross-calibration method. This novel calibration scheme based on the use of traceable (22)Na point-like sources was successfully validated for six types of commercial PET scanners.
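
    The described fit of ROI value versus ROI size can be sketched with a least-squares fit of a recovery term plus a quadratic background; the exponential form of the recovery term and the synthetic data below are assumptions, since the abstract does not give the exact functional form.

      # Sketch: fitting ROI value versus ROI radius with a recovery term plus a
      # quadratic background, as described qualitatively in the abstract. The
      # exponential recovery form and the synthetic data are assumptions.
      import numpy as np
      from scipy.optimize import curve_fit

      def roi_model(r, a, r0, b):
          return a * (1.0 - np.exp(-r / r0)) + b * r**2   # recovery + quadratic background

      radius = np.linspace(2, 40, 20)                     # ROI radius in mm (assumed)
      values = roi_model(radius, 100.0, 5.0, 0.01)
      values = values + np.random.default_rng(1).normal(0, 0.5, radius.size)

      popt, pcov = curve_fit(roi_model, radius, values, p0=(90.0, 4.0, 0.0))
      print(popt)   # popt[0] approximates the fully recovered (calibration) value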

  4. One-dimensional gravity in infinite point distributions

    Science.gov (United States)

    Gabrielli, A.; Joyce, M.; Sicard, F.

    2009-10-01

    The dynamics of infinite asymptotically uniform distributions of purely self-gravitating particles in one spatial dimension provides a simple and interesting toy model for the analogous three dimensional problem treated in cosmology. In this paper we focus on a limitation of such models as they have been treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e., the definition requires explicitly the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by “Jeans swindle” for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling of the Jeans swindle in three dimensions, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show explicitly that this prescription gives a well defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N -body simulations. For identical particles the dynamics of the simplest toy model (without expansion) is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss and compare with previous results in the literature and present new results for the specific case of this simplest (static) model starting from “shuffled lattice” initial conditions. These show qualitative properties of the evolution (notably its “self-similarity”) like those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe.

  5. Photoacoustic Point Source

    International Nuclear Information System (INIS)

    Calasso, Irio G.; Craig, Walter; Diebold, Gerald J.

    2001-01-01

    We investigate the photoacoustic effect generated by heat deposition at a point in space in an inviscid fluid. Delta-function and long Gaussian optical pulses are used as sources in the wave equation for the displacement potential to determine the fluid motion. The linear sound-generation mechanism gives bipolar photoacoustic waves, whereas the nonlinear mechanism produces asymmetric tripolar waves. The salient features of the photoacoustic point source are that rapid heat deposition and nonlinear thermal expansion dominate the production of ultrasound

  6. The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP

    CERN Document Server

    Gigi, Dominique; Behrens, Ulf; Branson, James; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; da Silva Gomes, Diego; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gladki, Maciej; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre; Janulis, Mindaugas; Lettrich, Michael; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius K; Morovic, Srecko; O'Dell, Vivian; Orn, Samuel Johan; Orsini, Luciano; Papakrivopoulos, Ioannis; Paus, Christoph; Petrova, Petia; Petrucci, Andrea; Pieri, Marco; Rabady, Dinyar; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Vazquez Velez, Cristina; Vougioukas, Michail; Zejdl, Petr

    2017-01-01

    In order to accommodate new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4x10Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from operation.

  7. On estimation of the intensity function of a point process

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2010-01-01

    Abstract. Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and
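
    The record above is truncated; as a minimal illustration of the kernel point of view it adopts, the sketch below computes an ordinary fixed-bandwidth Gaussian kernel estimate of the intensity of a planar point pattern. It does not implement the adaptive Delaunay tessellation field estimator discussed in the paper, and the bandwidth, window and synthetic pattern are assumptions.

    import numpy as np

    def kernel_intensity(points, grid_x, grid_y, bandwidth=0.05):
        """Fixed-bandwidth Gaussian kernel estimate of a 2D intensity function."""
        gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
        lam = np.zeros_like(gx)
        norm = 1.0 / (2.0 * np.pi * bandwidth ** 2)
        for px, py in points:
            lam += norm * np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * bandwidth ** 2))
        return lam  # no edge correction; kernel mass leaking outside the window is ignored

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 1, size=(200, 2))      # synthetic pattern in the unit square
    grid = np.linspace(0, 1, 101)
    lam_hat = kernel_intensity(pts, grid, grid)
    print(lam_hat.mean())                       # roughly the 200 points per unit area, minus edge leakage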

  8. Making Sense of Boiling Points and Melting Points

    Indian Academy of Sciences (India)

    The boiling and melting points of a pure substance are char- ... bonds, which involves high energy and hence high temperatures. Among the ... with zero intermolecular force at all temperatures and pressures, which ...

  9. Calorimetry end-point predictions

    International Nuclear Information System (INIS)

    Fox, M.A.

    1981-01-01

    This paper describes a portion of the work presently in progress at Rocky Flats in the field of calorimetry. In particular, calorimetry end-point predictions are outlined. The problems associated with end-point predictions and the progress made in overcoming these obstacles are discussed. The two major problems, noise and an accurate description of the heat function, are dealt with to obtain the most accurate results. Data are taken from an actual calorimeter and are processed by means of three different noise reduction techniques. The processed data are then utilized by one to four algorithms, depending on the accuracy desired, to determine the end-point.
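
    One common way to pose the end-point prediction problem described above is to assume a parametric form for the heat function, fit it to the noisy early portion of the run, and read off the asymptote. The sketch below does this for an assumed single-exponential approach to equilibrium; the model form, noise level and parameter values are illustrative only and are not taken from the Rocky Flats work.

    import numpy as np
    from scipy.optimize import curve_fit

    def heat_response(t, p_end, tau):
        """Assumed single-exponential approach to the calorimeter end-point."""
        return p_end * (1.0 - np.exp(-t / tau))

    rng = np.random.default_rng(2)
    t = np.arange(0.0, 120.0, 1.0)                    # minutes into the run
    true_end, true_tau, noise = 0.500, 45.0, 0.005    # watts, minutes (illustrative)
    y = heat_response(t, true_end, true_tau) + noise * rng.standard_normal(t.size)

    # Predict the end-point from only the first 60 minutes of data.
    early = t < 60
    (p_end_hat, tau_hat), _ = curve_fit(heat_response, t[early], y[early], p0=(0.4, 30.0))
    print(p_end_hat, tau_hat)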

  10. Harmful Algal Bloom Characterization at Ultra-High Spatial and Temporal Resolution Using Small Unmanned Aircraft Systems

    Directory of Open Access Journals (Sweden)

    Deon Van der Merwe

    2015-03-01

    Full Text Available Harmful algal blooms (HABs) degrade water quality and produce toxins. The spatial distribution of HABs may change rapidly due to variations in wind, water currents, and population dynamics. Risk assessments based on traditional sampling methods are hampered by the sparseness of water sample data points and by delays between sampling and the availability of results. There is a need for local risk assessment and risk management at the spatial and temporal resolution relevant to local human and animal interactions at specific sites and times. Small unmanned aircraft systems can gather color-infrared reflectance data at appropriate spatial and temporal resolutions, with full control over data collection timing, and short intervals between data gathering and result availability. Data can be interpreted qualitatively, or by generating a blue normalized difference vegetation index (BNDVI) that is correlated with cyanobacterial biomass densities at the water surface, as estimated using a buoyant packed cell volume (BPCV). Correlations between BNDVI and BPCV follow a logarithmic model, with r2-values under field conditions from 0.77 to 0.87. These methods provide valuable information that is complementary to data derived from traditional risk assessment methods, and could help to improve risk management at the local level.
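
    The blue normalized difference vegetation index mentioned above is a simple per-pixel band ratio, and the logarithmic BNDVI-BPCV relationship can be fitted by ordinary least squares once calibration samples are available. The sketch below assumes the blue and near-infrared bands arrive as reflectance arrays; the calibration values are placeholders, not data from the study.

    import numpy as np

    def bndvi(nir, blue, eps=1e-9):
        """Blue normalized difference vegetation index, computed per pixel."""
        nir = np.asarray(nir, dtype=float)
        blue = np.asarray(blue, dtype=float)
        return (nir - blue) / (nir + blue + eps)

    rng = np.random.default_rng(7)
    blue_band = rng.uniform(0.02, 0.10, size=(50, 50))   # synthetic reflectance rasters
    nir_band = rng.uniform(0.05, 0.40, size=(50, 50))
    index_map = bndvi(nir_band, blue_band)

    # Hypothetical calibration samples: surface BNDVI vs. measured BPCV (mL/L).
    bndvi_samples = np.array([0.05, 0.12, 0.21, 0.30, 0.38])
    bpcv_samples = np.array([0.4, 1.1, 3.0, 7.5, 18.0])

    # Logarithmic model BNDVI = a*ln(BPCV) + b, fitted by least squares.
    a, b = np.polyfit(np.log(bpcv_samples), bndvi_samples, 1)
    fitted = a * np.log(bpcv_samples) + b
    r2 = np.corrcoef(fitted, bndvi_samples)[0, 1] ** 2
    print(index_map.mean(), a, b, r2)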

  11. 18 CFR 157.211 - Delivery points.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Delivery points. 157... for Certain Transactions and Abandonment § 157.211 Delivery points. (a) Construction and operation—(1... delivery point, excluding the construction of certain delivery points subject to the prior notice...

  12. Wolf Point Substation, Roosevelt County, Montana

    International Nuclear Information System (INIS)

    1991-05-01

    The Western Area Power Administration (Western), an agency of the United States Department of Energy, is proposing to construct the 115-kV Wolf Point Substation near Wolf Point in Roosevelt County, Montana (Figure 1). As part of the construction project, Western's existing Wolf Point Substation would be taken out of service. The existing 115-kV Wolf Point Substation is located approximately 3 miles west of Wolf Point, Montana (Figure 2). The substation was constructed in 1949. The existing Wolf Point Substation serves as a "Switching Station" for the 115-kV transmission in the region. The need for substation improvements is based on operational and reliability issues. For this environmental assessment (EA), the environmental review of the proposed project took into account the removal of the old Wolf Point Substation, rerouting of the five Western lines and four lines from the Cooperatives and Montana-Dakota Utilities Company, and the new road into the proposed substation. Reference to the new proposed Wolf Point Substation in the EA includes these facilities as well as the old substation site. The environmental review looked at the impacts to all resource areas in the Wolf Point area. 7 refs., 6 figs

  13. Equilibrium points of the tilted perfect fluid Bianchi VIh state space

    Science.gov (United States)

    Apostolopoulos, Pantelis S.

    2005-05-01

    We present the full set of evolution equations for the spatially homogeneous cosmologies of type VIh filled with a tilted perfect fluid, and we provide the corresponding equilibrium points of the resulting dynamical state space. It is found that a self-similar solution exists only when the group parameter satisfies h > -1. In particular, we show that for h > -1/9 there exists a self-similar equilibrium point provided that γ ∈ (2(3+√(-h))/(5+3√(-h)), 3/2), whereas for h ... VIh.

  14. Acquisition, tracking, and pointing III; Proceedings of the Meeting, Orlando, FL, Mar. 27-29, 1989

    Science.gov (United States)

    Gowrinathan, Sankaran

    1989-09-01

    The present conference on components and sensors, image processing algorithms, and astronomical applications for pointing and tracking gives attention to a CCD daylight stellar sensor, an optical coordinate transfer assembly for precision boresight applications, a grating carousel mechanism for the HST high resolution spectrograph, an IR antiship-seeker simulator, line-of-sight stabilization using image motion compensation, the effects of illumination beam jitter on photodetection statistics, and the enhancement of armored vehicle fire control performance. Also discussed are active angular tracking with a photon-bucket, moving target estimation with autodyne detection, multiresolution object detection and segmentation, a beacon tracker and point-ahead system for optical communications, a precision-pointing mechanism for intersatellite optical communication, high-precision lunar tracking for laser ranging, multimirror beam control, and fundamental limits in the resolution of double-star targets.

  15. Fractional charge and inter-Landau-level states at points of singular curvature.

    Science.gov (United States)

    Biswas, Rudro R; Son, Dam Thanh

    2016-08-02

    The quest for universal properties of topological phases is fundamentally important because these signatures are robust to variations in system-specific details. Aspects of the response of quantum Hall states to smooth spatial curvature are well-studied, but challenging to observe experimentally. Here we go beyond this prevailing paradigm and obtain general results for the response of quantum Hall states to points of singular curvature in real space; such points may be readily experimentally actualized. We find, using continuum analytical methods, that the point of curvature binds an excess fractional charge and sequences of quantum states split away, energetically, from the degenerate bulk Landau levels. Importantly, these inter-Landau-level states are bound to the topological singularity and have energies that are universal functions of bulk parameters and the curvature. Our exact diagonalization of lattice tight-binding models on closed manifolds demonstrates that these results continue to hold even when lattice effects are significant. An important technological implication of these results is that these inter-Landau-level states, being both energetically and spatially isolated quantum states, are promising candidates for constructing qubits for quantum computation.

  16. WHO Standard Acupuncture Point Locations

    Directory of Open Access Journals (Sweden)

    Sabina Lim

    2010-01-01

    Full Text Available 'WHO Standard Acupuncture Point Locations in the Western Pacific Region' (WHO Standard) was released in 2008. Initially, there were 92/361 controversial acupuncture points (acupoints). Through seven informal consultations and four task force team meetings, 86 of the 92 controversial acupoints were agreed upon, leaving 6 remaining controversial acupoints that demand active research in the future. This will enhance the reproducibility and validity of acupuncture studies. It will also lead to a better understanding of acupuncture mechanisms in order to optimize its clinical efficacy for a range of diseases and syndromes. This book has two parts: General Guidelines for Acupuncture Point Locations and WHO Standard Acupuncture Point Locations. First of all, familiarity with the General Guidelines for Acupuncture Point Locations in this book can help the reader to understand and use the contents of this book in depth. I would like to thank all of the participating experts and scholars for this great work, who have overcome the limits of previous acupuncture references. I also appreciate the dedicated effort and harmonious leadership of Dr Choi Seung-hoon, former Regional Adviser in Traditional Medicine of the Western Pacific Office, WHO.

  17. SCAP-82, Single Scattering, Albedo Scattering, Point-Kernel Analysis in Complex Geometry

    International Nuclear Information System (INIS)

    Disney, R.K.; Vogtman, S.E.

    1987-01-01

    1 - Description of problem or function: SCAP solves for radiation transport in complex geometries using the single-scatter or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user-specified discrete scattering volume. Geometry is describable by zones bounded by intersecting quadratic surfaces, with an arbitrary maximum number of boundary surfaces per zone. Anisotropic point sources are describable as pointwise energy-dependent distributions of polar angles on a meridian; isotropic point sources may also be specified. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg, with a buildup factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as a distribution of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point. 2 - Method of solution: A point kernel method using an anisotropic or isotropic point source representation is used; line-of-sight material attenuation and inverse-square spatial attenuation are applied between the source point and the scatter points and between the scatter points and the detector point. A direct summation of individual point source results is obtained. 3 - Restrictions on the complexity of the problem: The SCAP program is written with completely flexible dimensioning, so that no restrictions are imposed on the number of energy groups or geometric zones. The geometric zone description is restricted to zones defined by boundary surfaces given by the general quadratic equation or one of its degenerate forms. The only restriction in the program is that the total
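
    The line-of-sight part of a point-kernel calculation of the kind described above reduces to exponential material attenuation, inverse-square geometric spreading and a buildup factor. The sketch below illustrates only that uncollided leg for an isotropic gamma point source; the source strength, attenuation coefficient and the very simple linear buildup form are assumptions for illustration and are unrelated to SCAP's data.

    import numpy as np

    def uncollided_flux(source_strength, distance, mu):
        """Uncollided line-of-sight flux: inverse-square spreading times exp(-mu*r)."""
        return source_strength * np.exp(-mu * distance) / (4.0 * np.pi * distance ** 2)

    def buildup_linear(mu, distance, a=1.0):
        """Very simple linear buildup factor B = 1 + a*mu*r (placeholder form)."""
        return 1.0 + a * mu * distance

    S = 1.0e9          # photons/s, illustrative isotropic point source
    mu = 0.06          # 1/cm, illustrative total attenuation coefficient
    for r in (50.0, 100.0, 200.0):   # detector distances in cm
        phi = uncollided_flux(S, r, mu) * buildup_linear(mu, r)
        print(r, phi)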

  18. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    Science.gov (United States)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
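
    The least-cost-path idea is straightforward to prototype outside the QGIS and CloudCompare implementations. The sketch below assumes a 2-D cost raster has already been derived from imagery (for example, low cost on dark fracture pixels) and uses scikit-image's route_through_array to trace the cheapest path between two manually digitised control points; the raster and control points are synthetic.

    import numpy as np
    from skimage.graph import route_through_array

    # Hypothetical cost raster: low cost along a curved "fracture", high elsewhere.
    cost = np.full((100, 100), 10.0)
    rows = np.arange(100)
    cols = (50 + 20 * np.sin(rows / 15.0)).astype(int)
    cost[rows, cols] = 1.0

    start, end = (0, cols[0]), (99, cols[-1])        # manually digitised control points
    path, total_cost = route_through_array(cost, start, end,
                                           fully_connected=True, geometric=True)
    print(len(path), total_cost)                     # pixel chain approximating the trace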

  19. Maintenance of equilibrium point control during an unexpectedly loaded rapid limb movement.

    Science.gov (United States)

    Simmons, R W; Richardson, C

    1984-06-08

    Two experiments investigated whether the equilibrium point hypothesis or the mass-spring model of motor control subserves positioning accuracy during spring loaded, rapid, bi-articulated movement. For intact preparations, the equilibrium point hypothesis predicts response accuracy to be determined by a mixture of afferent and efferent information, whereas the mass-spring model predicts positioning to be under a direct control system. Subjects completed a series of load-resisted training trials to a spatial target. The magnitude of a sustained spring load was unexpectedly increased on selected trials. Results indicated positioning accuracy and applied force varied with increases in load, which suggests that the original efferent commands are modified by afferent information during the movement as predicted by the equilibrium point hypothesis.

  20. Particle detector spatial resolution

    International Nuclear Information System (INIS)

    Perez-Mendez, V.

    1992-01-01

    Method and apparatus for producing separated columns of scintillation layer material, for use in detection of X-rays and high energy charged particles with improved spatial resolution is disclosed. A pattern of ridges or projections is formed on one surface of a substrate layer or in a thin polyimide layer, and the scintillation layer is grown at controlled temperature and growth rate on the ridge-containing material. The scintillation material preferentially forms cylinders or columns, separated by gaps conforming to the pattern of ridges, and these columns direct most of the light produced in the scintillation layer along individual columns for subsequent detection in a photodiode layer. The gaps may be filled with a light-absorbing material to further enhance the spatial resolution of the particle detector. 12 figs

  1. Spectrography analysis of stainless steel by the point to point technique

    International Nuclear Information System (INIS)

    Bona, A.

    1986-01-01

    A method is presented for the determination of the elements Ni, Cr, Mn, Si, Mo, Nb, Cu, Co and V in stainless steel by emission spectrographic analysis using high-voltage spark sources, employing the 'point-to-point' technique. The experimental parameters were optimized as a compromise between detection sensitivity and measurement precision. The parameters investigated were the high-voltage capacitance, the inductance, the analytical and auxiliary gaps, the pre-burn spark period and the exposure time. The edge shape of the counter electrodes, the type of polishing and the diameter variation of the stainless steel electrodes were evaluated in preliminary assays. In addition, the degradation of the chemical strength of the developer was investigated. Counter electrodes of graphite, copper, aluminium and iron were employed, with the counter electrode itself used as an internal standard; in the case of graphite counter electrodes, iron lines were employed as the internal standard. Relative errors were the criteria for evaluating these experiments. National Bureau of Standards certified reference stainless steel standards and Eletrometal Acos Finos S.A. samples (certified by the supplier) were employed for constructing the calibration systems and analytical curves. The best results were obtained using the conventional graphite counter electrodes. The inaccuracy and imprecision of the proposed method varied from 2% to 15% and from 1% to 9%, respectively. The technique was compared with other instrumental techniques such as inductively coupled plasma, X-ray fluorescence and neutron activation analysis, and the advantages and disadvantages of each were discussed. (author) [pt

  2. New England observed and predicted August stream/river temperature daily range points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted August stream/river temperature daily ranges in New England based on a spatial statistical...

  3. New England observed and predicted growing season maximum stream/river temperature points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted growing season maximum stream/river temperatures in New England based on a spatial statistical...

  4. Hinkley Point 'C' power station public inquiry: proof of evidence on comparison of non-fossil options to Hinkley Point 'C'

    International Nuclear Information System (INIS)

    Goddard, S.C.

    1988-09-01

    A public inquiry has been set up to examine the planning application made by the Central Electricity Generating Board (CEGB) for the construction of a 1200 MW Pressurized Water Reactor power station at Hinkley Point (Hinkley Point "C") in the United Kingdom. This evidence to the Inquiry sets out and explains the non-fossil fuel options, with particular reference to renewable energy sources and other PWR locations; gives feasibility, capital cost, performance and total resource estimates for the renewable sources; and shows that no other non-fossil fuel source is to be preferred to Hinkley Point "C". (author)

  5. Two-sorted Point-Interval Temporal Logics

    DEFF Research Database (Denmark)

    Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido

    2011-01-01

    There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop an explicitly two-sorted point-interval temporal logical framework... whereby time instants (points) and time periods (intervals) are considered on a par, and the perspective can shift between them within the formal discourse. We focus on fragments involving only modal operators that correspond to the inter-sort relations between points and intervals. We analyze...

  6. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

    Full Text Available One problem causing a reduction of energy in satellite communication systems is the misalignment of the earth station antenna pointing to the satellite. Pointing error degrades the energy per bit of the information signal received at the earth station. In this research, the pointing error occurred only at the receiving (Rx) antenna, while the transmitting (Tx) antenna pointed precisely to the satellite. The research was conducted on two satellites, TELKOM-1 and TELKOM-2. First, a measurement was made by directing the Tx antenna precisely to the satellite, producing an antenna pattern recorded with a spectrum analyzer. The spectrum analyzer output was drawn to scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting away from the precise pointing changed the received link budget, as indicated by the antenna pattern, which shows the reduction of received power level resulting from pointing misalignment. In conclusion, increasing misalignment of pointing to the satellite reduces the received signal link budget parameters of the down-link traffic.
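
    The effect of a small pointing error on the link budget is commonly approximated with the parabolic main-lobe model, in which the gain loss in decibels grows with the square of the off-axis angle relative to the 3 dB beamwidth. The sketch below uses that standard approximation; the beamwidth and Eb/N0 figures are illustrative and are not the measured TELKOM-1 or TELKOM-2 values.

    import numpy as np

    def pointing_loss_db(offset_deg, beamwidth_3db_deg):
        """Parabolic main-lobe approximation: loss = 12*(theta/theta_3dB)^2 dB."""
        return 12.0 * (offset_deg / beamwidth_3db_deg) ** 2

    beamwidth = 0.45                       # degrees, illustrative Rx antenna 3 dB beamwidth
    ebn0_aligned_db = 8.0                  # illustrative Eb/N0 with perfect pointing
    for offset in (0.0, 0.1, 0.2, 0.3):    # degrees of misalignment
        loss = pointing_loss_db(offset, beamwidth)
        print(offset, loss, ebn0_aligned_db - loss)   # degraded Eb/N0 drives the BER up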

  7. Tipping Point

    Science.gov (United States)


  8. Quantum Triple Point and Quantum Critical End Points in Metallic Magnets.

    Science.gov (United States)

    Belitz, D; Kirkpatrick, T R

    2017-12-29

    In low-temperature metallic magnets, ferromagnetic (FM) and antiferromagnetic (AFM) orders can exist, adjacent to one another or concurrently, in the phase diagram of a single system. We show that universal quantum effects qualitatively alter the known phase diagrams for classical magnets. They shrink the region of concurrent FM and AFM order, change various transitions from second to first order, and, in the presence of a magnetic field, lead to either a quantum triple point where the FM, AFM, and paramagnetic phases all coexist or a quantum critical end point.

  9. rpe v5: an emulator for reduced floating-point precision in large numerical simulations

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.

    2017-06-01

    This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
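
    rpe itself is a Fortran library, but the idea it implements can be mimicked generically: round every intermediate result to a reduced number of significand bits and observe how the answer degrades. The sketch below is such a generic emulation (it is not the rpe API), using bit masking of IEEE-754 float64 values; the accumulation example and the chosen bit counts are arbitrary.

    import numpy as np

    def reduce_precision(x, sbits):
        """Keep only `sbits` explicit significand bits of float64 values
        (simple truncation rather than round-to-nearest)."""
        x = np.atleast_1d(np.asarray(x, dtype=np.float64))
        bits = x.view(np.uint64)
        mask = np.uint64(0xFFFFFFFFFFFFFFFF) << np.uint64(52 - sbits)
        return (bits & mask).view(np.float64)

    # Effect of emulated precision on a simple accumulation (harmonic sum).
    terms = 1.0 / np.arange(1, 10001, dtype=np.float64)
    for sbits in (52, 23, 10, 5):              # double, ~single, half-like, very low
        total = np.zeros(1)
        for t in reduce_precision(terms, sbits):
            total = reduce_precision(total + t, sbits)   # round every intermediate result
        print(sbits, float(total[0]))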

  10. ACS Zero Point Verification

    Science.gov (United States)

    Dolphin, Andrew

    2005-07-01

    The uncertainties in the photometric zero points create a fundamental limit to the accuracy of photometry. The current state of the ACS calibration is surprisingly poor, with zero point uncertainties of 0.03 magnitudes. The reason for this is that the ACS calibrations are based primarily on semi-empirical synthetic zero points and observations of fields too crowded for accurate ground-based photometry. I propose to remedy this problem by obtaining ACS images of the omega Cen standard field with all nine broadband ACS/WFC filters. This will permit the direct determination of the ACS zero points by comparison with excellent ground-based photometry, and should reduce their uncertainties to less than 0.01 magnitudes. A second benefit is that it will facilitate the comparison of the WFPC2 and ACS photometric systems, which will be important as WFPC2 is phased out and ACS becomes HST's primary imager. Finally, three of the filters will be repeated from my Cycle 12 observations, allowing for a measurement of any change in sensitivity.
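
    Determining a photometric zero point from a standard field amounts to averaging the offset between ground-based calibrated magnitudes and instrumental magnitudes, m_cal = -2.5*log10(counts/exptime) + ZP, over matched stars. The sketch below shows that bookkeeping with made-up numbers; the count rates, exposure time and omega Cen magnitudes are placeholders, not measurements.

    import numpy as np

    def instrumental_mag(counts, exptime):
        """Instrumental magnitude from total counts and exposure time."""
        return -2.5 * np.log10(counts / exptime)

    # Hypothetical matched stars: detector count totals vs. ground-based calibrated magnitudes.
    counts = np.array([1.2e5, 4.0e4, 8.5e3, 2.1e3])
    exptime = 340.0                                  # seconds, illustrative
    m_ground = np.array([18.62, 19.81, 21.49, 23.01])

    zp_per_star = m_ground - instrumental_mag(counts, exptime)
    zp, zp_err = zp_per_star.mean(), zp_per_star.std(ddof=1) / np.sqrt(zp_per_star.size)
    print(zp, zp_err)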

  11. New England observed and predicted July stream/river temperature daily range points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted July stream/river temperature daily ranges in New England based on a spatial statistical network...

  12. New definitions of pointing stability - ac and dc effects. [constant and time-dependent pointing error effects on image sensor performance

    Science.gov (United States)

    Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.

    1992-01-01

    For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
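
    In statistical terms, the separation described above is a decomposition of the mean-square line-of-sight error into the square of the mean (the dc part) plus the variance (the ac part). The sketch below demonstrates the decomposition on a synthetic error record; the bias, jitter spectrum and noise level are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 10.0, 10001)                      # seconds
    bias = 2.0                                             # microrad, constant (dc) offset
    jitter = 0.8 * np.sin(2 * np.pi * 35 * t) + 0.3 * rng.standard_normal(t.size)
    los_error = bias + jitter                              # microrad

    dc = los_error.mean()                                  # distorts geometry only
    ac_rms = los_error.std()                               # smears the image
    total_rms = np.sqrt(np.mean(los_error ** 2))
    print(dc, ac_rms, total_rms, np.hypot(dc, ac_rms))     # total^2 = dc^2 + ac^2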

  13. Fixed-Point Configurable Hardware Components

    Directory of Open Access Journals (Sweden)

    Rocher Romuald

    2006-01-01

    Full Text Available To reduce the gap between the VLSI technology capability and designer productivity, design reuse based on IP (intellectual property) is commonly used. In terms of arithmetic accuracy, the generated architecture can generally only be configured through the input and output word lengths. In this paper, a new kind of method to optimize fixed-point arithmetic IP is proposed. The architecture cost is minimized under accuracy constraints defined by the user. Our approach allows exploring the fixed-point search space and the algorithm-level search space to select the optimized structure and fixed-point specification. To significantly reduce the optimization and design times, analytical models are used for the fixed-point optimization process.
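
    The accuracy constraint in such an optimization is typically evaluated as quantization noise for a candidate fixed-point format. The sketch below quantizes a test signal to a signed fixed-point grid and reports the resulting signal-to-quantization-noise ratio for a few word lengths; the format choices and test signal are illustrative and unrelated to the paper's IP generation flow.

    import numpy as np

    def quantize_fixed_point(x, int_bits, frac_bits):
        """Round to a signed fixed-point grid with `int_bits`.`frac_bits` (saturating)."""
        scale = 2.0 ** frac_bits
        max_code = 2 ** (int_bits + frac_bits) - 1        # two's complement code range
        codes = np.clip(np.round(x * scale), -(max_code + 1), max_code)
        return codes / scale

    def sqnr_db(x, xq):
        """Signal-to-quantization-noise ratio in dB."""
        return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))

    t = np.linspace(0.0, 1.0, 4096)
    x = 0.9 * np.sin(2 * np.pi * 17 * t)                  # test signal in [-1, 1)
    for frac_bits in (7, 11, 15):                         # candidate fractional word lengths
        xq = quantize_fixed_point(x, int_bits=0, frac_bits=frac_bits)
        print(frac_bits, sqnr_db(x, xq))                  # roughly 6 dB gained per extra bit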

  14. Visualising Berry phase and diabolical points in a quantum exciton-polariton billiard.

    Science.gov (United States)

    Estrecho, E; Gao, T; Brodbeck, S; Kamp, M; Schneider, C; Höfling, S; Truscott, A G; Ostrovskaya, E A

    2016-11-25

    Diabolical points (spectral degeneracies) can naturally occur in spectra of two-dimensional quantum systems and classical wave resonators due to simple symmetries. Geometric Berry phase is associated with these spectral degeneracies. Here, we demonstrate a diabolical point and the corresponding Berry phase in the spectrum of hybrid light-matter quasiparticles (exciton-polaritons) in semiconductor microcavities. It is well known that sufficiently strong optical pumping can drive exciton-polaritons to quantum degeneracy, whereby they form a macroscopically populated quantum coherent state similar to a Bose-Einstein condensate. By pumping a microcavity with a spatially structured light beam, we create a two-dimensional quantum billiard for the exciton-polariton condensate and demonstrate a diabolical point in the spectrum of the billiard eigenstates. The fully reconfigurable geometry of the potential walls controlled by the optical pump enables a striking experimental visualization of the Berry phase associated with the diabolical point. The Berry phase is observed and measured by direct imaging of the macroscopic exciton-polariton probability densities.

  15. Multivariate Product-Shot-noise Cox Point Process Models

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge

    We introduce a new multivariate product-shot-noise Cox process which is useful for modeling multi-species spatial point patterns with clustering intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process... can be obtained in closed analytical forms and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot.
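
    For a single component with Gaussian kernels, a shot-noise Cox process can be simulated directly by the classical Thomas-process construction: Poisson parent points, each contributing a Poisson number of Gaussian-displaced offspring. The sketch below shows that single-type construction only; it does not reproduce the multivariate product construction of the paper, and all parameter values are illustrative.

    import numpy as np

    def simulate_shot_noise_cox(kappa, mu, sigma, window=1.0, rng=None):
        """Thomas-type shot-noise Cox process on [0, window]^2:
        parents ~ Poisson(kappa*|W|), each with Poisson(mu) Gaussian offspring."""
        rng = np.random.default_rng() if rng is None else rng
        n_parents = rng.poisson(kappa * window ** 2)
        parents = rng.uniform(0.0, window, size=(n_parents, 2))
        points = []
        for p in parents:
            n_off = rng.poisson(mu)
            offspring = p + sigma * rng.standard_normal((n_off, 2))
            points.append(offspring)
        pts = np.vstack(points) if points else np.empty((0, 2))
        inside = np.all((pts >= 0.0) & (pts <= window), axis=1)
        return pts[inside]

    pts = simulate_shot_noise_cox(kappa=25.0, mu=8.0, sigma=0.02,
                                  rng=np.random.default_rng(4))
    print(pts.shape[0])      # clustered pattern; expected count ~ kappa*mu, minus edge losses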

  16. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  17. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When a statistical point cloud model is constructed, corresponding points usually need to be calculated, and the resulting statistical model differs depending on the method used to calculate them. This article examines how the choice of correspondence method affects statistical models of human organs. We validated the performance of the statistical models by registering them to an organ surface in a 3D medical image, comparing two methods for calculating corresponding points. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistical calculation over a number of curved surfaces. Using these methods we constructed the statistical models and performed registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of calculating corresponding points affect the statistical model through the change in probability density at each point. (author)

  18. SU-F-I-54: Spatial Resolution Studies in Proton CT Using a Phase-II Prototype Head Scanner

    Energy Technology Data Exchange (ETDEWEB)

    Plautz, Tia E.; Johnson, R. P.; Sadrozinski, H. F.-W.; Zatserklyaniy, A. [University of California, Santa Cruz, Santa Cruz, CA (United States); Bashkirov, V.; Hurley, R. F.; Schulte, R. W. [Loma Linda University, Loma Linda, CA (United States); Piersimoni, P. [University of California, San Francisco, San Francisco, CA (United States); Giacometti, V. [University of Wollongong, Wollongong, NSW (Australia)

    2016-06-15

    Purpose: To characterize the modulation transfer function (MTF) of the pre-clinical (phase II) head scanner developed for proton computed tomography (pCT) by the pCT collaboration. To evaluate the spatial resolution achievable by this system. Methods: Our phase II proton CT scanner prototype consists of two silicon telescopes that track individual protons upstream and downstream from a phantom, and a 5-stage scintillation detector that measures a combination of the residual energy and range of the proton. Residual energy is converted to water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and associated paths of protons passing through the object over a 360° angular scan is processed by an iterative parallelizable reconstruction algorithm that runs on GP-GPU hardware. A custom edge phantom composed of water-equivalent polymer and tissue-equivalent material inserts was constructed. The phantom was first simulated in Geant4 and then built to perform experimental beam tests with 200 MeV protons at the Northwestern Medicine Chicago Proton Center. The oversampling method was used to construct radial and azimuthal edge spread functions and modulation transfer functions. The spatial resolution was defined by the 10% point of the modulation transfer function in units of lp/cm. Results: The spatial resolution of the image was found to be strongly correlated with the radial position of the insert but independent of the relative stopping power of the insert. The spatial resolution varies between roughly 4 and 6 lp/cm in both the radial and azimuthal directions depending on the radial displacement of the edge. Conclusion: The amount of image degradation due to our detector system is small compared with the effects of multiple Coulomb scattering, pixelation of the image and the reconstruction algorithm. Improvements in reconstruction will be made in order to achieve the theoretical limits of spatial resolution.
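
    The oversampling chain named above (edge spread function, then line spread function, then MTF, then the 10% point) can be prototyped compactly. The sketch below runs it on a synthetic blurred edge; the blur width, sampling pitch and noise level are assumptions rather than properties of the phase II scanner.

    import numpy as np
    from scipy.special import erf

    # Synthetic oversampled edge spread function (ESF): error-function edge plus noise.
    pitch_cm = 0.01                                    # sample spacing along the edge profile
    x = np.arange(-2.0, 2.0, pitch_cm)                 # cm
    sigma_blur = 0.08                                  # cm, assumed system blur
    rng = np.random.default_rng(5)
    esf = 0.5 * (1.0 + erf(x / (sigma_blur * np.sqrt(2.0)))) \
          + 0.001 * rng.standard_normal(x.size)

    lsf = np.gradient(esf, pitch_cm)                   # line spread function
    lsf /= lsf.sum() * pitch_cm                        # normalise area to 1
    mtf = np.abs(np.fft.rfft(lsf)) * pitch_cm          # MTF = |FT of LSF|, DC bin = 1
    freqs = np.fft.rfftfreq(lsf.size, d=pitch_cm)      # cycles/cm = lp/cm

    # Spatial resolution: frequency where the MTF first drops to 10%.
    below = np.where(mtf < 0.1)[0]
    print(freqs[below[0]] if below.size else None)     # lp/cm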

  19. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

    with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function......(s) lead to the rejection at the prescribed significance level of the test. Examples of null hypothesis from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include goodness-of-fit test with several test functions, goodness-of-fit test...
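
    A stripped-down relative of such tests is the classical Monte Carlo deviation test with a functional statistic: each curve is reduced to its integrated squared deviation from the mean of the simulated curves, and the p-value is the Monte Carlo rank of the observed deviation. The sketch below implements that simpler precursor (it is not the rank envelope test of the paper and lacks the graphical interpretation described above); the curves are synthetic placeholders for a functional summary such as a K-function.

    import numpy as np

    def deviation_test(observed, simulated):
        """Monte Carlo deviation test: integrated squared deviation of each curve
        from the mean of the simulated curves, with a rank-based p-value."""
        reference = simulated.mean(axis=0)
        def dev(curve):
            return np.sum((curve - reference) ** 2)
        d_obs = dev(observed)
        d_sim = np.array([dev(c) for c in simulated])
        return (1 + np.sum(d_sim >= d_obs)) / (1 + d_sim.shape[0])

    rng = np.random.default_rng(6)
    r = np.linspace(0.01, 0.25, 50)
    simulated = np.pi * r ** 2 * (1.0 + 0.02 * rng.standard_normal((999, 1)))  # smooth null curves
    observed = np.pi * r ** 2 * 1.08                                           # shifted alternative
    print(deviation_test(observed, simulated))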

  20. Electronic transport at semiconductor surfaces - from point-contact transistor to micro-four-point probes

    DEFF Research Database (Denmark)

    Hasegawa, S.; Grey, Francois

    2002-01-01

    show that this type of conduction is measurable using new types of experimental probes, such as the multi-tip scanning tunnelling microscope and the micro-four-point probe. The resulting electronic transport properties are intriguing, and suggest that semiconductor surfaces should be considered...