WorldWideScience

Sample records for point sources detected

  1. Induced Temporal Signatures for Point-Source Detection

    International Nuclear Information System (INIS)

    Stephens, Daniel L.; Runkle, Robert C.; Carlson, Deborah K.; Peurrung, Anthony J.; Seifert, Allen; Wyatt, Cory R.

    2005-01-01

    Detection of radioactive point-sized sources is inherently divided into two regimes encompassing stationary and moving detectors. The two cases differ in their treatment of background radiation and its influence on detection sensitivity. In the stationary detector case the statistical fluctuation of the background determines the minimum detectable quantity. In the moving detector case the detector may be subjected to widely and irregularly varying background radiation, as a result of geographical and environmental variation. This significant systematic variation, in conjunction with the statistical variation of the background, requires a conservative threshold to be selected to yield the same false-positive rate as in the stationary detection case. This results in lost detection sensitivity for real sources. This work focuses on a simple and practical modification of the detector geometry that increases point-source recognition via a distinctive temporal signature. A key part of this effort is the integrated development both of detector geometries that induce a highly distinctive signature for point sources and of statistical algorithms able to optimize detection of this signature amidst varying background. The identification of temporal signatures for point sources has been demonstrated and compared with the canonical method, showing good results. This work demonstrates that temporal signatures are efficient at increasing point-source discrimination in a moving detector system.
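
    As a rough illustration of the stationary-detector regime described above, the sketch below computes a simple counting threshold from Poisson background statistics; the critical-level formula and the numerical values are illustrative assumptions, not taken from the paper.

```python
import math

def critical_level(background_counts: float, n_sigma: float = 3.0) -> float:
    """Counting threshold above background for a stationary detector.

    Assumes a Poisson-distributed background, whose standard deviation is
    sqrt(background_counts); a measurement is flagged when the net counts
    exceed n_sigma background standard deviations.
    """
    return n_sigma * math.sqrt(background_counts)

# Example: a counting interval with an expected background of 400 counts.
background = 400.0
threshold = critical_level(background, n_sigma=3.0)
print(f"Flag the measurement if gross counts exceed {background + threshold:.0f}")
```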

  2. Research on point source simulating the γ-ray detection efficiencies of stander source

    International Nuclear Information System (INIS)

    Tian Zining; Jia Mingyan; Shen Maoquan; Yang Xiaoyan; Cheng Zhiwei

    2010-01-01

    For a φ75 mm x 25 mm sample, the full-energy-peak efficiencies at different heights along the sample radius were obtained using point sources, and the parameters of the function describing the point-source full-energy-peak efficiency as a function of radius were determined. The 59.54 keV, 661.66 keV, 1173.2 keV and 1332.5 keV γ-ray detection efficiencies at different sample heights were then obtained from the point-source full-energy-peak efficiencies and their heights, and the parameters of the function describing the surface-source full-energy-peak efficiency as a function of sample height were determined. The detection efficiency of the φ75 mm x 25 mm calibration source can then be obtained by integration; the detection efficiencies simulated with point sources agree with the results for the standard source to within 10%. Therefore, the standard-source calibration method can be replaced by the point-source simulation method, which is feasible when no standard source is available. (authors)
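
    The abstract describes building a volume-source efficiency by integrating point-source efficiencies fitted as functions of radius and height. The sketch below shows that integration step numerically for a φ75 mm x 25 mm cylinder; the efficiency function eps_point is a placeholder assumption, not the fitted parameters from the paper.

```python
import numpy as np

def eps_point(r_mm, z_mm):
    """Placeholder point-source full-energy-peak efficiency at radius r_mm and
    height z_mm inside the sample (purely illustrative functional form)."""
    return 0.05 * np.exp(-0.01 * r_mm) * np.exp(-0.02 * z_mm)

# Volume-average the point-source efficiency over a cylinder of radius
# 37.5 mm and height 25 mm, weighting each radius by the cylindrical
# volume element (proportional to r dr dz).
R, H = 37.5, 25.0
r = np.linspace(0.0, R, 200)
z = np.linspace(0.0, H, 100)
rr, zz = np.meshgrid(r, z, indexing="ij")
weights = rr
eps_volume = np.sum(eps_point(rr, zz) * weights) / np.sum(weights)
print(f"Simulated volume-source efficiency: {eps_volume:.4f}")
```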

  3. The resolution of point sources of light as analyzed by quantum detection theory

    Science.gov (United States)

    Helstrom, C. W.

    1972-01-01

    The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
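
    For context, the minimum error probability for deciding between two equally likely hypotheses with density operators rho_0 and rho_1 is given by the Helstrom bound; the expression below is the standard textbook form, quoted here as background rather than reproduced from this abstract.

```latex
P_e^{\min} \;=\; \frac{1}{2}\left(1 - \tfrac{1}{2}\,\bigl\lVert \rho_1 - \rho_0 \bigr\rVert_1\right),
\qquad
\lVert A \rVert_1 \;=\; \operatorname{Tr}\sqrt{A^{\dagger}A}.
```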

  4. Resolution of point sources of light as analyzed by quantum detection theory.

    Science.gov (United States)

    Helstrom, C. W.

    1973-01-01

    The resolvability of point sources of incoherent thermal light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.

  5. Detection of Point Sources on Two-Dimensional Images Based on Peaks

    Directory of Open Access Journals (Sweden)

    R. B. Barreiro

    2005-09-01

    This paper considers the detection of point sources in two-dimensional astronomical images. The detection scheme we propose is based on peak statistics. We discuss the example of the detection of far galaxies in cosmic microwave background experiments throughout the paper, although the method we present is totally general and can be used in many other fields of data analysis. We consider sources with a Gaussian profile—that is, a fair approximation of the profile of a point source convolved with the detector beam in microwave experiments—on a background modeled by a homogeneous and isotropic Gaussian random field characterized by a scale-free power spectrum. Point sources are enhanced with respect to the background by means of linear filters. After filtering, we identify local maxima and apply our detection scheme, a Neyman-Pearson detector that defines our region of acceptance based on the a priori pdf of the sources and the ratio of number densities. We study the different performances of some linear filters that have been used in this context in the literature: the Mexican hat wavelet, the matched filter, and the scale-adaptive filter. We consider as well an extension to two dimensions of the biparametric scale-adaptive filter (BSAF). The BSAF depends on two parameters which are determined by maximizing the number density of real detections while fixing the number density of spurious detections. For our detection criterion the BSAF outperforms the other filters in the interesting case of white noise.
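
    As a toy illustration of the filter-then-threshold pipeline described above (a plain Gaussian smoothing stand-in, not the BSAF or the exact matched filter for a correlated background), the sketch below enhances simulated point sources with a beam-shaped filter and keeps local maxima above a threshold; the beam width, amplitudes and threshold are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(0)

# Simulated sky: a smooth background plus two point sources convolved with
# an assumed Gaussian beam of sigma = 2 pixels.
beam_sigma = 2.0
sky = gaussian_filter(rng.normal(size=(256, 256)), sigma=8.0)
for row, col, amp in [(64, 64, 1.0), (180, 120, 0.8)]:
    delta = np.zeros_like(sky)
    delta[row, col] = amp
    sky += gaussian_filter(delta, sigma=beam_sigma) * (2 * np.pi * beam_sigma**2)

# Linear filtering: smooth with the beam profile to enhance beam-scale
# structure relative to the large-scale background and pixel noise.
filtered = gaussian_filter(sky, sigma=beam_sigma)

# Accept local maxima that exceed the mean by several standard deviations.
peaks = (filtered == maximum_filter(filtered, size=5)) & (
    filtered > filtered.mean() + 4.0 * filtered.std()
)
print("Candidate point sources at:", list(zip(*np.nonzero(peaks))))
```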

  6. Application of random-point processes to the detection of radiation sources

    International Nuclear Information System (INIS)

    Woods, J.W.

    1978-01-01

    In this report the mathematical theory of random-point processes is reviewed and it is shown how the theory can be used to obtain optimal solutions to the problem of detecting radiation sources. As noted, the theory also applies to image processing in low-light-level or low-count-rate situations. Paralleling Snyder's work, the theory is extended to the multichannel case of a continuous, two-dimensional (2-D), energy-time space. This extension essentially involves showing that the data are doubly stochastic Poisson (DSP) point processes in energy as well as time. Further, a new 2-D recursive formulation is presented for the radiation-detection problem with large computational savings over nonrecursive techniques when the number of channels is large (greater than or equal to 30). Finally, some adaptive strategies for on-line 'learning' of unknown, time-varying signal and background-intensity parameters and statistics are presented and discussed. These adaptive procedures apply when a complete statistical description is not available a priori.
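
    A minimal sketch (assumed for illustration, not taken from the report) of the kind of Poisson likelihood computation such point-process detectors build on: the log-likelihood ratio for counts binned in energy and time, comparing "background plus a known source signature" against "background only".

```python
import numpy as np

def poisson_llr(counts, background, source):
    """Log-likelihood ratio of 'background + source' vs 'background only'
    for independent Poisson counts in each energy-time bin."""
    lam0 = background
    lam1 = background + source
    return np.sum(counts * (np.log(lam1) - np.log(lam0)) - (lam1 - lam0))

rng = np.random.default_rng(1)
background = np.full((16, 32), 5.0)        # expected background counts per bin
source = np.zeros_like(background)
source[8, 10:20] = 2.0                     # assumed source signature in one energy channel
counts = rng.poisson(background + source)  # simulated observation with the source present

print("LLR, source present :", poisson_llr(counts, background, source))
print("LLR, background only:", poisson_llr(rng.poisson(background), background, source))
```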

  7. Astronomers Detect Powerful Bursting Radio Source Discovery Points to New Class of Astronomical Objects

    Science.gov (United States)

    2005-03-01

    Astronomers at Sweet Briar College and the Naval Research Laboratory (NRL) have detected a powerful new bursting radio source whose unique properties suggest the discovery of a new class of astronomical objects. The researchers have monitored the center of the Milky Way Galaxy for several years and reveal their findings in the March 3, 2005 edition of the journal Nature. [Image caption: This radio image of the central region of the Milky Way Galaxy holds a new radio source, GCRT J1745-3009. The arrow points to an expanding ring of debris expelled by a supernova. Credit: N.E. Kassim et al., Naval Research Laboratory, NRAO/AUI/NSF.] Principal investigator Dr. Scott Hyman, professor of physics at Sweet Briar College, said the discovery came after analyzing some additional observations from 2002 provided by researchers at Northwestern University. "We hit the jackpot!" Hyman said, referring to the observations. "An image of the Galactic center, made by collecting radio waves of about 1-meter in wavelength, revealed multiple bursts from the source during a seven-hour period from Sept. 30 to Oct. 1, 2002 — five bursts in fact, and repeating at remarkably constant intervals." Hyman, four Sweet Briar students, and his NRL collaborators, Drs. Namir Kassim and Joseph Lazio, happened upon transient emission from two radio sources while studying the Galactic center in 1998. This prompted the team to propose an ongoing monitoring program using the National Science Foundation's Very Large Array (VLA) radio telescope in New Mexico. The National Radio Astronomy Observatory, which operates the VLA, approved the program. The data collected laid the groundwork for the detection of the new radio source. "Amazingly, even though the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths," NRL astronomer Dr. Joseph Lazio pointed out, "very little has been done to look for radio bursts, which are often easier for astronomical objects to produce

  8. Point source detection using the Spherical Mexican Hat Wavelet on simulated all-sky Planck maps

    Science.gov (United States)

    Vielva, P.; Martínez-González, E.; Gallegos, J. E.; Toffolatti, L.; Sanz, J. L.

    2003-09-01

    We present an estimation of the point source (PS) catalogue that could be extracted from the forthcoming ESA Planck mission data. We have applied the Spherical Mexican Hat Wavelet (SMHW) to simulated all-sky maps that include cosmic microwave background (CMB), Galactic emission (thermal dust, free-free and synchrotron), the thermal Sunyaev-Zel'dovich effect and PS emission, as well as instrumental white noise. This work is an extension of the one presented in Vielva et al. We have developed an algorithm focused on a fast local optimal scale determination, which is crucial to achieve a PS catalogue with a large number of detections and a low flux limit. An important effort has also been made to reduce the CPU time required for spherical harmonic transformations, in order to perform the PS detection in a reasonable time. The presented algorithm is able to provide a PS catalogue above the following fluxes: 0.48 Jy (857 GHz), 0.49 Jy (545 GHz), 0.18 Jy (353 GHz), 0.12 Jy (217 GHz), 0.13 Jy (143 GHz), 0.16 Jy (100 GHz HFI), 0.19 Jy (100 GHz LFI), 0.24 Jy (70 GHz), 0.25 Jy (44 GHz) and 0.23 Jy (30 GHz). We detect around 27 700 PS at the highest-frequency Planck channel and 2900 at the 30-GHz one. The completeness levels are: 70 per cent (857 GHz), 75 per cent (545 GHz), 70 per cent (353 GHz), 80 per cent (217 GHz), 90 per cent (143 GHz), 85 per cent (100 GHz HFI), 80 per cent (100 GHz LFI), 80 per cent (70 GHz), 85 per cent (44 GHz) and 80 per cent (30 GHz). In addition, we can find several PS at different channels, allowing the study of their spectral behaviour and the physical processes acting on them. We also present the basic procedure to apply the method to maps convolved with asymmetric beams. The algorithm takes ~72 h for the most CPU-time-demanding channel (857 GHz) on a Compaq HPC320 (Alpha EV68 1-GHz processor) and requires 4 GB of RAM; the CPU time scales as O[N_Ro N_pix^(3/2) log(N_pix)], where N_pix is the number of pixels in the map and N_Ro is the number of optimal scales needed.

  9. Photoacoustic Point Source

    International Nuclear Information System (INIS)

    Calasso, Irio G.; Craig, Walter; Diebold, Gerald J.

    2001-01-01

    We investigate the photoacoustic effect generated by heat deposition at a point in space in an inviscid fluid. Delta-function and long Gaussian optical pulses are used as sources in the wave equation for the displacement potential to determine the fluid motion. The linear sound-generation mechanism gives bipolar photoacoustic waves, whereas the nonlinear mechanism produces asymmetric tripolar waves. The salient features of the photoacoustic point source are that rapid heat deposition and nonlinear thermal expansion dominate the production of ultrasound.
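
    For context, the linear photoacoustic generation mechanism referred to above is commonly written as a wave equation for the pressure driven by the time derivative of the volumetric heating rate H(r, t); the form below is the standard textbook expression (the paper itself works with the displacement potential), with beta the thermal expansion coefficient, C_p the specific heat and c the sound speed.

```latex
\left(\nabla^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right) p(\mathbf{r},t)
\;=\; -\,\frac{\beta}{C_{p}}\,\frac{\partial H(\mathbf{r},t)}{\partial t}.
```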

  10. Point Pollution Sources Dimensioning

    Directory of Open Access Journals (Sweden)

    Georgeta CUCULEANU

    2011-06-01

    In this paper a method for determining the main physical characteristics of point pollution sources is presented. The main physical characteristics of these sources are the top inside source diameter and the physical height. The top inside source diameter is calculated from the gas flow-rate. For reckoning the physical height of the source, one takes into account the relation given by the proportionality factor, defined as the ratio between the plume rise and the physical height of the source. The plume rise depends on the gas exit velocity and gas temperature. This relation is necessary for diminishing the environmental pollution when the production capacity of the plant varies in comparison with the nominal one.
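
    The relation between gas flow rate and top inside diameter mentioned above reduces to a simple area balance; the sketch below is a hypothetical illustration with assumed numbers, not values from the paper.

```python
import math

def stack_top_diameter(flow_m3_s: float, exit_velocity_m_s: float) -> float:
    """Top inside diameter of a point source (stack) from the gas flow rate,
    assuming the gas leaves uniformly through a circular opening:
    Q = v * pi * D**2 / 4, hence D = sqrt(4 * Q / (pi * v))."""
    return math.sqrt(4.0 * flow_m3_s / (math.pi * exit_velocity_m_s))

# Hypothetical example: 20 m3/s of flue gas leaving the stack at 15 m/s.
print(f"Top inside diameter: {stack_top_diameter(20.0, 15.0):.2f} m")
```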

  11. Improved Point-source Detection in Crowded Fields Using Probabilistic Cataloging

    Science.gov (United States)

    Portillo, Stephen K. N.; Lee, Benjamin C. G.; Daylan, Tansu; Finkbeiner, Douglas P.

    2017-10-01

    Cataloging is challenging in crowded fields because sources are extremely covariant with their neighbors and blending makes even the number of sources ambiguous. We present the first optical probabilistic catalog, cataloging a crowded (~0.1 sources per pixel brighter than 22nd mag in F606W) Sloan Digital Sky Survey r-band image from M2. Probabilistic cataloging returns an ensemble of catalogs inferred from the image and thus can capture source-source covariance and deblending ambiguities. By comparing to a traditional catalog of the same image and a Hubble Space Telescope catalog of the same region, we show that our catalog ensemble better recovers sources from the image. It goes more than a magnitude deeper than the traditional catalog while having a lower false-discovery rate brighter than 20th mag. We also present an algorithm for reducing this catalog ensemble to a condensed catalog that is similar to a traditional catalog, except that it explicitly marginalizes over source-source covariances and nuisance parameters. We show that this condensed catalog has a similar completeness and false-discovery rate to the catalog ensemble. Future telescopes will be more sensitive, and thus more of their images will be crowded. Probabilistic cataloging performs better than existing software in crowded fields and so should be considered when creating photometric pipelines in the Large Synoptic Survey Telescope era.

  12. Archival Legacy Investigations of Circumstellar Environments (ALICE): Statistical assessment of point source detections

    Science.gov (United States)

    Choquet, Élodie; Pueyo, Laurent; Soummer, Rémi; Perrin, Marshall D.; Hagan, J. Brendan; Gofas-Salas, Elena; Rajan, Abhijith; Aguilar, Jonathan

    2015-09-01

    The ALICE program, for Archival Legacy Investigation of Circumstellar Environment, is currently conducting a virtual survey of about 400 stars, by re-analyzing the HST-NICMOS coronagraphic archive with advanced post-processing techniques. We present here the strategy that we adopted to identify detections and potential candidates for follow-up observations, and we give a preliminary overview of our detections. We present a statistical analysis conducted to evaluate the confidence level of these detections and the completeness of our candidate search.

  13. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    Science.gov (United States)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) has an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the Earth's surface. It has an agile pointing system and its footprint covers 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present time series of flux estimates assuming the source is a single point with no influx. The observation of the cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce the uncertainty using time series of satellite data. We will propose that the next generation instruments for accurate anthropogenic CO2 and CH
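
    A heavily simplified version of the mass-balance idea the abstract alludes to is: emission rate is roughly the column enhancement times the wind speed times an effective plume width. The sketch below uses that relation with invented inputs; none of the numbers or the conversion route come from the GOSAT/TANSO-FTS analysis itself.

```python
# Crude cross-sectional mass-balance estimate for a methane point source.
# All inputs are illustrative assumptions, not satellite retrievals.

AVOGADRO = 6.022e23      # molecules per mol
M_CH4 = 16.04e-3         # kg per mol

delta_column = 1.0e21    # CH4 column enhancement, molecules / m^2 (assumed)
wind_speed = 3.0         # m/s, e.g. from a surface station or a WRF run (assumed)
plume_width = 5.0e3      # m, effective width swept by the wind (assumed)

flux_kg_s = delta_column / AVOGADRO * M_CH4 * wind_speed * plume_width
print(f"Estimated emission: {flux_kg_s:.2f} kg/s "
      f"(~{flux_kg_s * 3.15e7 / 1e6:.1f} kt/yr)")
```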

  14. Source splitting via the point source method

    International Nuclear Information System (INIS)

    Potthast, Roland; Fazi, Filippo M; Nelson, Philip A

    2010-01-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast R 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields u_j, j = 1, ..., n, of n ∈ ℕ sound sources supported in different bounded domains G_1, ..., G_n in ℝ³ from measurements of the field on some microphone array—mathematically speaking, from the knowledge of the sum of the fields u = u_1 + ... + u_n on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions g_1, ..., g_n, n ∈ ℕ, to construct u_l for l = 1, ..., n from u|_Λ in the form u_l(x) = ∫_Λ g_{l,x}(y) u(y) ds(y), l = 1, ..., n. (1) We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute for Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online

  15. Ghost imaging with bucket detection and point detection

    Science.gov (United States)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

    We experimentally investigate ghost imaging with bucket detection and point detection in which three types of illuminating sources are applied: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light in the bucket-detector case is better than that in the point-detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.

  16. Point-source inversion techniques

    Science.gov (United States)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
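
    For a fixed source location and known Green's functions, the linearized inversions referred to above reduce to a linear least-squares problem for the moment-tensor components. The sketch below shows only that step, with random numbers standing in for the real Green's functions and data; it is not the authors' full waveform-inversion procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# d = G m: waveform samples stacked in d, six moment-tensor components in m,
# each column of G holding the corresponding Green's-function response.
n_samples, n_mt = 500, 6
G = rng.standard_normal((n_samples, n_mt))           # stand-in Green's functions
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])  # assumed moment tensor
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)

# Linearized (least-squares) inversion for the moment tensor.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print("Recovered moment-tensor components:", np.round(m_est, 3))
```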

  17. Dissolved organic matter fluorescence at wavelength 275/342 nm as a key indicator for detection of point-source contamination in a large Chinese drinking water lake.

    Science.gov (United States)

    Zhou, Yongqiang; Jeppesen, Erik; Zhang, Yunlin; Shi, Kun; Liu, Xiaohan; Zhu, Guangwei

    2016-02-01

    Surface drinking water sources have been threatened globally and there have been few attempts to detect point-source contamination in these waters using chromophoric dissolved organic matter (CDOM) fluorescence. To determine the optimal wavelength derived from CDOM fluorescence as an indicator of point-source contamination in drinking waters, a combination of field campaigns in Lake Qiandao and a laboratory wastewater addition experiment was used. Parallel factor (PARAFAC) analysis identified six components, including three humic-like, two tryptophan-like, and one tyrosine-like component. All metrics showed strong correlation with wastewater addition (r² > 0.90, p < 0.05), and CDOM fluorescence at 275/342 nm was the most responsive wavelength to the point-source contamination in the lake. Our results suggest that pollutants in Lake Qiandao had the highest concentrations in the river mouths of upstream inflow tributaries and that the single wavelength at 275/342 nm may be adapted for online or in situ fluorescence measurements as an early warning of contamination events. This study demonstrates the potential utility of CDOM fluorescence to monitor water quality in surface drinking water sources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Detecting determinism from point processes.

    Science.gov (United States)

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  19. Calcareous Fens - Source Feature Points

    Data.gov (United States)

    Minnesota Department of Natural Resources — Pursuant to the provisions of Minnesota Statutes, section 103G.223, this database contains points that represent calcareous fens as defined in Minnesota Rules, part...

  20. Unidentified point sources in the IRAS minisurvey

    Science.gov (United States)

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  1. Moving Sources Detection System

    International Nuclear Information System (INIS)

    Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Normand, Stephane

    2013-06-01

    To monitor radioactivity passing through a pipe or in a given container such as a train or a truck, radiation detection systems are commonly employed. These detectors can be used in a network set along the source track to increase the overall detection efficiency. However, detection methods are based on counting-statistics analysis. The method usually implemented consists in triggering an alarm when an individual signal rises over a threshold initially estimated with regard to the natural background signal. The detection efficiency is then proportional to the number of detectors in use, because each sensor is treated as a standalone sensor. A new approach is presented in this paper, taking into account the temporal periodicity of the signals taken by all distributed sensors as a whole. This detection method is not based only on counting statistics but also on time-series analysis. Therefore, a specific algorithm has been developed in our lab for this kind of application and shows a significant improvement, especially in terms of detection efficiency and false-alarm reduction. We also plan on extracting information from the source vector. This paper presents the theoretical approach and some preliminary results obtained in our laboratory. (authors)
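
    A toy sketch of treating the distributed sensors as a whole: if a source moving at constant speed passes a line of detectors, their count-rate time series are time-shifted copies of one another, and cross-correlating them recovers the transit delay. The geometry, count rates and noise levels below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 600)             # 1-s time bins
background = 20.0                 # background counts per second per detector

def transit_signal(t, t_pass, width=15.0, amp=8.0):
    """Count-rate bump as a point source passes closest approach at t_pass."""
    return amp * np.exp(-0.5 * ((t - t_pass) / width) ** 2)

# Two detectors along the track; the source reaches the second one 40 s later.
counts_a = rng.poisson(background + transit_signal(t, 250.0))
counts_b = rng.poisson(background + transit_signal(t, 290.0))

# Cross-correlate the mean-subtracted series; the peak lag estimates the delay.
xa = counts_a - counts_a.mean()
xb = counts_b - counts_b.mean()
corr = np.correlate(xb, xa, mode="full")
lag = np.argmax(corr) - (len(t) - 1)
print(f"Estimated transit delay between detectors: {lag} s")
```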

  2. UHE point source survey at Cygnus experiment

    International Nuclear Information System (INIS)

    Lu, X.; Yodh, G.B.; Alexandreas, D.E.; Allen, R.C.; Berley, D.; Biller, S.D.; Burman, R.L.; Cady, R.; Chang, C.Y.; Dingus, B.L.; Dion, G.M.; Ellsworth, R.W.; Gilra, M.K.; Goodman, J.A.; Haines, T.J.; Hoffman, C.M.; Kwok, P.; Lloyd-Evans, J.; Nagle, D.E.; Potter, M.E.; Sandberg, V.D.; Stark, M.J.; Talaga, R.L.; Vishwanath, P.R.; Zhang, W.

    1991-01-01

    A new method of searching for UHE point sources has been developed. With a data sample of 150 million events, we have surveyed the sky for point sources over 3314 locations (1.4° < δ < 70.4°). It was found that their distribution is consistent with a random fluctuation. In addition, fifty-two known potential sources, including pulsars and binary X-ray sources, were studied. The source with the largest positive excess is the Crab Nebula. An excess of 2.5 sigma above the background is observed in a bin of 2.3° by 2.5° in declination and right ascension, respectively.

  3. Γ-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    Transformer-based Z-source inverters have recently been proposed to achieve promising buck-boost capability. They offer higher buck-boost capability, smaller size and a lower component count than Z-source inverters. On the other hand, neutral-point-clamped inverters have less switching stress and better output performance compared with traditional two-level inverters. Integrating these two types of configurations can help neutral-point-clamped inverters achieve enhanced voltage buck-boost capability.

  4. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  5. OH masers associated with IRAS point sources

    NARCIS (Netherlands)

    Masheder, MRW; Cohen, RJ; Martin-Hernandez, NL; Migenes,; Reid, MJ

    2002-01-01

    We report a search for masers from the Λ-doublet of the ground state of OH at 18 cm, carried out with the Jodrell Bank Lovell Telescope and with the 25-m Dwingeloo telescope. All objects north of δ = -20° which appear in the IRAS Point Source Catalog with fluxes > 1000 Jy at 60 μm and

  6. Isotropic irradiation of detectors from point sources

    DEFF Research Database (Denmark)

    Aage, Helle Karina

    1997-01-01

    NaI(Tl) scintillator detectors have been exposed to gamma rays from 8 different point sources from different directions. Background and backscatter of gamma-rays from the surroundings have been subtracted in order to produce clean spectra. By adding spectra obtained from exposures from different ...

  7. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
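
    A schematic sketch of drawing one correlated random source-parameter field from a target covariance, in the spirit of the rupture-model generator described above; the grid, correlation length and 1-point statistics are assumptions, not the calibrated statistics of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D fault discretized into subfaults; the target 2-point statistic is an
# exponential auto-correlation of slip with an assumed correlation length.
n_sub = 200
dx = 0.5                  # km between subfaults
corr_len = 5.0            # km (assumed)
x = np.arange(n_sub) * dx
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Draw a correlated Gaussian field: L z has covariance L L^T = cov.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_sub))
field = L @ rng.standard_normal(n_sub)

# Impose assumed 1-point statistics (mean 1.0 m, std 0.5 m) and keep slip
# non-negative.
slip = np.clip(1.0 + 0.5 * field, 0.0, None)
print(f"mean slip = {slip.mean():.2f} m, std = {slip.std():.2f} m")
```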

  8. Multi-lane detection based on multiple vanishing points detection

    Science.gov (United States)

    Li, Chuanxiang; Nie, Yiming; Dai, Bin; Wu, Tao

    2015-03-01

    Lane detection plays a significant role in Advanced Driver Assistance Systems (ADAS) for intelligent vehicles. In this paper we present a multi-lane detection method based on multiple vanishing point detection. A new multi-lane model assumes that a single lane, which has two approximately parallel boundaries, may not be parallel to others on the road plane. Non-parallel lanes are associated with different vanishing points. A biologically plausible model is used to detect multiple vanishing points and fit the lane model. Experimental results show that the proposed method can detect both parallel lanes and non-parallel lanes.

  9. Optimizing Probability of Detection Point Estimate Demonstration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws need to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) while achieving an acceptable value for the probability of false calls (POF) and keeping the flaw sizes in the set as small as possible.
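
    A small sketch of the binomial arithmetic behind such point-estimate demonstrations; the 29-flaw, zero-miss design and its usual "90% POD at 95% confidence" reading are used here as a common illustrative convention, not as a statement of the specific NASA procedure.

```python
# Binomial point-estimate demonstration: n flaws of one size, all must be found.

def prob_pass_demo(true_pod: float, n_flaws: int = 29) -> float:
    """Probability of passing a zero-miss demonstration (PPD) when each of
    n_flaws independent flaws is detected with probability true_pod."""
    return true_pod ** n_flaws

def pod_lower_bound(n_flaws: int = 29, confidence: float = 0.95) -> float:
    """One-sided lower confidence bound on POD after n_flaws hits and no misses:
    the POD whose chance of an all-hit outcome equals 1 - confidence."""
    return (1.0 - confidence) ** (1.0 / n_flaws)

print(f"PPD at true POD 0.95            : {prob_pass_demo(0.95):.2f}")
print(f"PPD at true POD 0.98            : {prob_pass_demo(0.98):.2f}")
print(f"Demonstrated POD (29/29, 95% CL): {pod_lower_bound():.3f}")
```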

  10. Robust Spacecraft Component Detection in Point Clouds

    Directory of Open Access Journals (Sweden)

    Quanmao Wei

    2018-03-01

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  11. Robust Spacecraft Component Detection in Point Clouds.

    Science.gov (United States)

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.
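
    The plane-detection step above uses a Hough transform; as a compact alternative illustration of fitting a plane primitive to a noisy point cloud, the sketch below uses a simple RANSAC loop (a related but different technique, chosen here only for brevity).

```python
import numpy as np

rng = np.random.default_rng(4)

def ransac_plane(points, n_iter=200, tol=0.02):
    """Fit a plane to a 3-D point cloud with RANSAC.
    Returns (unit normal, offset d) of the model n . x = d with most inliers."""
    best_count, best_model = 0, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ sample[0]
        count = int(np.sum(np.abs(points @ normal - d) < tol))
        if count > best_count:
            best_count, best_model = count, (normal, d)
    return best_model

# Synthetic "solar panel": a noisy planar patch plus scattered clutter points.
panel = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500),
                         rng.normal(0, 0.005, 500)])
clutter = rng.uniform(-1, 1, (100, 3))
normal, d = ransac_plane(np.vstack([panel, clutter]))
print("Estimated plane normal:", np.round(normal, 3))
```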

  12. Interest point detection for hyperspectral imagery

    Science.gov (United States)

    Dorado-Muñoz, Leidy P.; Vélez-Reyes, Miguel; Roysam, Badrinath; Mukherjee, Amit

    2009-05-01

    This paper presents an algorithm for automated extraction of interest points (IPs) in multispectral and hyperspectral images. Interest points are features of the image that capture information from their neighbourhoods; they are distinctive and stable under transformations such as translation and rotation. Interest-point operators for monochromatic images were proposed more than a decade ago and have since been studied extensively. IPs have been applied to diverse problems in computer vision, including image matching, recognition, registration, 3D reconstruction, change detection, and content-based image retrieval. Interest points are helpful in data reduction, and they reduce the computational burden of various algorithms (such as registration, object detection and 3D reconstruction) by replacing an exhaustive search over the entire image domain with a probe into a concise set of highly informative points. An interest operator seeks out points in an image that are structurally distinct, invariant to imaging conditions, stable under geometric transformation, and interpretable; such points are good candidates for interest points. Our approach extends ideas from Lowe's keypoint operator, which uses local extrema of the Difference-of-Gaussian (DoG) operator at multiple scales to detect interest points in grey-level images. The proposed approach extends Lowe's method by directly converting scalar operations such as scale-space generation and extreme-point detection into operations that take the vector nature of the image into consideration. Experimental results with RGB and hyperspectral images demonstrate the potential of the method for this application and the potential improvements of a fully vectorial approach over the band-by-band approaches described in the literature.
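
    A compact, assumed illustration of the DoG-extremum idea mentioned above, applied to a single grey-level image; the fully vectorial extension to multispectral and hyperspectral data described in the abstract is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints(image, sigmas=(1.0, 1.6, 2.6, 4.2), contrast=0.03):
    """Detect interest points as local extrema of Difference-of-Gaussian images
    across space and scale (a simplified, single-band version of Lowe's scheme)."""
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    dog = np.stack([b2 - b1 for b1, b2 in zip(blurred[:-1], blurred[1:])])
    is_max = dog == maximum_filter(dog, size=3)
    is_min = dog == minimum_filter(dog, size=3)
    strong = np.abs(dog) > contrast
    scale_idx, rows, cols = np.nonzero((is_max | is_min) & strong)
    return list(zip(rows, cols, scale_idx))

# Toy image: a Gaussian blob on weak noise.
rng = np.random.default_rng(5)
img = 0.05 * rng.standard_normal((128, 128))
yy, xx = np.mgrid[0:128, 0:128]
img += np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 3.0 ** 2))
print("Interest points (row, col, scale index):", dog_keypoints(img)[:5])
```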

  13. Search for high energy cosmic neutrino point sources with ANTARES

    International Nuclear Information System (INIS)

    Halladjian, G.

    2010-01-01

    The aim of this thesis is the search for high energy cosmic neutrinos emitted by point sources with the ANTARES neutrino telescope. The detection of high energy cosmic neutrinos can bring answers to important questions such as the origin of cosmic rays and the γ-ray emission processes. In the first part of the thesis, the neutrino flux emitted by galactic and extragalactic sources and the number of events which can be detected by ANTARES are estimated. This study uses the measured γ-ray spectra of known sources, taking into account the γ-ray absorption by the extragalactic background light. In the second part of the thesis, the absolute pointing of the ANTARES telescope is evaluated. Since the detector is located at a depth of 2475 m in sea water, its orientation is determined by an acoustic positioning system which relies on low- and high-frequency acoustic wave measurements between the sea surface and the bottom. The third part of the thesis is a search for neutrino point sources in the ANTARES data. The search algorithm is based on a likelihood ratio maximization method. It is used in two search strategies: 'the candidate sources list strategy' and 'the all sky search strategy'. Analysing the 2007+2008 data, no discovery is made and the world's best upper limits on neutrino fluxes from various sources in the Southern sky are established. (author)

  14. Detecting change-points in extremes

    KAUST Repository

    Dupuis, D. J.

    2015-01-01

    Even though most work on change-point estimation focuses on changes in the mean, changes in the variance or in the tail distribution can lead to more extreme events. In this paper, we develop a new method of detecting and estimating the change-points in the tail of multiple time series data. In addition, we adapt existing tail change-point detection methods to our specific problem and conduct a thorough comparison of different methods in terms of performance on the estimation of change-points and computational time. We also examine three locations on the U.S. northeast coast and demonstrate that the methods are useful for identifying changes in seasonally extreme warm temperatures.

  15. The Chandra Source Catalog: Background Determination and Source Detection

    Science.gov (United States)

    McCollough, Michael; Rots, Arnold; Primini, Francis A.; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Danny G. Gibbs, II; Grier, John D.; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory are used to generate one of the most extensive X-ray source catalogs produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  16. Chandra Source Catalog: Background Determination and Source Detection

    Science.gov (United States)

    McCollough, Michael L.; Rots, A. H.; Primini, F. A.; Evans, I. N.; Glotfelty, K. J.; Hain, R.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory will be used to generate the most extensive X-ray source catalog produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  17. Detecting corner points from digital curves

    International Nuclear Information System (INIS)

    Sarfraz, M.

    2011-01-01

    Corners in digital images give important clues for shape representation, recognition, and analysis. Since dominant information regarding shape is usually available at the corners, they provide important features for various real-life applications in disciplines like computer vision, pattern recognition and computer graphics. Corners are robust features in the sense that they provide important information regarding objects under translation, rotation and scale change. They are also important from the viewpoint of understanding human perception of objects. They play a crucial role in decomposing or describing digital curves. They are also used in scale-space theory, image representation, stereo vision, motion tracking, image matching, building mosaics and font design systems. If the corner points are identified properly, a shape can be represented in an efficient and compact way with sufficient accuracy. Corner detection schemes, based on their applications, can be broadly divided into two categories: binary (suitable for binary images) and gray-level (suitable for gray-level images). Corner detection approaches for binary images usually involve segmenting the image into regions and extracting boundaries from those regions that contain them. The techniques for gray-level images can be categorized into two classes: (a) template based and (b) gradient based. The template-based techniques utilize correlation between a sub-image and a template of a given angle. A corner point is selected by finding the maximum of the correlation output. Gradient-based techniques require computing the curvature of an edge that passes through a neighborhood in a gray-level image. Many corner detection algorithms have been proposed in the literature; they can be broadly divided into two groups: one detects corner points from grayscale images, and the other relates to boundary-based corner detection. This contribution mainly deals with techniques adopted for the latter approach.
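
    As a concrete example of the gradient-based family discussed above, here is a minimal Harris corner detector; it is a standard textbook method, not one of the specific algorithms surveyed in this contribution, and the constants are the usual default choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, sobel

def harris_corners(image, sigma=1.5, k=0.04, rel_thresh=0.01):
    """Gradient-based corner detection: build the structure tensor from image
    gradients, compute the response R = det(M) - k * trace(M)**2, and keep
    local maxima above a fraction of the strongest response."""
    img = image.astype(float)
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    r = (ixx * iyy - ixy ** 2) - k * (ixx + iyy) ** 2
    peaks = (r == maximum_filter(r, size=7)) & (r > rel_thresh * r.max())
    return np.argwhere(peaks)

# Toy image: a bright square whose four corners should be reported.
img = np.zeros((100, 100))
img[30:70, 30:70] = 1.0
print("Detected corners (row, col):\n", harris_corners(img))
```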

  18. Detecting change-points in extremes

    KAUST Repository

    Dupuis, D. J.; Sun, Ying; Wang, Huixia Judy

    2015-01-01

    Even though most work on change-point estimation focuses on changes in the mean, changes in the variance or in the tail distribution can lead to more extreme events. In this paper, we develop a new method of detecting and estimating the change

  19. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.

  20. Detecting quantum critical points using bipartite fluctuations.

    Science.gov (United States)

    Rachel, Stephan; Laflorencie, Nicolas; Song, H Francis; Le Hur, Karyn

    2012-03-16

    We show that the concept of bipartite fluctuations F provides a very efficient tool to detect quantum phase transitions in strongly correlated systems. Using state-of-the-art numerical techniques complemented with analytical arguments, we investigate paradigmatic examples for both quantum spins and bosons. As compared to the von Neumann entanglement entropy, we observe that F allows us to find quantum critical points with much better accuracy in one dimension. We further demonstrate that F can be successfully applied to the detection of quantum criticality in higher dimensions with no prior knowledge of the universality class of the transition. Promising approaches to experimentally access fluctuations are discussed for quantum antiferromagnets and cold gases.

  1. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

    It is well known that the change-point problem is an important part of statistical model analysis. Most of the existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A quantile test for a single quantile is proposed based on the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability of every location of the sequence being a change-point. The location of the change-point can be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work sensitively on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper is appealing and may be of independent interest to readers in this research area.

  2. Dim point target detection against bright background

    Science.gov (United States)

    Zhang, Yao; Zhang, Qiheng; Xu, Zhiyong; Xu, Junping

    2010-05-01

    For target detection within a large-field cluttered background at long distance, several difficulties, including low contrast between target and background, small pixel occupancy of the target, illumination non-uniformity caused by lens vignetting, and system noise, make this a challenging problem. The existing approaches to dim target detection can be roughly divided into two categories: detection before tracking (DBT) and tracking before detection (TBD). The DBT-based scheme has been widely used in practical applications due to its simplicity, but it often requires a relatively high signal-to-noise ratio (SNR). In contrast, the TBD-based methods can provide impressive detection results even in cases of very low SNR; unfortunately, the large memory requirement and high computational load prevent these methods from being used in real-time tasks. In this paper, we propose a new method for dim target detection. We address this problem by combining the advantages of the DBT-based scheme in computational efficiency and of the TBD-based scheme in detection capability. Our method first predicts the local background, and then employs energy accumulation and median filtering to remove background clutter. The dim target is finally located by double-window filtering together with an improved high-order correlation, which speeds up convergence. The proposed method is implemented on a hardware platform and performs well in outdoor experiments.
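
    A stripped-down illustration of the background-prediction and thresholding stage only (the energy accumulation, double-window filtering and high-order correlation steps of the paper are not reproduced): estimate the slowly varying local background with a median filter and flag pixels that stand well above the residual noise. All numbers are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(7)

# Bright, slowly varying background (vignetting-like falloff) plus noise and
# a single dim point target.
yy, xx = np.mgrid[0:200, 0:200]
background = 200.0 * (1.0 - 0.3 * ((yy - 100) ** 2 + (xx - 100) ** 2) / 100.0 ** 2)
frame = background + rng.normal(0.0, 2.0, background.shape)
frame[120, 80] += 20.0                    # dim point target

# Predict the local background with a median filter (insensitive to an
# isolated point target) and subtract it to isolate the target.
residual = frame - median_filter(frame, size=9)
detections = np.argwhere(residual > 5.0 * residual.std())
print("Candidate target pixels:", detections)
```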

  3. Assessment of the impact of point source pollution from the ...

    African Journals Online (AJOL)

    Assessment of the impact of point source pollution from the Keiskammahoek Sewage ... Also, significant pollution of the receiving Keiskamma River was indicated for ...

  4. Effect of point source and heterogeneity on the propagation of ...

    African Journals Online (AJOL)


    propagation of Love waves due to a point source in a homogeneous layer overlying a ... The dispersion equation of SH waves will be obtained by equating to zero the ...

  5. Nearest Neighbour Corner Points Matching Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Changlong

    2015-01-01

    Accurate detection of corners plays an important part in camera calibration. To deal with the instability and inaccuracy of present corner detection algorithms, a nearest-neighbour corner matching detection algorithm is put forward. First, it dilates the binary image of the photographed pictures, then searches for and retains the quadrilateral outlines of the image. Second, the blocks which accord with chessboard corners are grouped into a class. If a class contains too many blocks it is deleted; otherwise it is added, and the midpoint of the two vertex coordinates is taken as the rough position of the corner. At last, it precisely locates the position of the corners. The experimental results have shown that the algorithm has obvious advantages in accuracy and validity for corner detection, and it can provide a reliable basis for camera calibration in traffic accident measurement.

  6. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

    Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the l_p-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted l_1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
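
    A hedged sketch of block-wise norm minimization for a block-sparse source: minimize the sum of block-wise l2 costs subject to the observation constraint. The leadfield weighting required by the paper's conditions is omitted, cvxpy is assumed to be available, and the block layout in the example is illustrative.

      import numpy as np
      import cvxpy as cp

      def block_l1_recover(A, b, block_size):
          """Minimize the sum of block-wise l2 costs subject to A x = b, for
          contiguous blocks of equal size (generic mixed-norm recovery; the
          leadfield weighting of the paper is omitted)."""
          n = A.shape[1]
          x = cp.Variable(n)
          cost = sum(cp.norm(x[i:i + block_size], 2)
                     for i in range(0, n, block_size))
          cp.Problem(cp.Minimize(cost), [A @ x == b]).solve()
          return x.value

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          A = rng.normal(size=(8, 12))              # underdetermined "leadfield"
          x_true = np.zeros(12)
          x_true[3:6] = [1.0, -2.0, 0.5]            # a single active block (one "dipole")
          print(np.round(block_l1_recover(A, A @ x_true, 3), 2))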

  7. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    Energy Technology Data Exchange (ETDEWEB)

    Sturgeon, Richard W. [Los Alamos National Laboratory

    2012-06-27

    This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are

  8. Concept for Risk-based Prioritisation of Point Sources

    DEFF Research Database (Denmark)

    Overheu, N.D.; Troldborg, Mads; Tuxen, N.

    2010-01-01

    estimates on a local scale from all the sources, and 3D catchment-scale fate and transport modelling. It handles point sources at various knowledge levels and accounts for uncertainties. The tool estimates the impacts on the water supply in the catchment and provides an overall prioritisation of the sites...

  9. X pinch a point x-ray source

    International Nuclear Information System (INIS)

    Garg, A.B.; Rout, R.K.; Shyam, A.; Srinivasan, M.

    1993-01-01

    X-ray emission from an X pinch, a point x-ray source, has been studied using a pin-hole camera and a 30 kV, 7.2 μF capacitor bank. Wires of different materials such as W, Mo, Cu, S.S. (stainless steel) and Ti were used. The molybdenum pinch gives the most intense x-rays and stainless steel gives the least intense x-rays for the same bank energy (∼3.2 kJ). A point x-ray source of size ≤0.5 mm was observed using the pin-hole camera; the measured source size is limited by the size of the pin hole. The peak current in the load is approximately 150 kA. The point x-ray source could be useful in many fields such as microlithography and medicine, and for studying the basic physics of high-Z plasmas. (author). 4 refs., 3 figs

  10. Very Luminous X-ray Point Sources in Starburst Galaxies

    Science.gov (United States)

    Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.

    Extranuclear X-ray point sources in external galaxies with luminosities above 10^39 erg/s are quite common in elliptical, disk and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37 - 10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.

  11. An algorithm for leak point detection of underground pipelines

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin

    2004-01-01

    Leak noise is a good source for identifying the exact location of a leak point in underground water pipelines. A water leak generates broadband noise at the leak location that propagates in both directions along the pipe. However, the need for long-range detection of the leak location makes it preferable to use low-frequency acoustic waves rather than high-frequency ones. Acoustic wave propagation coupled with the surrounding boundaries, including cast iron pipes, is analyzed theoretically, and the wave velocity was confirmed by experiment. Leak locations were identified both by the acoustic emission (AE) method and by the cross-correlation method. Over short ranges, both the AE method and the cross-correlation method are effective for detecting the leak position. Detection over a long range, however, required lower-frequency accelerometers, because higher-frequency waves are attenuated very quickly as the propagation path increases. Two algorithms for the cross-correlation function are suggested, and long-range detection has been achieved on real underground water pipelines longer than 300 m.
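
    The cross-correlation localization can be sketched as follows: estimate the arrival-time difference between the two sensor signals from the peak of their cross-correlation and convert it to a position along the pipe using the known propagation speed. The AE processing and the paper's two specific correlation algorithms are not reproduced; scipy.signal.correlate and correlation_lags are assumed to be available.

      import numpy as np
      from scipy.signal import correlate, correlation_lags

      def leak_position_from_A(sig_a, sig_b, fs, sensor_distance, wave_speed):
          """Estimate the leak position (distance from sensor A) from the lag of
          the cross-correlation peak: d_A - d_B = tau * c and d_A + d_B = D."""
          xcorr = correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
          lags = correlation_lags(len(sig_a), len(sig_b), mode="full")
          tau = lags[np.argmax(xcorr)] / fs        # arrival delay of A minus B [s]
          return 0.5 * (sensor_distance + tau * wave_speed)

      if __name__ == "__main__":
          fs, c, D = 10_000.0, 1200.0, 300.0       # sample rate [Hz], wave speed [m/s], spacing [m]
          rng = np.random.default_rng(3)
          leak = rng.normal(size=60_000)           # broadband leak noise
          d_a, d_b = 100.0, 200.0                  # true distances from the leak to the sensors
          sig_a = np.roll(leak, int(d_a / c * fs)) + 0.1 * rng.normal(size=leak.size)
          sig_b = np.roll(leak, int(d_b / c * fs)) + 0.1 * rng.normal(size=leak.size)
          print("estimated leak position: %.1f m from sensor A"
                % leak_position_from_A(sig_a, sig_b, fs, D, c))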

  12. Trans-Z-source Neutral Point Clamped inverter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P. C.; Li, D.

    2012-01-01

    Transformer based Z-source (trans-Z-source) inverters are recently proposed by extending the traditional Z-source inverter with higher buck-boost capability as well as reducing the passive components at the same time. Multi-Level Z-source inverters are single-stage topological solutions used...... for buck-boost energy conversion with all the favourable advantages of multi-level switching retained. This paper presents three-level trans-Z-source Neutral Point Clamped (NPC) inverter topology, which achieves both the advantages of trans-Z-source and three-level NPC inverter configuration. With proper...... modulation scheme, the three-level trans-Z-source inverter can function with minimum of six device commutations per half carrier cycle (same as the traditional buck NPC inverter), while maintaining to produce the designed volt-sec average and inductive voltage boosting at ac output terminals. The designed...

  13. Radio identifications of IRAS point sources with b greater than 30 deg

    International Nuclear Information System (INIS)

    Condon, J.J.; Broderick, J.J.; Virginia Polytechnic Institute and State Univ., Blacksburg)

    1986-01-01

    Radio identifications of IRAS point sources, made on the basis of Green Bank 1400 MHz survey maps, show that 365 hot IR sources are not detectable radio sources, and that nearly all cool high-latitude IRAS sources are extragalactic. The fainter IR-source identifications encompass optically bright quasars, BL Lac objects, Seyfert galaxies, and elliptical galaxies. No IRAS sources could be identified with distant elliptical radio galaxies, so that although the radio and IR fluxes of most IRAS extragalactic sources are tightly correlated, complete samples of strong radio and IR sources are almost completely disjoint; no more than 1 percent of the IR sources are radio sources and less than 1 percent of the radio sources are IR sources. 35 references

  14. On the point-source approximation of earthquake dynamics

    Directory of Open Access Journals (Sweden)

    Andrea Bizzarri

    2014-06-01

    Full Text Available The focus of the present study is on the point-source approximation of a seismic source. First, we compare the synthetic motions on the free surface resulting from different analytical time evolutions of the seismic source (the Gabor signal (G), the Bouchon ramp (B), the Cotton and Campillo ramp (CC), the Yoffe function (Y) and the Liu and Archuleta function (LA)). Our numerical experiments indicate that the CC and the Y functions produce synthetics with larger oscillations and correspondingly higher frequency content. Moreover, the CC and the Y functions tend to produce higher peaks in the ground velocity (roughly by a factor of two). We have also found that the falloff at high frequencies is quite different: it roughly follows ω^-2 for the G and LA functions, it decays faster than ω^-2 for the B function, while it is slower than ω^-1 for both the CC and the Y solutions. Then we perform a comparison of seismic waves resulting from 3-D extended ruptures (both supershear and subshear) obeying different governing laws against those from a single point source having the same features. It is shown that the point-source models tend to overestimate the ground motions and that they completely miss the Mach fronts emerging from the supershear transition process. When we compare the extended-fault solutions against a multiple point-source model the agreement becomes more significant, although relevant discrepancies still persist. Our results confirm that, and more importantly quantify how, the point-source approximation is unable to adequately describe the radiation emitted during a real-world earthquake, even in the most idealized case of a planar fault with homogeneous properties embedded in a homogeneous, perfectly elastic medium.

  15. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, Muhammad Usman

    2016-08-09

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these sub-problem solution profiles localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  16. Localization of Point Sources for Poisson Equation using State Observers

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2016-01-01

    A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of these sub-problem solution profiles localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  17. Clinical Validation of Point-Source Corneal Topography in Keratoplasty

    NARCIS (Netherlands)

    Vrijling, A C L; Braaf, B.; Snellenburg, J.J.; de Lange, F.; Zaal, M.J.W.; van der Heijde, G.L.; Sicam, V.A.D.P.

    2011-01-01

    Purpose. To validate the clinical performance of point-source corneal topography (PCT) in postpenetrating keratoplasty (PKP) eyes and to compare it with conventional Placido-based topography. Methods. Corneal elevation maps of the anterior corneal surface were obtained from 20 post-PKP corneas using

  18. Identifying populations at risk from environmental contamination from point sources

    OpenAIRE

    Williams, F; Ogston, S

    2002-01-01

    Objectives: To compare methods for defining the population at risk from a point source of air pollution. A major challenge for environmental epidemiology lies in correctly identifying populations at risk from exposure to environmental pollutants. The complexity of today's environment makes it essential that the methods chosen are accurate and sensitive.

  19. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  20. Fast Change Point Detection for Electricity Market Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Berkeley, UC; Gu, William; Choi, Jaesik; Gu, Ming; Simon, Horst; Wu, Kesheng

    2013-08-25

    Electricity is a vital part of our daily life; therefore it is important to avoid irregularities such as the California Electricity Crisis of 2000 and 2001. In this work, we seek to predict anomalies using advanced machine learning algorithms. These algorithms are effective, but computationally expensive, especially if we plan to apply them on hourly electricity market data covering a number of years. To address this challenge, we significantly accelerate the computation of the Gaussian Process (GP) for time series data. In the context of a Change Point Detection (CPD) algorithm, we reduce its computational complexity from O(n^5) to O(n^2). Our efficient algorithm makes it possible to compute the Change Points using the hourly price data from the California Electricity Crisis. By comparing the detected Change Points with known events, we show that the Change Point Detection algorithm is indeed effective in detecting signals preceding major events.

  1. Change detection in polarimetric SAR data over several time points

    DEFF Research Database (Denmark)

    Conradsen, Knut; Nielsen, Allan Aasbjerg; Skriver, Henning

    2014-01-01

    A test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution is introduced. The test statistic is applied successfully to detect change in C-band EMISAR polarimetric SAR data over four time points.
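
    A hedged numpy sketch of the omnibus likelihood-ratio statistic for the equality of k complex Wishart matrices, assuming an equal number of looks for each covariance estimate; the small-sample correction factors of the published test are omitted, so the chi-square reference distribution used here is only asymptotic.

      import numpy as np
      from scipy.stats import chi2

      def omnibus_wishart_test(covariances, n_looks):
          """Likelihood-ratio (omnibus) test for equality of k complex covariance
          matrices, each estimated from n_looks looks.  Returns -2 ln Q and an
          asymptotic chi-square p-value; small-sample corrections are omitted."""
          k = len(covariances)
          p = covariances[0].shape[0]
          X = [n_looks * np.asarray(C) for C in covariances]   # Wishart-distributed sums
          logdet = lambda M: np.log(np.abs(np.linalg.det(M)))
          lnQ = n_looks * (p * k * np.log(k)
                           + sum(logdet(Xi) for Xi in X)
                           - k * logdet(sum(X)))
          dof = (k - 1) * p * p              # degrees of freedom for p x p complex matrices
          return -2.0 * lnQ, chi2.sf(-2.0 * lnQ, dof)

      if __name__ == "__main__":
          C = np.diag([1.0, 0.5, 0.2]).astype(complex)
          # third epoch differs from the others -> large statistic, small p-value
          print(omnibus_wishart_test([C, C, 1.5 * C, C], n_looks=13))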

  2. Point source identification in nonlinear advection–diffusion–reaction systems

    International Nuclear Information System (INIS)

    Mamonov, A V; Tsai, Y-H R

    2013-01-01

    We consider a problem of identification of point sources in time-dependent advection–diffusion systems with a nonlinear reaction term. The linear counterpart of the problem in question can be reduced to solving a system of nonlinear algebraic equations via the use of adjoint equations. We extend this approach by constructing an algorithm that solves the problem iteratively to account for the nonlinearity of the reaction term. We study the question of improving the quality of source identification by adding more measurements adaptively using the solution obtained previously with a smaller number of measurements. (paper)

  3. Point sources and multipoles in inverse scattering theory

    CERN Document Server

    Potthast, Roland

    2001-01-01

    Over the last twenty years, the growing availability of computing power has had an enormous impact on the classical fields of direct and inverse scattering. The study of inverse scattering, in particular, has developed rapidly with the ability to perform computational simulations of scattering processes and led to remarkable advances in a range of applications, from medical imaging and radar to remote sensing and seismic exploration. Point Sources and Multipoles in Inverse Scattering Theory provides a survey of recent developments in inverse acoustic and electromagnetic scattering theory. Focusing on methods developed over the last six years by Colton, Kirsch, and the author, this treatment uses point sources combined with several far-reaching techniques to obtain qualitative reconstruction methods. The author addresses questions of uniqueness, stability, and reconstructions for both two-and three-dimensional problems.With interest in extracting information about an object through scattered waves at an all-ti...

  4. Reduction Assessment of Agricultural Non-Point Source Pollutant Loading

    OpenAIRE

    Fu, YiCheng; Zang, Wenbin; Zhang, Jian; Wang, Hongtao; Zhang, Chunling; Shi, Wanli

    2018-01-01

    NPS (non-point source) pollution has become a key factor affecting the watershed environment. With the development of technology, the application of models to control NPS pollution has become very common practice for resource management and pollutant reduction control at the watershed scale in China. The SWAT (Soil and Water Assessment Tool) model is a semi-conceptual model, which was put forward to estimate pollutant production and its influence on water quantity and quality under different...

  5. Diffusion from a point source in an urban atmosphere

    International Nuclear Information System (INIS)

    Essa, K.S.M.; El-Otaify, M.S.

    2005-01-01

    In the present paper, a model for the diffusion of material from a point source in an urban atmosphere is presented. The plume is assumed to have a well-defined edge at which the concentration falls to zero. The vertical wind shear is estimated using a logarithmic law, employing most of the available techniques for stability categories. The concentrations estimated from the model compare favorably with the field observations of other investigators
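
    For orientation, a standard ground-reflecting Gaussian plume formula for a continuous point source, with a logarithmic wind profile supplying the transport speed, is sketched below. This is not the authors' edge-limited plume model, and the dispersion parameters in the example are illustrative assumptions.

      import numpy as np

      def log_wind(z, u_ref, z_ref, z0=0.1):
          """Logarithmic wind profile u(z) = u_ref * ln(z/z0) / ln(z_ref/z0)."""
          return u_ref * np.log(z / z0) / np.log(z_ref / z0)

      def gaussian_plume(Q, y, z, H, u, sigma_y, sigma_z):
          """Ground-reflecting Gaussian plume concentration for a continuous
          point source of strength Q [g/s] at effective height H [m]; sigma_y
          and sigma_z are the dispersion parameters at the downwind distance
          of interest (they depend on distance and stability class)."""
          lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
          vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                      + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
          return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

      if __name__ == "__main__":
          u = log_wind(z=50.0, u_ref=5.0, z_ref=10.0)      # transport speed at source height
          c = gaussian_plume(Q=10.0, y=0.0, z=1.5, H=50.0, u=u,
                             sigma_y=80.0, sigma_z=40.0)   # sigmas at ~1 km downwind (illustrative)
          print("ground-level centreline concentration: %.2e g/m^3" % c)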

  6. Is a wind turbine a point source? (L).

    Science.gov (United States)

    Makarewicz, Rufin

    2011-02-01

    Measurements show that practically all wind turbine noise is produced by the turbine blades, which are sometimes a few tens of meters long, even though a model of a point source located at hub height is commonly used. The plane of the rotating blades is the critical receiver location because the distances to the blades are the shortest there. It is shown that such a location requires a certain condition to be met. The model is valid far away from the wind turbine as well.

  7. Point Count Length and Detection of Forest Neotropical Migrant Birds

    Science.gov (United States)

    Deanna K. Dawson; David R. Smith; Chandler S. Robbins

    1995-01-01

    Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences...

  8. A search for hot post-AGB stars in the IRAS Point Source Catalog

    NARCIS (Netherlands)

    Oudmaijer, RD

    In this paper a first step is made to search for hot post-AGB stars in the IRAS Point Source Catalog. In order to find objects that evolved off the AGB a longer time ago than the post-AGB objects discussed in the literature, objects that were not detected at 12 μm by IRAS were selected. The selection

  9. Pulsewidth-modulated 2-source neutral-point-clamped inverter

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Loh, Poh Chang; Gao, Feng

    2007-01-01

    This paper presents the careful integration of a newly proposed Z-source topological concept to the basic neutral-point-clamped (NPC) inverter topology for designing a three-level inverter with both voltage-buck and voltage-boost capabilities. The designed Z-source NPC inverter uses two unique X-shaped inductance-capacitance (LC) impedance networks that are connected between two isolated dc input power sources and its inverter circuitry for boosting its AC output voltage. Through the design of an appropriate pulsewidth-modulation (PWM) algorithm, the two impedance networks can be short-circuited sequentially (without shooting through the inverter full DC link) for implementing the "nearest-three-vector" modulation principle with minimized harmonic distortion and device commutations per half carrier cycle while performing voltage boosting. With only a slight modification to the inverter PWM...

  10. Scattering and absorption of particles emitted by a point source in a cluster of point scatterers

    International Nuclear Information System (INIS)

    Liljequist, D.

    2012-01-01

    A theory for the scattering and absorption of particles isotropically emitted by a point source in a cluster of point scatterers is described and related to the theory for the scattering of an incident particle beam. The quantum mechanical probability of escape from the cluster in different directions is calculated, as well as the spatial distribution of absorption events within the cluster. A source strength renormalization procedure is required. The average quantum scattering in clusters with randomly shifting scatterer positions is compared to trajectory simulation with the aim of studying the validity of the trajectory method. Differences between the results of the quantum and trajectory methods are found primarily for wavelengths larger than the average distance between nearest-neighbour scatterers. The average quantum results include, for example, a local minimum in the number of absorption events at the location of the point source and interference patterns in the angle-dependent escape probability as well as in the distribution of absorption events. The relative error of the trajectory method is in general, though not in every case, of similar magnitude to that obtained for beam scattering.

  11. Open-Source Automated Mapping Four-Point Probe

    Directory of Open Access Journals (Sweden)

    Handy Chandra

    2017-01-01

    Full Text Available Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas.

  12. Open-Source Automated Mapping Four-Point Probe.

    Science.gov (United States)

    Chandra, Handy; Allen, Spencer W; Oberloier, Shane W; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M

    2017-01-26

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas.
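
    The conversion from a collinear four-point probe reading to sheet resistance on a thin, laterally large sample uses the standard factor π/ln 2 ≈ 4.532; a minimal sketch is given below. The OS4PP firmware, geometry corrections and mapping logic are not reproduced, and the example numbers are illustrative.

      import math

      PI_OVER_LN2 = math.pi / math.log(2)          # ≈ 4.532, thin infinite-sheet factor

      def sheet_resistance(voltage, current, correction=1.0):
          """Sheet resistance [ohm/sq] from a collinear four-point probe reading.
          `correction` accounts for finite sample size / probe position
          (1.0 for a laterally large, thin sample)."""
          return PI_OVER_LN2 * (voltage / current) * correction

      def resistivity(voltage, current, thickness_m, correction=1.0):
          """Bulk resistivity [ohm*m] when the film thickness is known and much
          smaller than the probe spacing."""
          return sheet_resistance(voltage, current, correction) * thickness_m

      if __name__ == "__main__":
          # e.g. 1.0 mA forced, 12.3 mV measured on a 150 nm ITO film (illustrative numbers)
          rs = sheet_resistance(12.3e-3, 1.0e-3)
          print("R_sheet = %.1f ohm/sq, rho = %.2e ohm*m"
                % (rs, resistivity(12.3e-3, 1.0e-3, 150e-9)))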

  13. Detecting hidden sources-STUK/HUT team

    Energy Technology Data Exchange (ETDEWEB)

    Nikkinen, M.; Aarnio, P. [Helsinki Univ. of Technology, Espoo (Finland); Honkamaa, T.; Tiilikainen, H. [Finnish Centre for Radiation and Nuclear Safety, Helsinki (Finland)

    1997-12-31

    The task of the team was to locate and identify hidden sources in a specified area in the Padasjoki Auttoinen village. The team used an AB-420 helicopter of the Finnish Frontier Guard. The team had two measuring systems: an HPGe system (relative efficiency 18%) and a 5″x5″ NaI system. The team found two sources in real time and an additional two sources after 24 h of analysis. After the locations and characteristics of the sources were announced, it turned out that altogether six sources could have been found using the measured data. The total number of sources was ten. The NaI detector was good at detecting and locating the sources, and the HPGe was most useful for identification and calculation of activity estimates. The following developments should be made: 1) larger detectors are needed, 2) the software has to be improved (this has been done after the exercise), and 3) the navigation must be based on DGPS; visual navigation easily causes gaps between the flight lines and some sources may not be detected. (au).

  14. Detecting hidden sources-STUK/HUT team

    Energy Technology Data Exchange (ETDEWEB)

    Nikkinen, M; Aarnio, P [Helsinki Univ. of Technology, Espoo (Finland); Honkamaa, T; Tiilikainen, H [Finnish Centre for Radiation and Nuclear Safety, Helsinki (Finland)

    1998-12-31

    The task of the team was to locate and identify hidden sources in a specified area in the Padasjoki Auttoinen village. The team used an AB-420 helicopter of the Finnish Frontier Guard. The team had two measuring systems: an HPGe system (relative efficiency 18%) and a 5″x5″ NaI system. The team found two sources in real time and an additional two sources after 24 h of analysis. After the locations and characteristics of the sources were announced, it turned out that altogether six sources could have been found using the measured data. The total number of sources was ten. The NaI detector was good at detecting and locating the sources, and the HPGe was most useful for identification and calculation of activity estimates. The following developments should be made: 1) larger detectors are needed, 2) the software has to be improved (this has been done after the exercise), and 3) the navigation must be based on DGPS; visual navigation easily causes gaps between the flight lines and some sources may not be detected. (au).

  15. Preparation of very small point sources for high resolution radiography

    International Nuclear Information System (INIS)

    Case, F.N.

    1976-01-01

    The need for very small point sources of high specific activity 192Ir, 169Yb, 170Tm, and 60Co in non-destructive testing has motivated the development of techniques for the fabrication of these sources. To prepare 192Ir point sources for use in the examination of tube sheet welds in LMFBR heat exchangers, 191Ir enriched to greater than 90 percent was melted in a helium-blanketed arc to form spheres as small as 0.38 mm in diameter. Methods were developed to form the roughly spherical arc product into nearly symmetrical spheres that could be used for high resolution radiography. Similar methods were used for spherical sources of 169Yb and 170Tm: the oxides were arc melted to form rough spheres, followed by grinding to precise dimensions, neutron irradiation of the spheres at a flux of 2 to 3 x 10^15 nv, and the use of enriched 168Yb to provide the maximum specific activity. Cobalt-60 with a specific activity of greater than 1100 Ci/g was prepared by processing 59Co that had been neutron irradiated to nearly complete burnup of the 59Co target to produce 60Co, 61Ni, and 62Ni. Ion exchange methods were used to separate the cobalt from the nickel. The cobalt was reduced to metal either by plating onto aluminum foil, which was then dissolved away from the cobalt plate, or by plating onto mercury to prepare an amalgam that could easily be formed into a pellet of cobalt with exclusion of the mercury. Both methods are discussed

  16. Detection of OH radicals from IRAS sources

    International Nuclear Information System (INIS)

    Lewis, B.M.; Eder, J.; Terzian, Y.

    1985-01-01

    An efficient method for detecting new OH/infrared stars is to begin with IRAS source positions, selected for appropriate infrared colours, and using radio-line observations to confirm the OH properties. The authors demonstrate the validity of this approach here, using the Arecibo 305 m radio-telescope to confirm the 1,612 MHz line observations of sources in IRAS Circulars 8 and 9; the present observations identify 21 new OH/infrared stars. The new sources have weaker 1,612 MHz fluxes, bluer (60-25) μm colours and a smaller mean separation between the principal emission peaks than previous samples. (author)

  17. Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed

    OpenAIRE

    Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji

    2004-01-01

    The water pollution can be divided into point source pollution (PSP) and non-point source pollution (NSP). Since the point source pollution has been controlled, the non-point source pollution is becoming the main pollution source. The prediction of NSP load is being increasingly important in water pollution controlling and planning in watershed. Considering the monitoring data shortage of NPS in China, a practical estimation method of non-point source pollution load --- rainfall deduction met...

  18. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo

    1997-01-01

    The author describes a method for the peak efficiency calibration of a volume source by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within ±3.8%, with one exception of about ±7.4%
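
    As an illustration of the Monte Carlo approach (not the authors' full HPGe peak-efficiency simulation), the geometric efficiency of an on-axis point source facing a circular detector can be estimated by sampling isotropic emission directions; attenuation and the intrinsic peak efficiency are ignored, and the dimensions in the example are illustrative assumptions.

      import numpy as np

      def geometric_efficiency(source_height_cm, det_radius_cm,
                               n_samples=1_000_000, seed=0):
          """Monte Carlo estimate of the fraction of isotropically emitted rays
          from an on-axis point source that hit a circular detector face of
          radius det_radius_cm located source_height_cm below the source
          (pure solid-angle estimate)."""
          rng = np.random.default_rng(seed)
          cos_t = rng.uniform(-1.0, 1.0, n_samples)        # isotropic emission directions
          down = cos_t < 0.0                               # rays heading towards the detector plane
          sin_t = np.sqrt(1.0 - cos_t[down] ** 2)
          r_at_plane = source_height_cm * sin_t / (-cos_t[down])
          return np.count_nonzero(r_at_plane <= det_radius_cm) / n_samples

      if __name__ == "__main__":
          h, R = 2.5, 3.75                                 # source height, detector radius [cm]
          print("Monte Carlo:", geometric_efficiency(h, R))
          print("analytic   :", 0.5 * (1.0 - h / np.sqrt(h * h + R * R)))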

  19. Jump point detection for real estate investment success

    Science.gov (United States)

    Hui, Eddie C. M.; Yu, Carisa K. W.; Ip, Wai-Cheung

    2010-03-01

    In the literature, studies on the real estate market have mainly concentrated on the relation between property price and some key factors. The trend of the real estate market is a major concern. It is believed that changes in trend are signified by jump points in the property price series, and identifying such jump points reveals important findings that enable policy-makers to look forward. However, not all jump points are observable from a plot of the series. This paper looks into the trend and introduces a new approach to the framework for real estate investment success. The main purpose of this paper is to detect jump points in the time series of some housing price indices and a stock price index in Hong Kong by applying wavelet analysis. The detected jump points correspond to some significant political issues and economic collapses. Moreover, the relations among properties of different classes and between stocks and properties are examined. The empirical results show that a lead-lag effect occurred between the prices of large-size property and those of small/medium-size property. However, there is no apparent relation or consistent lead, in terms of the change-point measure, between property prices and stock prices. This may be because globalization has had more impact on stock prices than on property prices.
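
    A hedged sketch of wavelet-based jump detection on a price series using PyWavelets (assumed to be installed): jumps appear as unusually large fine-scale detail coefficients. The wavelet, decomposition level and threshold used in the paper are not reproduced; the choices below are illustrative assumptions.

      import numpy as np
      import pywt

      def wavelet_jump_points(series, wavelet="haar", k_sigma=4.0):
          """Flag candidate jump points where the single-level detail
          coefficients exceed k_sigma robust standard deviations; coefficient
          indices are mapped back to the original sampling (factor of two)."""
          series = np.asarray(series, dtype=float)
          _, detail = pywt.dwt(series, wavelet)
          sigma = 1.4826 * np.median(np.abs(detail - np.median(detail)))
          return 2 * np.flatnonzero(np.abs(detail) > k_sigma * sigma)

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          index = np.cumsum(rng.normal(0.0, 0.2, 500))     # a synthetic price index
          index[300:] += 5.0                               # abrupt level shift ("jump")
          print("candidate jump locations:", wavelet_jump_points(index))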

  20. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    The Japanese coastal areas are still environmentally friendly, though there are multiple air emission sources originating from several developmental activities such as automobile industries, the operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region apart from SOx, NOx, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of the sources contributing to high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 μg/m3. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with the background concentration and one year of meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated trajectory Model (HYSPLIT) that estimates the atmospheric

  1. VEHICLE LOCALIZATION BY LIDAR POINT CORRELATION IMPROVED BY CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    A. Schlichting

    2016-06-01

    Full Text Available LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians or even construction sites could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it and every containing point as static. In the next step, we merge the data of the single measurement epochs to one reference dataset, whereby we only use static points. Further, we also use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertical aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and further to detect dynamic objects online. The localization then can be done by a point to image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a trajectory of 13 km in Hannover, Germany.

  2. Vehicle Localization by LIDAR Point Correlation Improved by Change Detection

    Science.gov (United States)

    Schlichting, A.; Brenner, C.

    2016-06-01

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians or even construction sites could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it and every containing point as static. In the next step, we merge the data of the single measurement epochs to one reference dataset, whereby we only use static points. Further, we also use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertical aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and further to detect dynamic objects online. The localization then can be done by a point to image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a trajectory of 13 km in Hannover, Germany.

  3. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.

  4. Discretized energy minimization in a wave guide with point sources

    Science.gov (United States)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  5. Laser desorption mass spectrometry for point mutation detection

    Energy Technology Data Exchange (ETDEWEB)

    Taranenko, N.I.; Chung, C.N.; Zhu, Y.F. [Oak Ridge National Lab., TN (United States)] [and others

    1996-10-01

    A point mutation can be associated with the pathogenesis of inherited or acquired diseases. Laser desorption mass spectrometry coupled with allele specific polymerase chain reaction (PCR) was first used for point mutation detection. G551D is one of several mutations of the cystic fibrosis transmembrane conductance regulator (CFTR) gene present in 1-3% of the mutant CFTR alleles in most European populations. In this work, two different approaches were pursued to detect G551D point mutation in the cystic fibrosis gene. The strategy is to amplify the desired region of DNA template by PCR using two primers that overlap one base at the site of the point mutation and which vary in size. If the two primers based on the normal sequence match the target DNA sequence, a normal PCR product will be produced. However, if the alternately sized primers that match the mutant sequence recognize the target DNA, an abnormal PCR product will be produced. Thus, the mass spectrometer can be used to identify patients that are homozygous normal, heterozygous for a mutation or homozygous abnormal at a mutation site. Another approach to identify similar mutations is the use of sequence specific restriction enzymes which respond to changes in the DNA sequence. Mass spectrometry is used to detect the length of the restriction fragments generated by digestion of a PCR generated target fragment. 21 refs., 10 figs., 2 tabs.

  6. Laser desorption mass spectrometry for point mutation detection

    Energy Technology Data Exchange (ETDEWEB)

    Taranenko, N.I.; Chung, C.N.; Zhu, Y.F. [Oak Ridge National Lab., TN (United States)] [and others

    1996-12-31

    A point mutation can be associated with the pathogenesis of inherited or acquired diseases. Laser desorption mass spectrometry coupled with allele specific polymerase chain reaction (PCR) was first used for point mutation detection. G551D is one of several mutations of the cystic fibrosis transmembrane conductance regulator (CFTR) gene present in 1-3% of the mutant CFTR alleles in most European populations. In this work, two different approaches were pursued to detect G551D point mutation in the cystic fibrosis gene. The strategy is to amplify the desired region of DNA template by PCR using two primers that overlap one base at the site of the point mutation and which vary in size. If the two primers based on the normal sequence match the target DNA sequence, a normal PCR product will be produced. However, if the alternately sized primers that match the mutant sequence recognize the target DNA, an abnormal PCR product will be produced. Thus, the mass spectrometer can be used to identify patients that are homozygous normal, heterozygous for a mutation or homozygous abnormal at a mutation site. Another approach to identify similar mutations is the use of sequence specific restriction enzymes which respond to changes in the DNA sequence. Mass spectrometry is used to detect the length of the restriction fragments by digestion of a PCR generated target fragment. 21 refs., 10 figs., 2 tabs.

  7. Super-resolution for a point source using positive refraction

    Science.gov (United States)

    Miñano, Juan C.; Benítez, Pablo; González, Juan C.; Grabovičkić, Dejan; Ahmadpanahi, Hamed

    Leonhardt demonstrated (2009) that the 2D Maxwell Fish Eye lens (MFE) can focus perfectly 2D Helmholtz waves of arbitrary frequency, i.e., it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a receptor called "perfect drain" (PD) located at the corresponding MFE image point. The PD has the property of absorbing the complete radiation without radiation or scattering and it has been claimed as necessary to obtain super-resolution (SR) in the MFE. However, a prototype using a "drain" different from the PD has shown λ/5 resolution for microwave frequencies (Ma et al, 2010). Recently, the SR properties of a device equivalent to the MFE, called the Spherical Geodesic Waveguide (SGW) (Miñano et al, 2012) have been analyzed. The reported results show resolution up to λ /3000, for the SGW loaded with the perfect drain, and up to λ /500 for the SGW without perfect drain. The perfect drain was realized as a coaxial probe loaded with properly calculated impedance. The SGW provides SR only in a narrow band of frequencies close to the resonance Schumann frequencies. Here we analyze the SGW loaded with a small "perfect drain region" (González et al, 2011). This drain is designed as a region made of a material with complex permittivity. The comparative results show that there is no significant difference in the SR properties for both perfect drain designs.

  8. Trans-Z-source and Γ-Z-source neutral-point-clamped inverters

    DEFF Research Database (Denmark)

    Wei, Mo; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    Z-source neutral-point-clamped (NPC) inverters are earlier proposed for obtaining voltage buck-boost and three-level switching simultaneously. Their performances are, however, constrained by a trade-off between their input-to-output gain and modulation ratio. This trade-off can lead to high...

  9. CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES: THE SOURCE CATALOG

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Song; Liu, Jifeng; Qiu, Yanli; Bai, Yu; Yang, Huiqin; Guo, Jincheng; Zhang, Peng, E-mail: jfliu@bao.ac.cn, E-mail: songw@bao.ac.cn [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2016-06-01

    The Chandra archival data is a valuable resource for various studies on different X-ray astronomy topics. In this paper, we utilize this wealth of information and present a uniformly processed data set, which can be used to address a wide range of scientific questions. The data analysis procedures are applied to 10,029 Advanced CCD Imaging Spectrometer observations, which produces 363,530 source detections belonging to 217,828 distinct X-ray sources. This number is twice the size of the Chandra Source Catalog (Version 1.1). The catalogs in this paper provide abundant estimates of the detected X-ray source properties, including source positions, counts, colors, fluxes, luminosities, variability statistics, etc. Cross-correlation of these objects with galaxies shows that 17,828 sources are located within the D_25 isophotes of 1110 galaxies, and 7504 sources are located between the D_25 and 2D_25 isophotes of 910 galaxies. Contamination analysis with the log N-log S relation indicates that 51.3% of objects within 2D_25 isophotes are truly relevant to galaxies, and the “net” source fraction increases to 58.9%, 67.3%, and 69.1% for sources with luminosities above 10^37, 10^38, and 10^39 erg s^-1, respectively. Among the possible scientific uses of this catalog, we discuss the possibility of studying intra-observation variability, inter-observation variability, and supersoft sources (SSSs). About 17,092 detected sources above 10 counts are classified as variable in individual observations with the Kolmogorov-Smirnov (K-S) criterion (P_K-S < 0.01). There are 99,647 sources observed more than once and 11,843 sources observed 10 times or more, offering us a wealth of data with which to explore the long-term variability. There are 1638 individual objects (∼2350 detections) classified as SSSs. As a quite interesting subclass, detailed studies on X-ray spectra and optical spectroscopic follow-up are needed to
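
    The intra-observation variability criterion quoted above (P_K-S < 0.01) amounts to testing whether photon arrival times within an exposure are consistent with a constant count rate. A minimal sketch with scipy, under the simplifying assumption that a plain list of arrival times is available, is given below.

      import numpy as np
      from scipy.stats import kstest

      def is_variable(arrival_times, t_start, t_stop, p_threshold=0.01):
          """K-S test of photon arrival times against a uniform distribution over
          the exposure; a p-value below p_threshold suggests intra-observation
          variability."""
          t = (np.asarray(arrival_times, dtype=float) - t_start) / (t_stop - t_start)
          result = kstest(t, "uniform")
          return result.pvalue < p_threshold, result.pvalue

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          steady = rng.uniform(0.0, 10_000.0, 200)                      # constant count rate
          flaring = np.concatenate([rng.uniform(0.0, 10_000.0, 150),
                                    rng.uniform(4_000.0, 4_500.0, 50)]) # plus a burst
          print("steady :", is_variable(steady, 0.0, 10_000.0))
          print("flaring:", is_variable(flaring, 0.0, 10_000.0))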

  10. A micro dew point sensor with a thermal detection principle

    Science.gov (United States)

    Kunze, M.; Merz, J.; Hummel, W.-J.; Glosch, H.; Messner, S.; Zengerle, R.

    2012-01-01

    We present a dew point temperature sensor based on the thermal detection of condensed water on a thin membrane, fabricated by silicon micromachining. The membrane (600 × 600 × ~1 µm³) is part of a silicon chip and contains a heating element as well as a thermopile for temperature measurement. By dynamically heating the membrane and simultaneously analyzing the transient increase of its temperature, it is detected whether condensed water is on the membrane or not. To cool the membrane down, a Peltier cooler is used and electronically controlled in such a way that the temperature of the membrane is constantly held at the value where condensation of water begins. This temperature is measured and output as the dew point temperature. The sensor system works over a wide range of dew point temperatures, from 1 K to 44 K below air temperature. In experimental investigations it could be shown that the deviation of the measured dew point temperatures from reference values is below ±0.2 K in an air temperature range of 22 to 70 °C. At low dew point temperatures of -20 °C (air temperature = 22 °C) the deviation increases to nearly -1 K.

  11. A micro dew point sensor with a thermal detection principle

    International Nuclear Information System (INIS)

    Kunze, M; Merz, J; Glosch, H; Messner, S; Zengerle, R; Hummel, W-J

    2012-01-01

    We present a dew point temperature sensor based on the thermal detection of condensed water on a thin membrane, fabricated by silicon micromachining. The membrane (600 × 600 × ∼1 µm³) is part of a silicon chip and contains a heating element as well as a thermopile for temperature measurement. By dynamically heating the membrane and simultaneously analyzing the transient increase of its temperature, it is detected whether condensed water is on the membrane or not. To cool the membrane down, a Peltier cooler is used and electronically controlled in such a way that the temperature of the membrane is constantly held at the value where condensation of water begins. This temperature is measured and output as the dew point temperature. The sensor system works over a wide range of dew point temperatures, from 1 K to 44 K below air temperature. In experimental investigations it could be shown that the deviation of the measured dew point temperatures from reference values is below ±0.2 K in an air temperature range of 22 to 70 °C. At low dew point temperatures of −20 °C (air temperature = 22 °C) the deviation increases to nearly −1 K

  12. Power-Law Template for IR Point Source Clustering

    Science.gov (United States)

    Addison, Graeme E.; Dunkley, Joanna; Hajian, Amir; Viero, Marco; Bond, J. Richard; Das, Sudeep; Devlin, Mark; Halpern, Mark; Hincks, Adam; Hlozek, Renee; hide

    2011-01-01

    We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545 and 857 GHz, over angular scales 100 ≲ l ≲ 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350, and 500 μm; 1000 ≲ l ≲ 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fit by a simple power law of the form C_l^clust ∝ l^-n with n = 1.25 ± 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, ν^β B(ν, T_eff), with a single emissivity index β = 2.20 ± 0.07 and effective temperature T_eff = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be α_150-220 = 3.68 ± 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in Cosmic Microwave Background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.

  13. Power-Law Template for Infrared Point-Source Clustering

    Science.gov (United States)

    Addison, Graeme E; Dunkley, Joanna; Hajian, Amir; Viero, Marco; Bond, J. Richard; Das, Sudeep; Devlin, Mark J.; Halpern, Mark; Hincks, Adam D; Hlozek, Renee; hide

    2012-01-01

    We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545, and 857 GHz, over angular scales 100 ≲ l ≲ 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350, and 500 μm; 1000 ≲ l ≲ 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fitted by a simple power law of the form C_l^clust ∝ l^-n with n = 1.25 ± 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, ν^β B(ν, T_eff), with a single emissivity index β = 2.20 ± 0.07 and effective temperature T_eff = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be α_150-220 = 3.68 ± 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in cosmic microwave background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.

  14. Some statistical problems inherent in radioactive-source detection

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1978-01-01

    Some of the statistical questions associated with problems of detecting random-point-process signals embedded in random-point-process noise are examined. An example of such a problem is that of searching for a lost radioactive source with a moving detection system. The emphasis is on theoretical questions, but some experimental and Monte Carlo results are used to test the theoretical results. Several idealized binary decision problems are treated by starting with simple, specific situations and progressing toward more general problems. This sequence of decision problems culminates in the minimum-cost-expectation rule for deciding between two Poisson processes with arbitrary intensity functions. As an example, this rule is then specialized to the detector-passing-a-point-source decision problem. Finally, Monte Carlo techniques are used to develop and test one estimation procedure: the maximum-likelihood estimation of a parameter in the intensity function of a Poisson process. For the Monte Carlo test this estimation procedure is specialized to the detector-passing-a-point-source case. Introductory material from probability theory is included so as to make the report accessible to those not especially conversant with probabilistic concepts and methods. 16 figures
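
    For constant intensities and a fixed observation time, the binary decision between the two Poisson hypotheses reduces to comparing the observed count with a likelihood-ratio threshold. The sketch below shows this special case only; the report's time-varying-intensity and minimum-cost-expectation formulation is not reproduced, and the rates in the example are illustrative assumptions.

      import math

      def count_threshold(rate_bkg, rate_src_plus_bkg, obs_time, eta=1.0):
          """Counting threshold for the likelihood-ratio test between two Poisson
          processes with constant intensities over a fixed observation time.
          Decide 'source present' when the observed count exceeds the returned
          value.  eta is the threshold on the likelihood ratio (1.0 for equal
          priors and symmetric costs)."""
          mu0 = rate_bkg * obs_time                 # expected counts, background only
          mu1 = rate_src_plus_bkg * obs_time        # expected counts, source present
          # log LR = n*ln(mu1/mu0) - (mu1 - mu0) > ln(eta)  =>  n > threshold
          return (math.log(eta) + (mu1 - mu0)) / math.log(mu1 / mu0)

      if __name__ == "__main__":
          # e.g. 20 cps background, +10 cps from the source, 2 s dwell time
          n_crit = count_threshold(20.0, 30.0, 2.0)
          print("decide 'source present' if counts > %.1f" % n_crit)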

  15. Fragmentation Point Detection of JPEG Images at DHT Using Validator

    Science.gov (United States)

    Mohamad, Kamaruddin Malik; Deris, Mustafa Mat

    File carving is an important, practical technique for data recovery in digital forensics investigation and is particularly useful when filesystem metadata is unavailable or damaged. Previous research has addressed the reassembly of JPEG files with RST markers that are fragmented within the scan area; however, fragmentation within the Define Huffman Table (DHT) segment has yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made in this paper. First, three fragmentation points within the DHT area are identified. Second, several novel validators are proposed to detect these fragmentations. Tests on manually fragmented JPEG files show that all three fragmentation points within the DHT are successfully detected using the validators.
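
    The specific validators are not reproduced in the abstract; the following is a minimal sketch of the kind of structural check such a validator can perform, based only on the standard JPEG DHT segment layout (0xFFC4 marker, two-byte length, then one or more tables, each with a class/identifier byte, 16 code-count bytes, and the corresponding symbol bytes). The example input is illustrative.

```python
def validate_dht(segment: bytes) -> bool:
    """Structural check of a JPEG Define Huffman Table (DHT) segment.
    Expects the bytes starting at the 0xFF 0xC4 marker."""
    if len(segment) < 4 or segment[0] != 0xFF or segment[1] != 0xC4:
        return False
    length = int.from_bytes(segment[2:4], "big")       # includes the 2 length bytes
    if length + 2 > len(segment):
        return False                                   # segment truncated (fragmented)
    pos, end = 4, 2 + length
    while pos < end:
        tc_th = segment[pos]                           # table class / identifier byte
        if (tc_th >> 4) > 1 or (tc_th & 0x0F) > 3:
            return False
        counts = segment[pos + 1:pos + 17]
        if len(counts) < 16:
            return False
        n_symbols = sum(counts)
        if n_symbols > 256:
            return False
        pos += 17 + n_symbols                          # next table, if any
        if pos > end:
            return False                               # symbol data runs past segment
    return pos == end

# Example: a tiny, truncated DHT header fails the check.
print(validate_dht(bytes([0xFF, 0xC4, 0x00, 0x05, 0x00])))  # -> False
```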

  16. POWER-LAW TEMPLATE FOR INFRARED POINT-SOURCE CLUSTERING

    Energy Technology Data Exchange (ETDEWEB)

    Addison, Graeme E.; Dunkley, Joanna [Sub-department of Astrophysics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Hajian, Amir; Das, Sudeep; Hincks, Adam D.; Page, Lyman A.; Staggs, Suzanne T. [Joseph Henry Laboratories of Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Viero, Marco [Department of Astronomy, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Bond, J. Richard [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON M5S 3H8 (Canada); Devlin, Mark J.; Reese, Erik D. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Halpern, Mark; Scott, Douglas [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z4 (Canada); Hlozek, Renee; Marriage, Tobias A.; Spergel, David N. [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544 (United States); Moodley, Kavilan [Astrophysics and Cosmology Research Unit, School of Mathematical Sciences, University of KwaZulu-Natal, Durban 4041 (South Africa); Wollack, Edward [NASA/Goddard Space Flight Center, Code 665, Greenbelt, MD 20771 (United States)

    2012-06-20

    We perform a combined fit to angular power spectra of unresolved infrared (IR) point sources from the Planck satellite (at 217, 353, 545, and 857 GHz, over angular scales 100 {approx}< l {approx}< 2200), the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST; 250, 350, and 500 {mu}m; 1000 {approx}< l {approx}< 9000), and from correlating BLAST and Atacama Cosmology Telescope (ACT; 148 and 218 GHz) maps. We find that the clustered power over the range of angular scales and frequencies considered is well fitted by a simple power law of the form C{sup clust}{sub l}{proportional_to}l{sup -n} with n = 1.25 {+-} 0.06. While the IR sources are understood to lie at a range of redshifts, with a variety of dust properties, we find that the frequency dependence of the clustering power can be described by the square of a modified blackbody, {nu}{sup {beta}} B({nu}, T{sub eff}), with a single emissivity index {beta} = 2.20 {+-} 0.07 and effective temperature T{sub eff} = 9.7 K. Our predictions for the clustering amplitude are consistent with existing ACT and South Pole Telescope results at around 150 and 220 GHz, as is our prediction for the effective dust spectral index, which we find to be {alpha}{sub 150-220} = 3.68 {+-} 0.07 between 150 and 220 GHz. Our constraints on the clustering shape and frequency dependence can be used to model the IR clustering as a contaminant in cosmic microwave background anisotropy measurements. The combined Planck and BLAST data also rule out a linear bias clustering model.
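
    The fitted template amounts to C_l^clust proportional to [ν^β B(ν, T_eff)]² l^(-n). The sketch below evaluates that template with the quoted best-fit parameters; the overall amplitude, pivot multipole, and reference frequency used for normalisation are arbitrary placeholders, not values from the paper.

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
KB = 1.381e-23  # Boltzmann constant [J / K]
C = 2.998e8     # speed of light [m / s]

def planck(nu_hz, T):
    """Planck blackbody spectral radiance B(nu, T)."""
    return 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (KB * T))

def clustered_power(ell, nu_ghz, n=1.25, beta=2.20, T_eff=9.7,
                    amp=1.0, ell0=3000.0, nu_ref_ghz=857.0):
    """Power-law IR clustering template: (modified blackbody)^2 frequency
    scaling times ell**(-n).  Amplitude and pivot ell0 are placeholders."""
    sed = lambda nu: nu**beta * planck(nu, T_eff)       # modified blackbody
    scale = (sed(nu_ghz * 1e9) / sed(nu_ref_ghz * 1e9)) ** 2
    return amp * scale * (ell / ell0) ** (-n)

ell = np.arange(100, 2201)
for f in (217.0, 353.0, 545.0, 857.0):
    cl = clustered_power(ell, f)
    print(f"{f:5.0f} GHz: relative C_l at l=1000 -> {cl[ell == 1000][0]:.3e}")
```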

  17. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    CERN Document Server

    Lu Cun Heng

    1999-01-01

    The calculation formulas and survey results are derived from the superposition principle of gamma rays and the geometry of a hexagonal surface source when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and from the reciprocity principle of gamma rays when a single point source replaces the unlimited surface source to calibrate the instrument in the air. Through theoretical analysis, the receiving rates of the crystal's bottom and side surfaces are calculated for the gamma rays detected by the aerial surveying instrument. A mathematical expression for the decay of gamma rays with height, following the Jinge-function regularity, is obtained. From this regularity, the air absorption coefficient for gamma rays and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measurements of the bottom-surface receiving cou...

  18. Low energy electron point source microscopy: beyond imaging

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, Andre; Goelzhaeuser, Armin [Physics of Supramolecular Systems and Surfaces, University of Bielefeld, Postfach 100131, 33501 Bielefeld (Germany)

    2010-09-01

    Low energy electron point source (LEEPS) microscopy has the capability to record in-line holograms at very high magnifications with a fairly simple set-up. After the holograms are numerically reconstructed, structural features with the size of about 2 nm can be resolved. The achievement of an even higher resolution has been predicted. However, a number of obstacles are known to impede the realization of this goal, for example the presence of electric fields around the imaged object, electrostatic charging or radiation induced processes. This topical review gives an overview of the achievements as well as the difficulties in the efforts to shift the resolution limit of LEEPS microscopy towards the atomic level. A special emphasis is laid on the high sensitivity of low energy electrons to electrical fields, which limits the structural determination of the imaged objects. On the other hand, the investigation of the electrical field around objects of known structure is very useful for other tasks and LEEPS microscopy can be extended beyond the task of imaging. The determination of the electrical resistance of individual nanowires can be achieved by a proper analysis of the corresponding LEEPS micrographs. This conductivity imaging may be a very useful application for LEEPS microscopes. (topical review)

  19. BEAMLINE-CONTROLLED STEERING OF SOURCE-POINT ANGLE AT THE ADVANCED PHOTON SOURCE

    Energy Technology Data Exchange (ETDEWEB)

    Emery, L.; Fystro, G.; Shang, H.; Smith, M.

    2017-06-25

    An EPICS-based steering software system has been implemented for beamline personnel to directly steer the angle of the synchrotron radiation sources at the Advanced Photon Source. A script running on a workstation monitors "start steering" beamline EPICS records, and effects a steering given by the value of the "angle request" EPICS record. The new system makes the steering process much faster than before, although the older steering protocols can still be used. The robustness features of the original steering remain. Feedback messages are provided to the beamlines and the accelerator operators. Underpinning this new steering protocol is the recent refinement of the global orbit feedback process whereby feedforward of dipole corrector set points and orbit set points are used to create a local steering bump in a rapid and seamless way.

  20. Growth Curve Analysis and Change-Points Detection in Extremes

    KAUST Repository

    Meng, Rui

    2016-05-15

    The thesis consists of two coherent projects. The first project presents the results of evaluating salinity tolerance in barley using growth curve analysis, where different growth trajectories are observed within barley families. The study of salinity tolerance in plants is crucial to understanding plant growth and productivity. Because fully automated smarthouses with conveyor systems allow non-destructive and high-throughput phenotyping of large numbers of plants, it is now possible to apply advanced statistical tools to analyze daily measurements and to study salinity tolerance. To compare different growth patterns of barley varieties, we use functional data analysis techniques to analyze the daily projected shoot areas. In particular, we apply the curve registration method to align all the curves from the same barley family in order to summarize the family-wise features. We also illustrate how to use statistical modeling to account for spatial variation in microclimate in smarthouses and for temporal variation across runs, which is crucial for identifying traits of the barley varieties. In our analysis, we show that the concentrations of sodium and potassium in leaves are negatively correlated, and their interactions are associated with the degree of salinity tolerance. The second project studies change-point detection methods in extremes when multiple time series data are available. Motivated by the scientific question of whether the chances of experiencing extreme weather differ between seasons of the year, we develop a change-point detection model to study changes in extremes or in the tail of a distribution. Most existing models identify seasons from multiple yearly time series assuming that a season or a change-point location remains exactly the same across years. In this work, we propose a random effect model that allows the change-point to vary from year to year, following a given distribution. Both parametric and nonparametric methods are developed

  1. Evaluation of null-point detection methods on simulation data

    Science.gov (United States)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as they are for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
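
    The abstract does not spell out the detection algorithm; a common approach for an in-plane field, sketched below, is to flag grid cells where both field components change sign and then classify each null from the eigenvalues of the local Jacobian (real eigenvalues indicate an X-type saddle, imaginary ones an O-type centre). The test field is an assumed toy configuration.

```python
import numpy as np

def find_and_classify_nulls(bx, by, dx=1.0, dy=1.0):
    """Locate 2D magnetic null points on a grid and classify them as
    X-type (real Jacobian eigenvalues) or O-type (imaginary eigenvalues)."""
    nulls = []
    ny, nx = bx.shape
    for j in range(ny - 1):
        for i in range(nx - 1):
            cx = bx[j:j+2, i:i+2]
            cy = by[j:j+2, i:i+2]
            # A null can sit in this cell only if both components change sign.
            if cx.min() < 0 < cx.max() and cy.min() < 0 < cy.max():
                # Finite-difference Jacobian of (Bx, By) at the cell centre.
                jac = np.array([
                    [(cx[:, 1] - cx[:, 0]).mean() / dx, (cx[1, :] - cx[0, :]).mean() / dy],
                    [(cy[:, 1] - cy[:, 0]).mean() / dx, (cy[1, :] - cy[0, :]).mean() / dy],
                ])
                eig = np.linalg.eigvals(jac)
                kind = "X" if np.all(np.abs(eig.imag) < 1e-12) else "O"
                nulls.append((i + 0.5, j + 0.5, kind))
    return nulls

# Illustrative field with one X-point near the origin: Bx = y, By = x.
y, x = np.mgrid[-2:2:40j, -2:2:40j]
print(find_and_classify_nulls(y, x))
```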

  2. Nutrient Losses from Non-Point Sources or from Unidentified Point Sources? Application Examples of the Smartphone Based Nitrate App.

    Science.gov (United States)

    Rozemeijer, J.; Ekkelenkamp, R.; van der Zaan, B.

    2017-12-01

    In 2016 Deltares launched the free-to-use Nitrate App, which accurately reads and interprets nitrate test strips. The app directly displays the measured concentration and gives the option to share the result. Shared results are visualised in map functionality within the app and online. Since its introduction we have seen an increasing number of Nitrate App applications. In this presentation we show some unanticipated types of application. The Nitrate App was originally intended to enable farmers to measure nitrate concentrations on their own farms. This may encourage farmers to talk to specialists about the right nutrient best management practices (BMPs) for their farm. Several groups of farmers have recently started to apply the Nitrate App and to discuss their results with each other and with the authorities. Nitrate concentration routings in catchments have proven to be another useful application. Within a day a person can generate a catchment-scale nitrate concentration map identifying nitrate loss hotspots. In several routings in agricultural catchments clear point sources were found, for example at small-scale manure processing plants. These routings proved that the Nitrate App can help water managers to target conservation practices more accurately to areas with the highest nitrate concentrations and loads. Other current applications are the screening of domestic water wells in California, the collection of extra measurements (also pH and NH4) in the National Monitoring Network for the Evaluation of the Manure Policy in the Netherlands, and several educational initiatives in cooperation with schools and universities.

  3. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.

    Science.gov (United States)

    Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui

    2018-05-01

    In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested over 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly, and it achieves promising performance.
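
    The graph step can be sketched as follows: each pixel is a node, neighbouring pixels are connected with a cost built from disparity and gray-level gradient, and Dijkstra's algorithm is run from the vanishing-point pixel so that road borders fall out as minimum-cost paths to the last row. The cost definition and toy inputs below are placeholders, not the paper's exact weighting.

```python
import heapq
import numpy as np

def dijkstra_from_vanishing_point(cost_to_enter, vp):
    """Single-source shortest paths on a 4-connected pixel grid.
    cost_to_enter[r, c] is the (placeholder) cost of stepping onto pixel (r, c),
    e.g. built from image gradient and disparity; vp is the vanishing point."""
    h, w = cost_to_enter.shape
    dist = np.full((h, w), np.inf)
    dist[vp] = 0.0
    heap = [(0.0, vp)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost_to_enter[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# Toy example: a low-cost "road border" line in an otherwise expensive image.
cost = np.ones((60, 80)) * 5.0
cost[np.arange(60), 40 + np.arange(60) // 2] = 0.1
vp = (0, 40)                            # detected vanishing point (row, col)
dist = dijkstra_from_vanishing_point(cost, vp)
print("border reaches last row at column", int(np.argmin(dist[-1])))
```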

  4. Comparing Two Approaches for Point-Like Scatterer Detection

    Directory of Open Access Journals (Sweden)

    Angela Dell’Aversano

    2015-01-01

    Many inverse scattering problems concern the detection and localisation of point-like scatterers which are sparsely enclosed within a prescribed investigation domain. Therefore, it looks like a good option to tackle the problem by applying reconstruction methods that are properly tailored for such a type of scatterers or that naturally enforce sparsity in the reconstructions. Accordingly, in this paper we compare the time reversal-MUSIC and the compressed sensing. The study develops through numerical examples and focuses on the role of noise in data and mutual coupling between the scatterers.

  5. Constructing an optimal decision tree for FAST corner point detection

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    In this paper, we consider a problem that originated in computer vision: determining an optimal testing strategy for the corner point detection problem that is part of the FAST algorithm [11,12]. The problem can be formulated as building a decision tree with the minimum average depth for a decision table with all discrete attributes. We experimentally compare the performance of an exact algorithm based on dynamic programming and several greedy algorithms that differ in the attribute selection criterion. © 2011 Springer-Verlag.
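
    For context, the decision table being optimised encodes the FAST segment test: a pixel is a corner if a sufficiently long contiguous arc of the 16 pixels on a radius-3 circle is consistently brighter or darker than the centre by a threshold. A direct, unoptimised sketch of that test follows; the decision tree studied in the paper reorders these pixel comparisons to minimise the average number of tests, which this sketch does not attempt.

```python
import numpy as np

# Offsets of the 16 pixels on the radius-3 Bresenham circle used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, threshold=20, arc_len=9):
    """Plain FAST segment test: True if at least arc_len contiguous circle pixels
    are all brighter than centre+threshold or all darker than centre-threshold."""
    centre = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (+1, -1):
        flags = [sign * (p - centre) > threshold for p in ring]
        flags = flags + flags                 # wrap around the circle
        run = 0
        for f in flags:
            run = run + 1 if f else 0
            if run >= arc_len:
                return True
    return False

# Tiny example: a bright square corner on a dark background.
img = np.zeros((20, 20), dtype=np.uint8)
img[8:, 8:] = 200
print(is_fast_corner(img, 8, 8))   # corner of the bright square -> True
```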

  6. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  7. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Science.gov (United States)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  8. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  9. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  10. Sequential Change-Point Detection via Online Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2018-02-01

    Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and by leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
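
    A heavily simplified sketch of the idea for a Gaussian mean shift: each likelihood-ratio increment uses a non-anticipating estimate of the post-change mean built only from samples seen since the statistic last restarted (a running mean standing in for the online mirror descent update), and an alarm is raised when the statistic crosses a threshold. The threshold and data below are illustrative.

```python
import numpy as np

def sequential_detect(x, threshold=10.0, sigma=1.0):
    """CUSUM-style sequential detection of a mean shift away from 0 with the
    post-change mean estimated online (non-anticipating running mean)."""
    stat, mu_hat, n = 0.0, 0.0, 0
    for t, xt in enumerate(x):
        # Log-likelihood ratio increment N(mu_hat, sigma) vs N(0, sigma),
        # using the estimate formed from *previous* samples only.
        inc = (mu_hat * xt - 0.5 * mu_hat**2) / sigma**2
        stat = max(0.0, stat + inc)
        if stat == 0.0:
            mu_hat, n = xt, 1              # restart the window at this sample
        else:
            n += 1
            mu_hat += (xt - mu_hat) / n    # update the estimator after using it
        if stat > threshold:
            return t                       # alarm time
    return None

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 200),    # pre-change samples
                       rng.normal(1.5, 1.0, 100)])   # post-change mean shift
print("alarm at sample", sequential_detect(data))
```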

  11. Impact of point source pollution on groundwater quality

    International Nuclear Information System (INIS)

    Gill, M.A.; Solehria, B.A.; Rai, N.I.

    2005-01-01

    The management of point source pollution (municipal and industrial waste water) is an important item on the Brown Agenda confronting urban planners and policy makers. Industrial concerns and households produce enormous amounts of waste water, which has to be disposed of through the municipal sewage system. Generally, municipal wastewater management is done on non-scientific lines, resulting in considerable social and economic loss and gradual degradation of natural resources. The present study highlights how poor management practices, lack of infrastructure, and a poor disposal system comprising mostly open, un-walled or partially lined drains affect the groundwater quality and render it unfit for human consumption. The Satiana Road sludge carrier at Faisalabad city, receiving effluents of about 67 textile units, 4 oil mills, 2 ice factories, 3 laundries and the domestic waste water of Peoples Colony No.1, Maqbool Road and Ghulam Rasool Nagar, was selected to derive quantitative and qualitative estimates of TDS, Na, Cl and heavy metals, namely Fe, Cu and Pb, in the waste water and their leaching around the sludge carrier. The measured leaching of TDS, Na+ and Cl- per 1000 m in the lined section was 818, 550 and 228 tons, respectively, whereas in the unlined section the annual increase of TDS, Na+ and Cl- was 2404, 1615 and 669 tons per 1000 m, respectively. For leaching of metals through the sludge carrier, Cu was at the top with 8.4 tons per annum per 1000 m, followed by Fe and Pb with 6.66 and 1.2 tons per annum per 1000 m, respectively. The concentrations of all the salts/metals studied were higher in groundwater near the sludge carrier and decreased with increasing distance. The groundwater contamination in unlined portions is greater than in lined portions, which might be due to higher seepage losses in unlined portions of the sludge carrier (4.9 % per 1000 m) as compared to relatively low seepage losses in lined portion of

  12. IceCube-Gen2 sensitivity improvement for steady neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Coenders, Stefan; Resconi, Elisa [TU Muenchen, Physik-Department, Excellence Cluster Universe, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    The observation of an astrophysical neutrino flux by high-energy events starting in IceCube strengthens the search for sources of astrophysical neutrinos. Identification of these sources requires good pointing and high statistics, mainly using muons created by charged-current muon neutrino interactions going through the IceCube detector. We report on preliminary studies of a possible high-energy extension, IceCube-Gen2. With a detection volume about six times larger, both the effective area and the reconstruction accuracy will improve with respect to IceCube. Moreover, using (in-ice) active veto techniques will significantly improve the performance for Southern hemisphere events, where possible local candidate neutrino sources are located.

  13. Indirect detection of radiation sources through direct detection of radiolysis products

    Science.gov (United States)

    Farmer, Joseph C [Tracy, CA; Fischer, Larry E [Los Gatos, CA; Felter, Thomas E [Livermore, CA

    2010-04-20

    A system for indirectly detecting a radiation source by directly detecting radiolytic products. The radiation source emits radiation and the radiation produces the radiolytic products. A fluid is positioned to receive the radiation from the radiation source. When the fluid is irradiated, radiolytic products are produced. By directly detecting the radiolytic products, the radiation source is detected.

  14. UHE γ-rays from point sources based on GRAPES-I observations

    International Nuclear Information System (INIS)

    Gupta, S.K.; Sreekantan, B.V.; Srivatsan, R.; Tonwar, S.C.

    1993-01-01

    An experiment called GRAPES I (Gamma Ray Astronomy at PeV EnergieS) was set up in 1984 at Ooty in India, using 24 scintillation counters, to detect Extensive Air Showers (EAS) produced in the atmosphere by the primary cosmic radiation. The goal of the experiment has been to search for Ultra High Energy (UHE) γ-rays (E ≥ 10^14 eV) from point sources in the sky. Here we discuss the results on the X-ray binaries CYG X-3, HER X-1 and SCO X-1 obtained with the GRAPES I experiment, which covers the period 1984-87

  15. Determination of disintegration rates of a 60Co point source and volume sources by the sum-peak method

    International Nuclear Information System (INIS)

    Kawano, Takao; Ebihara, Hiroshi

    1990-01-01

    The disintegration rates of 60Co as a point source (<2 mm in diameter on a thin plastic disc) and volume sources (10-100 mL solutions in a polyethylene bottle) are determined by the sum-peak method. The sum-peak formula gives the exact disintegration rate for the point source at different positions from the detector. However, increasing the volume of the solution results in enlarged deviations from the true disintegration rate. Extended sources must be treated as an amalgam of many point sources. (author)
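
    For reference, one commonly quoted form of the sum-peak relation for a two-photon cascade emitter such as 60Co estimates the disintegration rate from the two photopeak rates, the sum-peak rate, and the total counting rate. The abstract does not state the exact formula used in the paper, so the helper below should be read as an illustrative sketch with assumed count rates.

```python
def sum_peak_activity(n1, n2, n12, n_total):
    """Estimate the disintegration rate (s^-1) of a two-photon cascade source
    (e.g. 60Co, 1173 + 1332 keV) from photopeak rates n1 and n2, the sum-peak
    rate n12 and the total spectrum rate n_total, all background-subtracted.
    One common form of the sum-peak relation: A = n1*n2/n12 + n_total."""
    if n12 <= 0:
        raise ValueError("sum peak not observed; method not applicable")
    return n1 * n2 / n12 + n_total

# Illustrative count rates (assumed numbers, counts per second):
print(f"estimated activity: {sum_peak_activity(820.0, 730.0, 12.5, 2600.0):.0f} Bq")
```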

  16. Characteristics of infrared point sources associated with OH masers

    International Nuclear Information System (INIS)

    Mu Jimang; Esimbek, Jarken; Zhou Jianjun; Zhang Haijuan

    2010-01-01

    We collect 3249 OH maser sources from the literature published up to April 2007, and compile a new catalog of OH masers. We look for the exciting sources of these masers and their infrared properties from IRAS and MSX data, and make a statistical study. MSX sources associated with stellar 1612 MHz OH masers are located mainly above the blackbody line; this is caused by the dust absorption of stellar envelopes, especially in the MSX A band. The mid-IR sources associated with stellar OH masers are concentrated in a small region in an [A]-[D] vs. [A]-[E] diagram with a small fraction of contamination; this gives us a new criterion to search for new stellar OH masers and distinguish stellar masers from unknown types of OH masers. IR sources associated with 1612 MHz stellar OH masers show an expected result: the average flux of sources with F60 > F25 increases with increasing wavelength, while the average flux of those with F60 < F25 decreases.

  17. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    Science.gov (United States)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the variety of PSF measurement methods, the point source method is widely used because it is easy to operate and its results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
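
    The size effect can be illustrated numerically: to first order the measured lateral profile is the true PSF convolved with the source intensity profile, so larger microspheres inflate the apparent FWHM. The sketch below assumes a Gaussian PSF and uniform-disk beads; the 300 nm PSF width, bead diameters, and sampling are illustrative, not values from the paper.

```python
import numpy as np

def fwhm_of_profile(x, y):
    """Full width at half maximum of a sampled, single-peaked profile."""
    above = np.where(y >= y.max() / 2.0)[0]
    return x[above[-1]] - x[above[0]]

# Grid (nm) and an assumed Gaussian lateral PSF with 300 nm FWHM.
n, step = 512, 4.0                      # 4 nm sampling
ax = (np.arange(n) - n // 2) * step
X, Y = np.meshgrid(ax, ax)
fwhm_psf = 300.0
sigma = fwhm_psf / (2.0 * np.sqrt(2.0 * np.log(2.0)))
psf = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))

for diameter in (50.0, 200.0, 400.0):   # polystyrene bead diameters (nm)
    disk = ((X**2 + Y**2) <= (diameter / 2.0) ** 2).astype(float)
    # Measured image of the bead = PSF convolved with the bead profile (FFT).
    image = np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(disk)))
    image = np.fft.fftshift(image)
    cut = image[n // 2]                 # central line profile
    print(f"bead {diameter:5.1f} nm -> measured FWHM {fwhm_of_profile(ax, cut):6.1f} nm")
```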

  18. Tapping the zero-point energy as an energy source

    International Nuclear Information System (INIS)

    King, M.B.

    1991-01-01

    This paper reports that the hypothesis for tapping the zero-point energy (ZPE) arises by combining the theories of the ZPE with the theories of system self-organization. The vacuum polarization of atomic nuclei might allow their synchronous motion to activate a ZPE coherence. Experimentally observed plasma ion-acoustic anomalies as well as inventions utilizing cycloid ion motions may offer supporting evidence. The suggested experiment of rapidly circulating a charged plasma in a vortex ring might induce a sufficient zero-point energy interaction to manifest a gravitational anomaly. An invention utilizing abrupt E field rotation to create virtual charge exhibits excessive energy output

  19. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    Science.gov (United States)

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

    Aims: We present three independent catalogs of point-sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as a part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating the likelihood to be a real source for each peak on the maps, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using the sourceExtractorTimeline, a timeline-based point source fitter that also determines the fitting procedure with the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps, removing a Gaussian function in every position with the full width half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources in the timeline and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compare them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we select ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in

  20. Characterization of non point source pollutants and their dispersion ...

    African Journals Online (AJOL)

    EJIRO

    landing site in Uganda. N. Banadda. Agricultural and Bio-Systems Engineering Department, Makerere University, P. O. Box 7062, Kampala, Uganda. Accepted 5 January, 2011. The aim of this research is to characterize non point pollutants and their dispersion in ...

  1. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatics, and fluid flow. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having a source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.

  2. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    Science.gov (United States)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

    The Ozone Monitoring Instrument (OMI) on NASA Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from top 100 pollution sources in North America. The technique requires averaging of several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area - within 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. TROPOMI instrument on the ESA Sentinel 5 precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to once a day measurement. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking the measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulating GEOCAPE measurement requirements.

  3. Nomogram for Determining Shield Thickness for Point and Line Sources of Gamma Rays

    International Nuclear Information System (INIS)

    Joenemalm, C.; Malen, K

    1966-10-01

    A set of nomograms is given for the determination of the required shield thickness against gamma radiation. The sources handled are point and infinite line sources with shields of Pb, Fe, magnetite concrete (ρ = 3.6), ordinary concrete (ρ = 2.3) or water. The gamma energy range covered is 0.5-10 MeV. The nomograms are directly applicable for source and dose points on the surfaces of the shield. They can easily be extended to source and dose points in other positions by applying a geometrical correction. Also included are data for calculation of the source strength for the most common materials and for fission product sources

  4. Nomogram for Determining Shield Thickness for Point and Line Sources of Gamma Rays

    Energy Technology Data Exchange (ETDEWEB)

    Joenemalm, C; Malen, K

    1966-10-15

    A set of nomograms is given for the determination of the required shield thickness against gamma radiation. The sources handled are point and infinite line sources with shields of Pb, Fe, magnetite concrete (ρ = 3.6), ordinary concrete (ρ = 2.3) or water. The gamma energy range covered is 0.5-10 MeV. The nomograms are directly applicable for source and dose points on the surfaces of the shield. They can easily be extended to source and dose points in other positions by applying a geometrical correction. Also included are data for calculation of the source strength for the most common materials and for fission product sources.
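
    The nomograms effectively invert the shielded point-source relation D(t) = Γ A B(μt) exp(-μt) / r² for the thickness t. The same inversion can be done by bisection, as sketched below; the gamma constant, attenuation coefficient, and the linear stand-in for the buildup factor are illustrative placeholders rather than values from the report.

```python
import math

def dose_rate(t_cm, gamma_const, activity, r_cm, mu_cm, buildup=None):
    """Point-source dose rate behind a slab of thickness t_cm.
    gamma_const * activity / r^2 is the unshielded dose rate; the shield
    contributes B(mu*t) * exp(-mu*t).  All constants here are placeholders."""
    mu_t = mu_cm * t_cm
    b = buildup(mu_t) if buildup else 1.0
    return gamma_const * activity / r_cm**2 * b * math.exp(-mu_t)

def shield_thickness(target, gamma_const, activity, r_cm, mu_cm,
                     buildup=None, t_max=200.0, tol=1e-4):
    """Bisection for the thickness that brings the dose rate down to target."""
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dose_rate(mid, gamma_const, activity, r_cm, mu_cm, buildup) > target:
            lo = mid
        else:
            hi = mid
    return hi

# Illustrative numbers only: a 1 MeV-class point source behind lead.
mu_pb = 0.8                       # assumed linear attenuation coefficient, 1/cm
taylor = lambda mu_t: 1.0 + mu_t  # crude linear stand-in for a buildup factor
t = shield_thickness(target=1.0, gamma_const=3.5e-4, activity=3.7e10,
                     r_cm=100.0, mu_cm=mu_pb, buildup=taylor)
print(f"required shield thickness ~ {t:.1f} cm")
```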

  5. Emissions of perfluorinated alkylated substances (PFAS) from point sources--identification of relevant branches.

    Science.gov (United States)

    Clara, M; Scheffknecht, C; Scharf, S; Weiss, S; Gans, O

    2008-01-01

    Effluents of wastewater treatment plants are relevant point sources for the emission of hazardous xenobiotic substances to the aquatic environment. One group of substances which has recently entered scientific and political discussions is the perfluorinated alkylated substances (PFAS). The most studied compounds from this group are perfluorooctanoic acid (PFOA) and perfluorooctane sulphonate (PFOS), which are the most important degradation products of PFAS. These two substances are known to be persistent, bioaccumulative and toxic (PBT). In the present study, eleven PFAS were investigated in effluents of municipal wastewater treatment plants (WWTP) and in industrial wastewaters. PFOS and PFOA proved to be the dominant compounds in all sampled wastewaters. Concentrations of up to 340 ng/L of PFOS and up to 220 ng/L of PFOA were observed. Besides these two compounds, perfluorohexanoic acid (PFHxA) was also present in nearly all effluents, and maximum concentrations of up to 280 ng/L were measured. Only N-ethylperfluorooctane sulphonamide (N-EtPFOSA) and its degradation/metabolisation product perfluorooctane sulphonamide (PFOSA) were either detected below the limit of quantification or not detected at all. Beside the effluents of the municipal WWTPs, nine industrial wastewaters from six different industrial branches were also investigated. The highest emissions of PFOS were observed from the metal industry, whereas the paper industry showed the highest PFOA emissions. Several PFAS, especially perfluorononanoic acid (PFNA), perfluorodecanoic acid (PFDA), perfluorododecanoic acid (PFDoA) and PFOS, are predominantly emitted from industrial sources, with concentrations a factor of 10 higher than those observed in the municipal WWTP effluents. Perfluorodecane sulphonate (PFDS), N-Et-PFOSA and PFOSA were not detected in any of the sampled industrial point sources. (c) IWA Publishing 2008.

  6. Tokamak startup using point-source dc helicity injection.

    Science.gov (United States)

    Battaglia, D J; Bongard, M W; Fonck, R J; Redd, A J; Sontag, A C

    2009-06-05

    Startup of a 0.1 MA tokamak plasma is demonstrated on the ultralow aspect ratio Pegasus Toroidal Experiment using three localized, high-current density sources mounted near the outboard midplane. The injected open field current relaxes via helicity-conserving magnetic turbulence into a tokamaklike magnetic topology where the maximum sustained plasma current is determined by helicity balance and the requirements for magnetic relaxation.

  7. The Central Point Source in G76.9+1.0

    Indian Academy of Sciences (India)

    J. Astrophys. Astr. (2011) 32, 451–455, Indian Academy of Sciences; V. R. Marthi, J. N. Chengalur, Y. Gupta et al. ... emission has indeed been seen at 2 GHz with the Green Bank Telescope (GBT), establishing the fact that scattering is responsible for its non-detection at low radio frequencies.

  8. Point Sources of Emerging Contaminants Along the Colorado River Basin: Impact on Water Use and Reuse in the Arid Southwest

    Science.gov (United States)

    Emerging contaminants (ECs) (e.g., pharmaceuticals, illicit drugs, personal care products) have been detected in waters across the United States. The objective of this study was to evaluate point sources of ECs along the Colorado River, from the headwaters in Colorado to the Gulf...

  9. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for the face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries termed the shape increment dictionary and the local appearance dictionary are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  10. Point sources of emerging contaminants along the Colorado River Basin: Source water for the arid Southwestern United States

    Science.gov (United States)

    Jones-Lepp, Tammy L.; Sanchez, Charles; Alvarez, David A.; Wilson, Doyle C.; Taniguchi-Fu, Randi-Laurant

    2012-01-01

    Emerging contaminants (ECs) (e.g., pharmaceuticals, illicit drugs, personal care products) have been detected in waters across the United States. The objective of this study was to evaluate point sources of ECs along the Colorado River, from the headwaters in Colorado to the Gulf of California. At selected locations in the Colorado River Basin (sites in Colorado, Utah, Nevada, Arizona, and California), waste stream tributaries and receiving surface waters were sampled using either grab sampling or polar organic chemical integrative samplers (POCIS). The grab samples were extracted using solid-phase cartridge extraction (SPE), and the POCIS sorbents were transferred into empty SPEs and eluted with methanol. All extracts were prepared for, and analyzed by, liquid chromatography–electrospray-ion trap mass spectrometry (LC–ESI-ITMS). Log DOW values were calculated for all ECs in the study and compared to the empirical data collected. POCIS extracts were screened for the presence of estrogenic chemicals using the yeast estrogen screen (YES) assay. Extracts from the 2008 POCIS deployment in the Las Vegas Wash showed the second highest estrogenicity response. In the grab samples, azithromycin (an antibiotic) was detected in all but one urban waste stream, with concentrations ranging from 30 ng/L to 2800 ng/L. Concentration levels of azithromycin, methamphetamine and pseudoephedrine showed temporal variation from the Tucson WWTP. Those ECs that were detected in the main surface water channels (those that are diverted for urban use and irrigation along the Colorado River) were in the region of the limit-of-detection (e.g., 10 ng/L), but most were below detection limits.

  11. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma ray attenuation in the imaged object, based on a known μ-map distribution, is considered as well. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions; any gamma rays satisfying this angle were passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulations and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)
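
    The first-scatter kernel rests on the Klein-Nishina differential cross-section, which has a standard closed form. A small helper evaluating it per electron is sketched below; the 140 keV example energy is merely typical of SPECT and is not taken from the paper.

```python
import numpy as np

R_E = 2.8179403262e-13   # classical electron radius [cm]
ME_C2 = 0.51099895       # electron rest energy [MeV]

def klein_nishina(e_mev, theta):
    """Klein-Nishina differential cross-section dsigma/dOmega
    [cm^2 / sr / electron] for a photon of energy e_mev scattered by angle theta."""
    ratio = 1.0 / (1.0 + (e_mev / ME_C2) * (1.0 - np.cos(theta)))   # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

# Example: 140 keV photons (typical SPECT energy) scattered through 30 and 90 deg.
for deg in (30.0, 90.0):
    print(f"{deg:5.1f} deg: {klein_nishina(0.140, np.radians(deg)):.3e} cm^2/sr")
```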

  12. Strategies for lidar characterization of particulates from point and area sources

    Science.gov (United States)

    Wojcik, Michael D.; Moore, Kori D.; Martin, Randal S.; Hatfield, Jerry

    2010-10-01

    Use of ground-based remote sensing technologies such as scanning lidar systems (light detection and ranging) has gained traction in characterizing ambient aerosols due to some key advantages such as a wide area of regard (10 km²), fast response time, and high spatial resolution. Utah State University, in conjunction with the USDA-ARS, has developed a three-wavelength scanning lidar system called Aglite that has been successfully deployed to characterize particle motion, concentration, and size distribution at both point and diffuse area sources in agricultural and industrial settings. A suite of mass-based and size-distribution point sensors is used to locally calibrate the lidar. Generating meaningful particle size distribution, mass concentration, and emission rate results from lidar data depends on strategic on-site deployment of these point sensors together with successful local meteorological measurements. Deployment strategies learned from field use of this entire measurement system over five years include the characterization of local meteorology and its predictability prior to deployment, the placement of point sensors to prevent contamination and overloading, the positioning of the lidar and beam plane to avoid hard-target interference, and the usefulness of photographic and written observational data.

  13. Impacts by point and diffuse micropollutant sources on the stream water quality at catchment scale

    Science.gov (United States)

    Petersen, M. F.; Eriksson, E.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    The water quality of surface waters is threatened by multiple anthropogenic pollutants and the large variety of pollutants challenges the monitoring and assessment of the water quality. The aim of this study was to characterize and quantify both point and diffuse sources of micropollutants impacting the water quality of a stream at catchment scale. Grindsted stream in western Jutland, Denmark was used as a study site. The stream passes both urban and agricultural areas and is impacted by severe groundwater contamination in Grindsted city. Along a 12 km reach of Grindsted stream, the potential pollution sources were identified including a pharmaceutical factory site with a contaminated old drainage ditch, two waste deposits, a wastewater treatment plant, overflow structures, fish farms, industrial discharges and diffuse agricultural and urban sources. Six water samples were collected along the stream and analyzed for general water quality parameters, inorganic constituents, pesticides, sulfonamides, chlorinated solvents, BTEXs, and paracetamol and ibuprofen. The latter two groups were not detected. The general water quality showed typical conditions for a stream in western Jutland. Minor impacts by releases of organic matter and nutrients were found after the fish farms and the waste water treatment plant. Nickel was found at concentrations 5.8 - 8.8 μg/l. Nine pesticides and metabolites of both agricultural and urban use were detected along the stream; among these were the two most frequently detected and some rarely detected pesticides in Danish water courses. The concentrations were generally consistent with other findings in Danish streams and in the range 0.01 - 0.09 μg/l; except for metribuzin-diketo that showed high concentrations up to 0.74 μg/l. The groundwater contamination at the pharmaceutical factory site, the drainage ditch and the waste deposits is similar in composition containing among others sulfonamides and chlorinated solvents (including vinyl

  14. [A landscape ecological approach for urban non-point source pollution control].

    Science.gov (United States)

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem appeared with the speeding development of urbanization. The particularity of urban land use and the increase of impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local repairing practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine the landscape ecological planning with local BMPs to control the urban non-point source pollution, which needs, firstly, analyzing and evaluating the influence of landscape structure on water-bodies, pollution sources and pollutant removal processes to define the relationships between landscape spatial pattern and non-point source pollution and to decide the key polluted fields, and secondly, adjusting inherent landscape structures or/and joining new landscape factors to form new landscape pattern, and combining landscape planning and management through applying BMPs into planning to improve urban landscape heterogeneity and to control urban non-point source pollution.

  15. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    International Nuclear Information System (INIS)

    Lu Cunheng

    1999-01-01

    The calculation formulas and survey results are derived from the superposition principle of gamma rays and the geometry of a hexagonal surface source when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and from the reciprocity principle of gamma rays when a single point source replaces the unlimited surface source to calibrate the instrument in the air. Through theoretical analysis, the receiving rates of the crystal's bottom and side surfaces are calculated for the gamma rays detected by the aerial surveying instrument. A mathematical expression for the decay of gamma rays with height, following the Jinge-function regularity, is obtained. From this regularity, the air absorption coefficient for gamma rays and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measurements of the bottom-surface receiving count rate (derived from the total receiving count rate of the bottom and side surfaces). Finally, the measured values show that it is feasible to model, with this regularity, the change of the total received gamma-ray exposure rate of the bottom and side surfaces over a given range of altitude

  16. Calculation of dose for β point and sphere sources in soft tissue

    International Nuclear Information System (INIS)

    Sun Fuyin; Yuan Shuyu; Tan Jian

    1999-01-01

    Objective: To compare the distributions of dose rate calculated by three typical methods for point and sphere sources of β nuclides. Methods: The distributions of dose rate from 32P β point and sphere sources in soft tissue were calculated by the three methods published in references [1], [2] and [3], respectively, and compared. Results: For a point source of 3.7 × 10^7 Bq (1 mCi), the results of the three formulas agree within 10% if r ≤ 0.35 g/cm², r being the distance from the source, and differ by more than 10% if r > 0.35 g/cm². For a sphere source whose volume is 50 μl and activity is 3.7 × 10^7 Bq (1 mCi), the variations are within 10% if z ≤ 0.15 g/cm², z being the distance from the surface of the sphere source to a point outside the sphere. Conclusion: The dose-rate distributions calculated by the three methods agree well when the distance from the point source or the sphere surface to the observation point is small, and poorly when it is large
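
    The remark that extended sources must be treated as an amalgam of point sources translates directly into integrating a point-source dose kernel over the source volume. The sketch below does this by Monte Carlo with a deliberately simple exponential kernel standing in for the published β dose-rate functions; the kernel shape, its constants, and the sphere radius are assumptions for illustration only.

```python
import numpy as np

def point_kernel(r_gcm2, k=1.0, nu=10.0):
    """Placeholder beta point-source dose-rate kernel (arbitrary units):
    a simple exp(-nu*r)/r^2 shape standing in for the published functions."""
    r = np.maximum(r_gcm2, 1e-4)            # avoid the singularity at r = 0
    return k * np.exp(-nu * r) / r**2

def sphere_source_dose(z_gcm2, radius_gcm2, n_samples=200_000, seed=0):
    """Dose rate at distance z outside a uniform sphere source, computed by
    Monte Carlo averaging of the point kernel over the source volume."""
    rng = np.random.default_rng(seed)
    # Uniform points inside the sphere (rejection sampling).
    pts = rng.uniform(-radius_gcm2, radius_gcm2, size=(int(n_samples * 2.5), 3))
    pts = pts[np.sum(pts**2, axis=1) <= radius_gcm2**2][:n_samples]
    obs = np.array([0.0, 0.0, radius_gcm2 + z_gcm2])   # observation point
    r = np.linalg.norm(pts - obs, axis=1)
    return point_kernel(r).mean()           # mean kernel ~ dose per unit activity

for z in (0.05, 0.15, 0.35):                # g/cm^2 from the sphere surface
    print(f"z = {z:4.2f} g/cm^2 -> relative dose {sphere_source_dose(z, 0.23):.3e}")
```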

  17. Extraction of Point Source Gamma Signals from Aerial Survey Data Taken over a Las Vegas, Nevada, Residential Area

    International Nuclear Information System (INIS)

    Thane J. Hendricks

    2007-01-01

    Detection of point-source gamma signals from aerial measurements is complicated by widely varying terrestrial gamma backgrounds, since these variations frequently resemble signals from point-sources. Spectral stripping techniques have been very useful in separating man-made and natural radiation contributions which exist on Energy Research and Development Administration (ERDA) plant sites and other like facilities. However, these facilities are generally situated in desert areas or otherwise flat terrain with few man-made structures to disturb the natural background. It is of great interest to determine if the stripping technique can be successfully applied in populated areas where numerous man-made disturbances (houses, streets, yards, vehicles, etc.) exist

  18. Intrusion Detection using Open Source Tools

    OpenAIRE

    Jack TIMOFTE

    2008-01-01

    We have witnessed in the recent years that open source tools have gained popularity among all types of users, from individuals or small businesses to large organizations and enterprises. In this paper we will present three open source IDS tools: OSSEC, Prelude and SNORT.

  19. The Atacama Cosmology Telescope: Development and preliminary results of point source observations

    Science.gov (United States)

    Fisher, Ryan P.

    2009-06-01

    The Atacama Cosmology Telescope (ACT) is a six meter diameter telescope designed to measure the millimeter sky with arcminute angular resolution. The instrument is currently conducting its third season of observations from Cerro Toco in the Chilean Andes. The primary science goal of the experiment is to expand our understanding of cosmology by mapping the temperature fluctuations of the Cosmic Microwave Background (CMB) at angular scales corresponding to multipoles up to ℓ ~ 10000. The primary receiver for current ACT observations is the Millimeter Bolometer Array Camera (MBAC). The instrument is specially designed to observe simultaneously at 148 GHz, 218 GHz and 277 GHz. To accomplish this, the camera has three separate detector arrays, each containing approximately 1000 detectors. After discussing the ACT experiment in detail, a discussion of the development and testing of the cold readout electronics for the MBAC is presented. Currently, the ACT collaboration is in the process of generating maps of the microwave sky using our first and second season observations. The analysis used to generate these maps requires careful data calibration to produce maps of the arcminute scale CMB temperature fluctuations. Tests and applications of several elements of the ACT calibrations are presented in the context of the second season observations. Scientific exploration has already begun on preliminary maps made using these calibrations. The final portion of this thesis is dedicated to discussing the point sources observed by the ACT. A discussion of the techniques used for point source detection and photometry is followed by a presentation of our current measurements of point source spectral indices.

  20. Anthropogenic Methane Emissions in California's San Joaquin Valley: Characterizing Large Point Source Emitters

    Science.gov (United States)

    Hopkins, F. M.; Duren, R. M.; Miller, C. E.; Aubrey, A. D.; Falk, M.; Holland, L.; Hook, S. J.; Hulley, G. C.; Johnson, W. R.; Kuai, L.; Kuwayama, T.; Lin, J. C.; Thorpe, A. K.; Worden, J. R.; Lauvaux, T.; Jeong, S.; Fischer, M. L.

    2015-12-01

    Methane is an important atmospheric pollutant that contributes to global warming and tropospheric ozone production. Methane mitigation could reduce near term climate change and improve air quality, but is hindered by a lack of knowledge of anthropogenic methane sources. Recent work has shown that methane emissions are not evenly distributed in space, or across emission sources, suggesting that a large fraction of anthropogenic methane comes from a few "super-emitters." We studied the distribution of super-emitters in California's southern San Joaquin Valley, where elevated levels of atmospheric CH₄ have also been observed from space. Here, we define super-emitters as methane plumes that could be reliably detected (i.e., plume observed more than once in the same location) under varying wind conditions by airborne thermal infrared remote sensing. The detection limit for this technique was determined to be 4.5 kg CH₄ h⁻¹ by a controlled release experiment, corresponding to column methane enhancement at the point of emissions greater than 20% above local background levels. We surveyed a major oil production field, and an area with a high concentration of large dairies using a variety of airborne and ground-based measurements. Repeated airborne surveys (n=4) with the Hyperspectral Thermal Emission Spectrometer revealed 28 persistent methane plumes emanating from oil field infrastructure, including tanks, wells, and processing facilities. The likelihood that a given source type was a super-emitter varied from roughly 1/3 for processing facilities to 1/3000 for oil wells. Eleven persistent plumes were detected in the dairy area, and all were associated with wet manure management. The majority (11/14) of manure lagoons in the study area were super-emitters. Compared with a California methane emissions inventory for the surveyed areas, we estimate that super-emitters comprise a minimum of 9% of inventoried dairy emissions, and 13% of inventoried oil emissions in this region.

  1. Simultaneous Determination of Source Wavelet and Velocity Profile Using Impulsive Point-Source Reflections from a Layered Fluid

    National Research Council Canada - National Science Library

    Bube, K; Lailly, P; Sacks, P; Santosa, F; Symes, W. W

    1987-01-01

    .... We show that a quasi-impulsive, isotropic point source may be recovered simultaneously with the velocity profile from reflection data over a layered fluid, in linear (perturbation) approximation...

  2. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
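
    For readers unfamiliar with correlation-based centroiding, the sketch below illustrates the core idea using the square difference function (SDF) and a brute-force integer search; the paper's contribution is replacing this exhaustive search with three-step, logarithmic, cross, or orthogonal searches, none of which are reproduced here. The array sizes and the Gaussian spot model are illustrative assumptions.

```python
import numpy as np

def gaussian_spot(size, center, sigma=2.0):
    """Synthetic Gaussian reference spot on a size x size grid (center given as (x, y))."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * sigma ** 2))

def sdf_centroid(subimage, reference, search=5):
    """Locate the spot by minimizing the square difference function (SDF)
    between the sub-aperture image and a shifted reference template.
    Exhaustive search over integer shifts; fast search strategies would
    visit only a subset of these shifts to save computation."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
            sdf = np.sum((subimage - shifted) ** 2)
            if sdf < best:
                best, best_shift = sdf, (dx, dy)
    return best_shift  # displacement of the spot relative to the reference

ref = gaussian_spot(16, (8, 8))
img = gaussian_spot(16, (10, 7)) + 0.01 * np.random.default_rng(1).normal(size=(16, 16))
print(sdf_centroid(img, ref))  # expected near (2, -1)
```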

  3. Point source search techniques in ultra high energy gamma ray astronomy

    International Nuclear Information System (INIS)

    Alexandreas, D.E.; Biller, S.; Dion, G.M.; Lu, X.Q.; Yodh, G.B.; Berley, D.; Goodman, J.A.; Haines, T.J.; Hoffman, C.M.; Horch, E.; Sinnis, C.; Zhang, W.

    1993-01-01

    Searches for point astrophysical sources of ultra high energy (UHE) gamma rays are plagued by large numbers of background events from isotropic cosmic rays. Some of the methods that have been used to estimate the expected number of background events coming from the direction of a possible source are found to contain biases. Search techniques that avoid this problem are described. There is also a discussion of how to optimize the sensitivity of a search to emission from a point source. (orig.)

  4. Ionizing radiation source detection by personal TLD

    International Nuclear Information System (INIS)

    Marinkovic, O.; Mirkov, Z.

    2002-01-01

    The Laboratory for personal dosimetry monitors about 3000 workers, most of whom work in medicine. Some institutions, such as large health centres, operate several different ionizing radiation sources, so it is useful to identify which source produced an exposure, especially when a dosimeter returns a high dose. The personal dosimetry equipment is a Harshaw TLD Reader Model 6600; the dosimeters consist of two LiF TLD-100 chips assembled in bar-coded cards, which are worn in holders with one tissue-equivalent filter (to determine H(10)) and one skin-equivalent filter (to determine H(0.07)). The calibration dosimeters were irradiated in their holders by different sources: x-rays (at 80 keV and 100 keV), ⁶⁰Co, ⁹⁰Sr (at different distances from the beta source) and photon beams (at a radiotherapy accelerator at 6 MeV, 10 MeV and 18 MeV). The dose ratio of the two LiF crystals was calculated and plotted. It is therefore possible to calculate the ratio H(10)/H(0.07) for a personal TLD and to infer the source of irradiation. A calibration for determining the time of irradiation, based on glow curve deconvolution, is also available.

  5. Optical identifications of IRAS point sources: the Fornax, Hydra I and Coma clusters

    International Nuclear Information System (INIS)

    Wang, G.; Leggett, S.K.; Savage, A.

    1991-01-01

    We present optical identifications for 66 IRAS point sources in the region of the Fornax cluster of galaxies, 106 IRAS point sources in the region of the Hydra I cluster of galaxies (Abell 1060) and 59 IRAS point sources in the region of the Coma cluster of galaxies (Abell 1656). Eight other sources in Hydra I do not have optical counterparts and are very probably due to infrared cirrus. Twenty-three (35 per cent) of the Fornax sources are associated with stars and 43 (65 per cent) with galaxies; 48 (42 per cent) of the Hydra I sources are associated with stars and 58 (51 per cent) with galaxies; 18 (31 per cent) of the Coma sources are associated with stars and 41 (69 per cent) with galaxies. The stellar and infrared cirrus surface density is consistent with the galactic latitude of each field. (author)

  6. 75 FR 10438 - Effluent Limitations Guidelines and Standards for the Construction and Development Point Source...

    Science.gov (United States)

    2010-03-08

    ... Effluent Limitations Guidelines and Standards for the Construction and Development Point Source Category... technology-based Effluent Limitations Guidelines and New Source Performance Standards for the Construction...

  7. Mycotoxins: diffuse and point source contributions of natural contaminants of emerging concern to streams

    Science.gov (United States)

    Kolpin, Dana W.; Schenzel, Judith; Meyer, Michael T.; Phillips, Patrick J.; Hubbard, Laura E.; Scott, Tia-Marie; Bucheli, Thomas D.

    2014-01-01

    To determine the prevalence of mycotoxins in streams, 116 water samples from 32 streams and three wastewater treatment plant effluents were collected in 2010 providing the broadest investigation on the spatial and temporal occurrence of mycotoxins in streams conducted in the United States to date. Out of the 33 target mycotoxins measured, nine were detected at least once during this study. The detections of mycotoxins were nearly ubiquitous during this study even though the basin size spanned four orders of magnitude. At least one mycotoxin was detected in 94% of the 116 samples collected. Deoxynivalenol was the most frequently detected mycotoxin (77%), followed by nivalenol (59%), beauvericin (43%), zearalenone (26%), β-zearalenol (20%), 3-acetyl-deoxynivalenol (16%), α-zearalenol (10%), diacetoxyscirpenol (5%), and verrucarin A (1%). In addition, one or more of the three known estrogenic compounds (i.e. zearalenone, α-zearalenol, and β-zearalenol) were detected in 43% of the samples, with maximum concentrations substantially higher than observed in previous research. While concentrations were generally low (i.e. < 50 ng/L) during this study, concentrations exceeding 1000 ng/L were measured during spring snowmelt conditions in agricultural settings and in wastewater treatment plant effluent. Results of this study suggest that both diffuse (e.g. release from infected plants and manure applications from exposed livestock) and point (e.g. wastewater treatment plants and food processing plants) sources are important environmental pathways for mycotoxin transport to streams. The ecotoxicological impacts from the long-term, low-level exposures to mycotoxins alone or in combination with complex chemical mixtures are unknown

  8. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2014-12-01

    The high sensitivity and advanced onboard calibration on the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enables accurate measurements of low light radiances which leads to enhanced quantitative applications at night. The finer spatial resolution of DNB also allows users to examine social economic activities at urban scales. Given the growing interest in the use of the DNB data, there is a pressing need for better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low light calibration accuracy was previously estimated at a moderate 15% using extended sources while the long-term stability has yet to be characterized. There are also several science related questions to be answered, for example, how the Earth’s atmosphere and surface variability contribute to the stability of the DNB measured radiances; how to separate them from instrument calibration stability; whether or not SI (International System of Units) traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; furthermore, whether or not such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be

  9. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    Science.gov (United States)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  10. Human detection and motion analysis at security points

    Science.gov (United States)

    Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.

    2003-08-01

    This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.

  12. LOWERING ICECUBE'S ENERGY THRESHOLD FOR POINT SOURCE SEARCHES IN THE SOUTHERN SKY

    Energy Technology Data Exchange (ETDEWEB)

    Aartsen, M. G. [Department of Physics, University of Adelaide, Adelaide, 5005 (Australia); Abraham, K. [Physik-department, Technische Universität München, D-85748 Garching (Germany); Ackermann, M. [DESY, D-15735 Zeuthen (Germany); Adams, J. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch (New Zealand); Aguilar, J. A.; Ansseau, I. [Université Libre de Bruxelles, Science Faculty CP230, B-1050 Brussels (Belgium); Ahlers, M. [Department of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison, WI 53706 (United States); Ahrens, M. [Oskar Klein Centre and Department of Physics, Stockholm University, SE-10691 Stockholm (Sweden); Altmann, D.; Anton, G. [Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, D-91058 Erlangen (Germany); Andeen, K. [Department of Physics, Marquette University, Milwaukee, WI, 53201 (United States); Anderson, T.; Arlen, T. C. [Department of Physics, Pennsylvania State University, University Park, PA 16802 (United States); Archinger, M.; Baum, V. [Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz (Germany); Arguelles, C. [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Auffenberg, J. [III. Physikalisches Institut, RWTH Aachen University, D-52056 Aachen (Germany); Bai, X. [Physics Department, South Dakota School of Mines and Technology, Rapid City, SD 57701 (United States); Barwick, S. W. [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Bay, R., E-mail: jacob.feintzeig@gmail.com, E-mail: naoko@icecube.wisc.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States); Collaboration: IceCube Collaboration; and others

    2016-06-20

    Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interacting inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8 σ deviation from the hypothesis that the data consists only of atmospheric background.

  13. Trend analysis and change point detection of annual and seasonal ...

    Indian Academy of Sciences (India)

  14. Can Detectability Analysis Improve the Utility of Point Counts for Temperate Forest Raptors?

    Science.gov (United States)

    Temperate forest breeding raptors are poorly represented in typical point count surveys because these birds are cryptic and typically breed at low densities. In recent years, many new methods for estimating detectability during point counts have been developed, including distanc...

  15. Thermal Analysis of a Cracked Half-plane under Moving Point Heat Source

    Directory of Open Access Journals (Sweden)

    He Kuanfang

    2017-09-01

    Heat conduction in a half-plane containing an insulated crack and subjected to a moving point heat source is investigated. Analytical and numerical means are combined to analyze the transient temperature distribution of the cracked half-plane. The transient temperature distribution of the half-plane under the moving point heat source is first obtained by the moving coordinate method; the heat conduction equation with the thermal boundary condition of an insulated crack face is then reduced to a singular integral equation by applying Fourier transforms and solved numerically. Numerical examples of the temperature distribution in the cracked half-plane under a moving point heat source are presented and discussed in detail.
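
    The paper's solution couples a moving-coordinate transform with a singular integral equation, which is not reproduced here. As a simpler point of reference, the sketch below evaluates the classical Rosenthal quasi-steady solution for a moving point heat source on an uncracked semi-infinite solid; the material properties and source parameters are assumed, roughly steel-like values.

```python
import numpy as np

def rosenthal_temperature(x, y, z, t, power_w, speed_m_s,
                          k_w_mk=50.0, alpha_m2_s=1.4e-5, t0=20.0):
    """Quasi-steady temperature field of a point heat source moving along +x
    on the surface of a semi-infinite solid (Rosenthal thick-plate solution).

    xi = x - v*t is the coordinate in the frame moving with the source and
    R is the distance to the source; the material defaults are assumptions.
    """
    xi = x - speed_m_s * t
    R = np.sqrt(xi ** 2 + y ** 2 + z ** 2)
    R = np.maximum(R, 1e-6)          # avoid the singularity at the source point
    return t0 + power_w / (2.0 * np.pi * k_w_mk * R) * np.exp(
        -speed_m_s * (xi + R) / (2.0 * alpha_m2_s))

# temperature 5 mm behind and 2 mm beside a 1 kW source moving at 10 mm/s
print(rosenthal_temperature(-0.005, 0.002, 0.0, 0.0, power_w=1000.0, speed_m_s=0.01))
```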

  16. Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection

    Directory of Open Access Journals (Sweden)

    Vaishali Bane

    2014-02-01

    Tetrodotoxin (TTX) is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX, which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues, with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection.

  17. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    Science.gov (United States)

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning on non-point source pollution prevention and control, and of improving pollution regulation efficiency, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index according to the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out by utilizing high resolution remote sensing images. The results showed that the area of the nitrogen and phosphorus "source" in the Jiulongjiang Estuary was 534.42 km² in 2008, and the "sink" was 172.06 km². The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of "source" becomes weaker as the distance from the sea boundary increases, while that of "sink" becomes stronger. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Non point source pollution modelling in the watershed managed by Integrated Conctructed Wetlands: A GIS approach.

    OpenAIRE

    Vyavahare, Nilesh

    2008-01-01

    The non-point source pollution has been recognised as main cause of eutrophication in Ireland (EPA Ireland, 2001). Integrated Constructed Wetland (ICW) is a management practice adopted in Annestown stream watershed, located in the south county of Waterford in Ireland, used to cleanse farmyard runoff. Present study forms the annual pollution budget for the Annestown stream watershed. The amount of pollution from non-point sources flowing into the stream was simulated by using GIS techniques; u...

  19. Epidemiology, public health, and health surveillance around point sources of pollution

    International Nuclear Information System (INIS)

    Stebbings, J.H. Jr.

    1981-01-01

    In industrial society a large number of point sources of pollution exist, such as chemical plants, smelters, and nuclear power plants. Public concern has forced the practising epidemiologist to undertake health surveillance of the usually small populations living around point sources. Although not justifiable as research, such epidemiologic surveillance activities are becoming a routine part of public health practice, and this trend will continue. This introduction reviews concepts of epidemiologic surveillance, and institutional problems relating to the quality of such applied research

  20. A multi points ultrasonic detection method for material flow of belt conveyor

    Science.gov (United States)

    Zhang, Li; He, Rongjun

    2018-03-01

    Single-point ultrasonic ranging suffers from large detection errors when it is used for material flow detection on belt conveyors and the coal is unevenly distributed or lumpy. To address this, a material flow detection method for belt conveyors is designed based on multi-point ultrasonic ranging. The method calculates the approximate cross-sectional area of the material by locating multiple points on the surfaces of the material and the belt, and then obtains the material flow from the running speed of the belt conveyor. Test results show that the method has a smaller detection error than single-point ultrasonic ranging when large coal is unevenly distributed.
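
    A minimal sketch of the geometric idea, assuming the sensors report vertical distances at known lateral positions: subtract each reading from the empty-belt reading to get local pile heights, integrate across the belt width for the cross-sectional area, and multiply by belt speed (and an assumed bulk density) for mass flow. The sensor layout, density and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def material_flow(measured_dist_m, empty_dist_m, lateral_pos_m,
                  belt_speed_m_s, bulk_density_kg_m3=900.0):
    """Estimate mass flow on a belt conveyor from multi-point ultrasonic ranging.

    Each sensor reports the distance down to the material surface; subtracting it
    from the distance measured over the empty belt gives the local pile height.
    The cross-sectional area is a trapezoidal integral of those heights across
    the belt width, and flow = area * belt speed * bulk density (density assumed).
    """
    heights = np.clip(np.asarray(empty_dist_m) - np.asarray(measured_dist_m), 0.0, None)
    area_m2 = np.trapz(heights, lateral_pos_m)
    return area_m2 * belt_speed_m_s * bulk_density_kg_m3   # kg/s

# five sensors across a 1 m wide belt, coal piled highest in the middle
x = np.linspace(0.0, 1.0, 5)
print(material_flow([0.80, 0.65, 0.55, 0.68, 0.82], [0.85] * 5, x, belt_speed_m_s=2.5))
```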

  1. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    Science.gov (United States)

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). Measurement efficiency can be improved by obtaining the optimum point source array for each workpiece before TWI measurements. To form a point source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, an experiment testing an off-axis ellipsoidal surface proved the validity of the proposed interference system.

  2. Detection of quantum critical points by a probe qubit.

    Science.gov (United States)

    Zhang, Jingfu; Peng, Xinhua; Rajendran, Nageswaran; Suter, Dieter

    2008-03-14

    Quantum phase transitions occur when the ground state of a quantum system undergoes a qualitative change as an external control parameter reaches a critical value. Here, we demonstrate a technique for studying quantum systems undergoing a phase transition by coupling the system to a probe qubit. It directly exploits the increased sensitivity of the quantum system to perturbations when it is close to a critical point. Using an NMR quantum simulator, we demonstrate this measurement technique for two different types of quantum phase transitions in an Ising spin chain.

  3. Characteristics of a multi-keV monochromatic point x-ray source

    Indian Academy of Sciences (India)

    Temporal, spatial and spectral characteristics of a multi-keV monochromatic point x-ray source based on vacuum diode with laser-produced plasma as cathode are presented. Electrons from a laser-produced aluminium plasma were accelerated towards a conical point tip titanium anode to generate K-shell x-ray radiation.

  4. Mapping correlation of a simulated dark matter source and a point source in the gamma-ray sky - Oral Presentation

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Alexander [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-23

    In my research, I analyzed how two gamma-ray source models interact with one another when they are optimized to fit data. This matters because it becomes hard to distinguish between the two point sources when they are close together or when looking at low-energy photons. The reason for the first is obvious; the reason they become harder to distinguish at lower photon energies is that the resolving power of the Fermi Gamma-Ray Space Telescope gets worse at lower energies. When the two point sources are highly correlated (hard to distinguish), we need to change our method of statistical analysis. I showed that highly correlated sources have larger uncertainties associated with them, caused by the optimizer not knowing which point source's parameters to optimize. I also mapped out where there is high correlation for two dark matter point sources of different theoretical masses, so that future analyses know where more sophisticated statistical analysis is required.

  5. Miniature x-ray point source for alignment and calibration of x-ray optics

    International Nuclear Information System (INIS)

    Price, R.H.; Boyle, M.J.; Glaros, S.S.

    1977-01-01

    A miniature x-ray point source of high brightness similar to that of Rovinsky, et al. is described. One version of the x-ray source is used to align the x-ray optics on the Argus and Shiva laser systems. A second version is used to determine the spatial and spectral transmission functions of the x-ray optics. The spatial and spectral characteristics of the x-ray emission from the x-ray point source are described. The physical constraints including size, intensity and thermal limitations, and useful lifetime are discussed. The alignment and calibration techniques for various x-ray optics and detector combinations are described

  6. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    Science.gov (United States)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.

  7. Tagless and universal biosensor for point detection of pathogens

    Science.gov (United States)

    Markelz, Andrea G.; Knab, Joseph R.; Chen, Jing-Yin; Cerne, John; Cox, William A.

    2004-09-01

    We demonstrate the use of terahertz time domain spectroscopy for determining ligand binding of biomolecules. Vibrational modes associated with tertiary-structure conformational motions lie in the THz frequency range. We examine the THz dielectric response of hen egg white lysozyme (HEWL), free and bound with tri-N-acetyl-D-glucosamine. Transmission measurements on thin films show a small change in the real part of the refractive index as a function of binding and a sizable decrease in the absorbance. A phenomenological model is used to determine the source of the absorbance change. Because a change in the vibrational mode density of states and in the net dipole moment will necessarily accompany any biomolecule-ligand binding, THz dielectric measurements may provide a universally applicable method to determine probe-target binding for biosensor applications.

  8. Guaranteed Unresolved Point Source Emission and the Gamma-ray Background

    International Nuclear Information System (INIS)

    Pavlidou, Vasiliki; Siegal-Gaskins, Jennifer M.; Brown, Carolyn; Fields, Brian D.; Olinto, Angela V.

    2007-01-01

    The large majority of EGRET point sources remain without an identified low-energy counterpart, and a large fraction of these sources are most likely extragalactic. Whatever the nature of the extragalactic EGRET unidentified sources, faint unresolved objects of the same class must have a contribution to the diffuse extragalactic gamma-ray background (EGRB). Understanding this component of the EGRB, along with other guaranteed contributions from known sources (blazars and normal galaxies), is essential if we are to use this emission to constrain exotic high-energy physics. Here, we follow an empirical approach to estimate whether the contribution of unresolved unidentified sources to the EGRB is likely to be important. Additionally, we discuss how upcoming GLAST observations of EGRET unidentified sources, their fainter counterparts, and the Galactic and extragalactic diffuse backgrounds, will shed light on the nature of the EGRET unidentified sources even without any positional association of such sources with low-energy counterparts

  9. Autonomous detection of ISO fade point with color laser printers

    Science.gov (United States)

    Yan, Ni; Maggard, Eric; Fothergill, Roberta; Jessome, Renee J.; Allebach, Jan P.

    2015-01-01

    Image quality assessment is a very important field in image processing. Human observation is slow and subjective, and it requires a strict environment setup for the psychological test [1]. Thus, algorithms that match the desired human assessments are always needed. Many studies have focused on detecting the fading phenomenon after the materials are printed, that is, on monitoring the persistence of the color ink [2-4]. However, fading is also a common artifact produced by printing systems when the cartridges run low. We want to develop an automatic system to monitor cartridge life and report fading defects when they appear. In this paper, we first describe a psychological experiment that studies the human perception of printed fading pages. Then we propose an algorithm based on Color Space Projection and K-means clustering to predict the visibility of fading defects. Finally, we integrate the psychological experiment results with our algorithm to give a machine learning tool that monitors cartridge life.
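
    The paper's pipeline (color space projection plus K-means tuned against the psychological data) is not reproduced here; the sketch below only illustrates the general shape of such a detector: K-means separates ink pixels from paper pixels, and fading is flagged when the ink cluster's chroma or darkness drops well below that of a reference print. The 2-cluster choice, the chroma/darkness statistics and the 0.7 threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def ink_statistics(rgb_image):
    """Split pixels into 'paper' and 'ink' clusters with K-means (k=2) and return
    the mean chroma and darkness of the ink (darker) cluster."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    means = np.array([pixels[labels == k].mean(axis=0) for k in (0, 1)])
    ink = means[np.argmin(means.sum(axis=1))]     # darker cluster taken as printed ink
    chroma = ink.max() - ink.min()
    darkness = 255.0 - ink.mean()
    return chroma, darkness

def fading_visible(reference_page, test_page, tol=0.7):
    """Flag fading when the test page's ink chroma or darkness drops below
    `tol` times the reference value (threshold is an assumption)."""
    ref_c, ref_d = ink_statistics(reference_page)
    tst_c, tst_d = ink_statistics(test_page)
    return tst_c < tol * ref_c or tst_d < tol * ref_d

# synthetic pages: saturated cyan text on white vs the same page printed faded
rng = np.random.default_rng(0)
page = np.full((64, 64, 3), 255.0)
mask = rng.random((64, 64)) < 0.2
page[mask] = [0.0, 160.0, 200.0]                  # "fresh" cyan ink
faded = page.copy()
faded[mask] = [180.0, 230.0, 240.0]               # washed-out ink
print(fading_visible(page, page), fading_visible(page, faded))   # expect False True
```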

  10. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen ZHANG; Xinli MOU; Hui XIE; Hong LU; Xingyun YAN

    2014-01-01

    Through a series of explorations based on the PSR (pressure-state-response) framework, and taking into account Chongqing's specific agro-environmental issues, we build an agricultural non-point source pollution assessment index system suitable for Chongqing. Studying agricultural system pressure, agro-environmental state and human response as the three major categories, we develop an agricultural non-point source pollution evaluation index consisting of 3 criterion-level indicators and 19 indicators. The analysis shows that pressures and responses tend to change roughly linearly, while the state and composite indices fluctuate strongly and in a similar way, mainly because the offsetting of pressure and response increases the impact of agricultural non-point source pollution.

  11. Tackling non-point source water pollution in British Columbia: An action plan

    Energy Technology Data Exchange (ETDEWEB)

    1998-01-01

    Efforts to protect British Columbia water quality by regulating point discharges from municipal and industrial sources have generally been successful, and it is recognized that the major remaining cause of water pollution in the province is from non-point sources. These sources are largely unregulated and associated with urbanization, agriculture, and other forms of land development. The first part of this report reviews the provincial commitment to clean water, the effects of non-point-source (NPS) pollution, and the management of NPS in the province. Part 2 describes the main causes of NPS in British Columbia: Land development, agriculture, stormwater runoff, on-site sewage systems, forestry and range activities, atmospheric deposition, and boating/marine activities. Finally, it presents key components of the province's NPS action plan: Education and training, prevention at site, land use planning and co-ordination, assessment and reporting, economic incentives, legislation and regulation, and implementation.

  12. Extending the search for neutrino point sources with IceCube above the horizon

    Energy Technology Data Exchange (ETDEWEB)

    IceCube Collaboration; Abbasi, R.

    2009-11-20

    Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This approach improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth. We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies.

  13. Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method)

    DEFF Research Database (Denmark)

    Hansen, Susanne Brunsgaard; Berg, Rolf W.; Stenby, Erling Halfdan

    Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf

  14. Chemical point detection using differential fluorescence from molecularly imprinted polymers

    Science.gov (United States)

    Pestov, Dmitry; Anderson, John E.; Nelson, Jean; Tepper, Gary C.

    2004-12-01

    Fluorescence represents one of the most attractive approaches for chemical sensing due to the abundant light produced by most fluorophores, resulting in excellent detection sensitivity. However, the broad and overlapping emission spectra of target and background species have made it difficult to perform species identification in a field instrument because of the need to perform spectral decomposition and analysis. This paper describes a new chemical sensing strategy based on differential fluorescence measurements from molecularly imprinted polymers, which eliminates the need to perform any spectral analysis. Species identification is accomplished by measuring the differential light output from a pair of polymers-one imprinted to a target species and the other identical, but not imprinted. The imprinted polymer selectively concentrates the target molecule and controls the energy (wavelength) of the emitted fluorescence signal and the differential output eliminates common mode signals associated with non-specific background interference. Because no spectral analysis is required, the sensors can be made extremely small and require very little power. Preliminary performance parameters from a prototype sensor are presented and discussed.

  15. Agricultural non-point source pollution of glyphosate and AMPA at a catchment scale

    Science.gov (United States)

    Okada, Elena; Perez, Debora; De Geronimo, Eduardo; Aparicio, Virginia; Costa, Jose Luis

    2017-04-01

    Information on the actual input of pesticides into the environment is crucial for proper risk assessment and the design of risk reduction measures. The Crespo basin is found within the Balcarce County, located south-east of the Buenos Aires Province. The whole basin has an area of approximately 490 km² and the river has a length of 65 km. This study focuses on the upper basin of the Crespo stream, covering an area of 226 km² in which 94.7% of the land is under agricultural production, representing a highly productive area characteristic of the Austral Pampas region. In this study we evaluated the levels of glyphosate and its metabolite aminomethylphosphonic acid (AMPA) in soils, and the non-point source pollution of surface waters, stream sediments and groundwater, over a period of one year. Stream water samples were taken monthly using propylene bottles, from the center of the bridge. If present, sediment samples from the first 5 cm were collected using cylinder samplers. Groundwater samples were taken from windmills or electric pumps from different farms every two months. At the same time, composite soil samples (at 5 cm depth) were taken from an agricultural plot of each farm. Samples were analyzed for detection and quantification of glyphosate and AMPA using ultra-performance liquid chromatography coupled to a mass spectrometer (UPLC-MS/MS). The limit of detection (LD) in the soil samples was 0.5 μg kg⁻¹ and the limit of quantification (LQ) was 3 μg kg⁻¹, both for glyphosate and AMPA. In water samples the LD was 0.1 μg L⁻¹ and the LQ was 0.5 μg L⁻¹. The results showed that the herbicide dispersed into all the studied environmental compartments. Glyphosate and AMPA residues were detected in 34 and 54% of the stream water samples, respectively. Sediment samples had a higher detection frequency (>96%) than water samples, and there was no relationship between the presence in surface water and the detection in sediment samples. The presence in sediment samples

  16. Method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of gas

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, G.J.; Pritchard, F.R.

    1987-08-04

    This patent describes a method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of a gas. A gas sample is supplied to a dew-point detector and the temperature of a portion of the sample gas stream to be investigated is lowered progressively prior to detection until the dew-point is reached. The presence of condensate within the flowing gas is detected and subsequently the supply gas sample is heated to above the dew-point. The procedure of cooling and heating the gas stream continuously in a cyclical manner is repeated.

  17. OpenMC In Situ Source Convergence Detection

    Energy Technology Data Exchange (ETDEWEB)

    Aldrich, Garrett Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Univ. of California, Davis, CA (United States); Dutta, Soumya [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); The Ohio State Univ., Columbus, OH (United States); Woodring, Jonathan Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-07

    We designed and implemented an in situ version of particle source convergence for the OpenMC particle transport simulator. OpenMC is a Monte Carlo based-particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation by a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using the stochastic oscillator for identifying convergence of source particles based on their accumulated Shannon Entropy. Using our in situ convergence detection, we are able to detect and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting too optimistic parameters, or too late, by setting too conservative a parameter.
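
    A minimal sketch of the Shannon-entropy diagnostic described above, not OpenMC's actual implementation: source sites are binned on a spatial mesh, the entropy of the binned distribution is tracked batch by batch, and tallying could begin once the entropy series plateaus. The simple plateau test below stands in for the stochastic-oscillator criterion used in the paper; mesh size, window and tolerance are assumed values.

```python
import numpy as np

def shannon_entropy(source_sites, bins=(8, 8, 8), bounds=((-1, 1), (-1, 1), (-1, 1))):
    """Shannon entropy (bits) of source sites binned on a regular spatial mesh."""
    counts, _ = np.histogramdd(source_sites, bins=bins, range=bounds)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def looks_converged(entropy_history, window=20, tol=0.01):
    """Crude plateau test: the entropy of the last `window` batches stays within
    `tol` of its mean. This is only an illustration of the idea, not the
    stochastic-oscillator detector described in the abstract."""
    if len(entropy_history) < window:
        return False
    recent = np.asarray(entropy_history[-window:])
    return np.all(np.abs(recent - recent.mean()) < tol)

# fake batches drifting toward a stationary source distribution
rng = np.random.default_rng(0)
history = []
for batch in range(100):
    spread = 0.2 + 0.8 * min(1.0, batch / 30)     # distribution widens, then stabilizes
    sites = rng.uniform(-spread, spread, size=(5000, 3))
    history.append(shannon_entropy(sites))
    if looks_converged(history):
        print(f"source looks converged at batch {batch}")
        break
```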

  18. Point-of-care Ultrasound Detection of Endophthalmitis

    Directory of Open Access Journals (Sweden)

    James Tucker

    2018-01-01

    History of present illness: A 59-year-old woman presented to the emergency department (ED) with right eye pain. She had a history of cataract surgery in the right eye three months prior. The patient was seen at an outside ED eight days prior and reportedly had normal vision, normal eye pressures, with a large corneal ulcer and hypopyon in the anterior chamber. She was given subconjunctival injections of antibiotics and discharged with antibiotic drops. She was seen by a retina specialist the next day and had no evidence of endophthalmitis. On her second ED presentation, she had worsening right eye pain. Workup included normal intraocular pressures bilaterally and visual acuity with only light-perception in the affected eye. An ultrasound of her right eye was performed and is shown in figures 1 and 2. Significant findings: The patient’s ultrasound revealed an attached retina and a complex network of hyperechoic, mobile, membranous material in the posterior segment. Discussion: Endophthalmitis is a bacterial or fungal infection inside the vitreous and/or aqueous humors. The classic presentation is painful vision loss in a patient with recent ophthalmologic surgical intervention, an immunocompromised patient, or a septic patient. The specific bacteria or fungus causing the infection will vary depending on the reason for infection (post-surgical vs sepsis). Ultrasound findings typically include low amplitude mobile echoes, vitreous membranes, and thickening of the retina and choroid [1]. Treatment for endophthalmitis includes direct, intraocular antibiotic injections by an ophthalmologist; hence, disposition for these patients would include admission for ophthalmology consultation. If there is a blood source of infection rather than a direct ocular inoculation, IV antibiotics should be initiated. Patients should also receive tetanus vaccination if tetanus status is outdated. In this case, the patient was diagnosed with endophthalmitis of the right eye

  19. DETECTION OF SLOPE MOVEMENT BY COMPARING POINT CLOUDS CREATED BY SFM SOFTWARE

    Directory of Open Access Journals (Sweden)

    K. Oda

    2016-06-01

    This paper proposes a movement detection method between point clouds created by SfM software, without setting any onsite georeferenced points. SfM software, such as Smart3DCapture, PhotoScan, and Pix4D, is convenient for non-professional operators of photogrammetry, because these systems require only the specification of a sequence of photos and output point clouds with a colour index that corresponds to the colour of the original image pixel where the point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring the motion of unstable slopes, or loose rocks on slopes along roads or railroads. Most existing methods, however, use a mesh-based DSM for comparing point clouds before/after movement, which cannot be applied when part of the slope forms an overhang. In some cases the movement is also smaller than the precision of the ground control points, so registering the two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering the point clouds before/after movement. The CCICP algorithm is a type of ICP (Iterative Closest Point) that minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). A precision test shows that the CCICP method can register two point clouds to within about one pixel of the original images. Ground control points set on site are useful for the initial alignment of the two point clouds. If there are no GCPs on the slope site, the initial alignment is achieved by measuring feature points as ground control points in the point clouds before movement, and creating the point clouds after movement with these ground control points. When the motion is a rigid transformation, as when a loose rock is moving on a slope, the motion including rotation can be analysed by executing CCICP for a
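
    The CCICP algorithm itself is not specified in the abstract, so the sketch below substitutes Open3D's standard point-to-plane ICP (assuming Open3D >= 0.10 and its o3d.pipelines.registration API) followed by a simple distance threshold to flag candidate movement; the correspondence distance and movement threshold are assumed values, not the paper's.

```python
import numpy as np
import open3d as o3d   # assumption: Open3D >= 0.10

def detect_movement(before_xyz, after_xyz, corr_dist=0.2, threshold=0.10):
    """Register the 'after' cloud onto the 'before' cloud with point-to-plane ICP
    (plain ICP, not the paper's CCICP variant), then flag points whose distance
    to the reference cloud exceeds `threshold` (data units) as candidate movement."""
    before = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(before_xyz))
    after = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(after_xyz))
    # point-to-plane ICP needs normals on the target cloud
    before.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * corr_dist, max_nn=30))
    reg = o3d.pipelines.registration.registration_icp(
        after, before, corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    after.transform(reg.transformation)
    distances = np.asarray(after.compute_point_cloud_distance(before))
    return np.nonzero(distances > threshold)[0], reg.transformation

# usage (hypothetical arrays of shape (N, 3)):
# moved_idx, T = detect_movement(cloud_epoch0, cloud_epoch1)
```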

  20. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of the Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. Taking advantage of MPC's ability to easily include system constraints, the load current and impedance network ...... response are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient simulation results of the MPC are presented, which show the good reference-tracking ability of this method. It provides a new control method for the Z-source NPC inverter...

  1. Plagiarism and Source Deception Detection Based on Syntax Analysis

    Directory of Open Access Journals (Sweden)

    Eman Salih Al-Shamery

    2017-02-01

    In this research, the shingle algorithm with the Jaccard coefficient is employed as a new approach to detect source deception in addition to plagiarism. Source deception occurs when a particular text is taken from one source but attributed to another, while plagiarism occurs when part or all of a text belonging to another work is copied into a document. The approach is based on the shingle algorithm with the Jaccard coefficient: shingling is an efficient way to compare the sets of shingles in text files, which are used as a feature to measure the syntactic similarity of documents, and it works with the Jaccard coefficient, which measures the similarity between sample sets. In the proposed system, a text is checked for syntactic plagiarism and given a percentage of similarity with other documents; in addition, the cited sources of a research paper are checked for source deception by matching them against the available sources from the Turnitin report of the same paper, using the shingle algorithm with the Jaccard coefficient. The motivation of this work is to discover literary theft in research papers, especially those written by students, and to discover the deception that occurs in their sources.
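
    A minimal sketch of the shingling-plus-Jaccard similarity measure the abstract describes, assuming word-level 3-shingles and a user-chosen similarity threshold; the normalization and the threshold are illustrative choices, not the paper's.

```python
import re

def shingles(text, k=3):
    """Set of k-word shingles (here k=3) from lower-cased, punctuation-stripped text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc = "The shingle algorithm with the Jaccard coefficient detects plagiarism in documents."
src = "We detect plagiarism in documents using the shingle algorithm with the Jaccard coefficient."
similarity = jaccard(shingles(doc), shingles(src))
print(f"similarity = {similarity:.2f}")   # flag as suspicious above some threshold, e.g. 0.5
```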

  2. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    Science.gov (United States)

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-01

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant’s critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without system reliability reduction issues. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective; accurate tool for immediate warning in case of critical events. PMID:26805832

  3. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    Directory of Open Access Journals (Sweden)

    Gianfranco Manes

    2016-01-01

    Full Text Available This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant’s critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without system reliability reduction issues. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.

  4. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure.

    Science.gov (United States)

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-20

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without system reliability reduction issues. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.
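
    To make the reporting chain concrete (a node samples its gas sensor and a reading is forwarded to the remote server once a minute), the following Python sketch shows a schematic node-side loop. The endpoint URL, JSON field names and the read_gas_sensor stub are hypothetical; the actual platform uses its own gateway firmware and protocols.

        import json, time, random
        import urllib.request

        SERVER_URL = "http://example.invalid/api/readings"   # hypothetical endpoint (placeholder)

        def read_gas_sensor():
            # Stand-in for the real sensor driver; returns a gas concentration in ppm.
            return round(random.uniform(0.0, 50.0), 2)

        def report(node_id, ppm):
            # Forward one reading to the remote server as JSON (field names are assumptions).
            payload = json.dumps({"node": node_id, "ppm": ppm, "ts": time.time()}).encode()
            req = urllib.request.Request(SERVER_URL, data=payload,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status

        for _ in range(3):                  # one-minute sampling rate, as in the abstract
            try:
                report("node-01", read_gas_sensor())
            except OSError as err:          # network problems: log and keep sampling
                print("report failed:", err)
            time.sleep(60)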

  5. Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations

    Science.gov (United States)

    Mirloo, Mahsa; Ebrahimnezhad, Hosein

    2018-03-01

    In this paper, a novel method is proposed to detect 3D object salient points that are robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points from the object's protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient points, a new point is added to this set in each iteration. Each time a salient point is added, the decision function is updated; this creates a condition for selecting the next point that prevents it from being drawn from the same protrusion part, so that a representative point from every protrusion part is guaranteed. The method is stable against isometric transformations, scaling, and noise of different strengths because it uses a feature that is robust to isometric variations and considers the relations between the salient points. In addition, the number of points used in the averaging process is reduced, which leads to lower computational complexity in comparison with other salient point detection algorithms.
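
    The first step described above can be sketched as follows in Python: approximate each vertex's average geodesic distance from a handful of random vertices on the mesh graph and take the maximizer as the first salient point. The mesh is reduced here to a vertex-edge graph with Euclidean edge weights; this is an illustrative simplification, not the paper's implementation.

        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.csgraph import dijkstra

        def first_salient_point(vertices, edges, n_samples=10, seed=0):
            """vertices: (N, 3) array; edges: list of (i, j) index pairs."""
            n = len(vertices)
            w = lil_matrix((n, n))
            for i, j in edges:
                d = np.linalg.norm(vertices[i] - vertices[j])
                w[i, j] = w[j, i] = d
            rng = np.random.default_rng(seed)
            sources = rng.choice(n, size=min(n_samples, n), replace=False)
            # Geodesic (graph) distances from the sampled sources to every vertex
            dist = dijkstra(w.tocsr(), directed=False, indices=sources)
            avg_geodesic = dist.mean(axis=0)
            return int(np.argmax(avg_geodesic))   # most protruding vertex

        # Toy "mesh": a line of vertices with one side branch
        verts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [1, 2, 0]], float)
        edges = [(0, 1), (1, 2), (2, 3), (1, 4)]
        print(first_salient_point(verts, edges))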

  6. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
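
    The model-selection step can be illustrated with a generic AIC comparison between a single-source and a double-source least-squares fit (Python sketch; the misfit values and parameter counts are placeholders, and the actual W-phase inversion defines its own misfit and parameterization):

        import numpy as np

        def aic(rss, n, k):
            """Akaike information criterion for a Gaussian least-squares fit:
            n data points, k free parameters, residual sum of squares rss."""
            return n * np.log(rss / n) + 2 * k

        n = 5000                    # number of W-phase waveform samples (illustrative)
        rss_single = 3.2e-6         # misfit of the single point-source solution (placeholder)
        rss_double = 2.6e-6         # misfit of the double point-source solution (placeholder)
        k_single, k_double = 6, 12  # free parameters per model (assumed)

        aic_1 = aic(rss_single, n, k_single)
        aic_2 = aic(rss_double, n, k_double)
        best = "double" if aic_2 < aic_1 else "single"
        print(f"AIC single = {aic_1:.1f}, AIC double = {aic_2:.1f} -> prefer {best} source model")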

  7. Spatiotemporal patterns of non-point source nitrogen loss in an agricultural catchment

    Directory of Open Access Journals (Sweden)

    Jian-feng Xu

    2016-04-01

    Full Text Available Non-point source nitrogen loss poses a risk to sustainable aquatic ecosystems. However, non-point sources, as well as impaired river segments with high nitrogen concentrations, are difficult to monitor and regulate because of their diffusive nature, budget constraints, and resource deficiencies. For the purpose of catchment management, the Bayesian maximum entropy approach and spatial regression models have been used to explore the spatiotemporal patterns of non-point source nitrogen loss. In this study, a total of 18 sampling sites were selected along the river network in the Hujiashan Catchment. Over the time period of 2008–2012, water samples were collected 116 times at each site and analyzed for non-point source nitrogen loss. The morphometric variables and soil drainage of different land cover types were studied and considered potential factors affecting nitrogen loss. The results revealed that, compared with the approach using the Euclidean distance, the Bayesian maximum entropy approach using the river distance led to an appreciable 10.1% reduction in the estimation error, and more than 53.3% and 44.7% of the river network in the dry and wet seasons, respectively, had a probability of non-point source nitrogen impairment. The proportion of the impaired river segments exhibited an overall decreasing trend in the study catchment from 2008 to 2012, and the reduction in the wet seasons was greater than that in the dry seasons. High nitrogen concentrations were primarily found in the downstream reaches and river segments close to the residential lands. Croplands and residential lands were the dominant factors affecting non-point source nitrogen loss, and explained up to 70.7% of total nitrogen in the dry seasons and 54.7% in the wet seasons. A thorough understanding of the location of impaired river segments and the dominant factors affecting total nitrogen concentration would have considerable importance for catchment management.
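
    The contrast between river (along-network) distance and Euclidean distance, which drives the reported 10.1% reduction in estimation error, can be illustrated with a toy network of sampling sites; the coordinates and connectivity in this Python sketch are invented for illustration.

        import math
        import networkx as nx

        # Hypothetical sampling sites (x, y in km) and river reaches connecting them
        sites = {"S1": (0.0, 0.0), "S2": (4.0, 0.5), "S3": (2.0, 3.0)}
        reaches = [("S1", "S2"), ("S2", "S3")]   # S1 and S3 are only connected via S2

        def euclid(a, b):
            (x1, y1), (x2, y2) = sites[a], sites[b]
            return math.hypot(x2 - x1, y2 - y1)

        g = nx.Graph()
        for a, b in reaches:
            g.add_edge(a, b, weight=euclid(a, b))

        d_river = nx.shortest_path_length(g, "S1", "S3", weight="weight")
        d_euclid = euclid("S1", "S3")
        print(f"river distance S1-S3: {d_river:.2f} km, Euclidean: {d_euclid:.2f} km")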

  8. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    Science.gov (United States)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-03-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached with parametric fitting routines that use separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey (SDSS) r-band images with artificial AGN point sources added, which are then removed using the GAN and, for comparison, with parametric fitting using GALFIT. When the AGN point source (PS) is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover PS and host galaxy magnitudes with smaller systematic error and a lower average scatter (49%). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ±50% if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easier to use than parametric methods, as it requires no input parameters.

  9. Applicability of a desiccant dew-point cooling system independent of external water sources

    DEFF Research Database (Denmark)

    Bellemo, Lorenzo; Elmegaard, Brian; Kærn, Martin Ryhl

    2015-01-01

    The applicability of a technical solution for making desiccant cooling systems independent of external water sources is investigated. Water is produced by condensing the desorbed water vapour in a closed regeneration circuit. Desorbed water recovery is applied to a desiccant dew-point cooling...... system, which includes a desiccant wheel and a dew point cooler. The system is simulated during the summer period in the Mediterranean climate of Rome and proves to be completely independent of external water sources. The seasonal thermal COP drops 8% in comparison to the open regeneration circuit solution...

  10. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    Science.gov (United States)

    Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.

    2002-01-01

    The hard-X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10⁷ K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point source analyses from comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible options for the numerous compact sources that are observed.

  11. Adaptive Ridge Point Refinement for Seeds Detection in X-Ray Coronary Angiogram

    Directory of Open Access Journals (Sweden)

    Ruoxiu Xiao

    2015-01-01

    Full Text Available A seed point is a prerequisite for tracking-based methods that extract centerlines or vascular structures from an angiogram. In this paper, a novel seed point detection method for coronary artery segmentation is proposed. Vessels in the image are first enhanced according to the distribution of the Hessian eigenvalues in multiscale space; consequently, the centerlines of tubular vessels are also enhanced. Ridge points are extracted as candidate seed points and then refined according to their mathematical definition. The theoretical feasibility of this method is also proven. Finally, all the detected ridge points are checked using a self-adaptive threshold to improve the robustness of the results. Clinical angiograms are used to evaluate the performance of the proposed algorithm, and the results show that it can detect a large set of true seed points located on most branches of the vessels. Compared with traditional seed point detection algorithms, the proposed method detects a larger number of seed points with higher precision. Considering that the proposed method achieves accurate seed detection without any human interaction, it can be utilized for several clinical applications, such as vessel segmentation, centerline extraction, and topological identification.
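
    A schematic Python version of the Hessian-eigenvalue vessel enhancement and ridge-candidate extraction described above, using Gaussian derivatives at a single scale (the multiscale combination, the mathematical ridge refinement and the self-adaptive threshold of the paper are simplified away; the scale and threshold here are assumptions):

        import numpy as np
        from scipy import ndimage

        def hessian_eigvals(image, sigma=2.0):
            """Eigenvalues of the Gaussian-smoothed Hessian at every pixel."""
            Hxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
            Hyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
            Hxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
            tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
            l1 = 0.5 * (Hxx + Hyy + tmp)   # l1 >= l2 everywhere
            l2 = 0.5 * (Hxx + Hyy - tmp)
            return l1, l2

        def ridge_candidates(image, sigma=2.0):
            """Bright tubular structures on a dark background have one strongly
            negative Hessian eigenvalue; use its magnitude as a vesselness proxy."""
            l1, l2 = hessian_eigvals(image, sigma)
            lam = np.minimum(l1, l2)                         # most negative eigenvalue
            vesselness = np.where(lam < 0, -lam, 0.0)
            thr = vesselness.mean() + 2 * vesselness.std()   # simple global threshold
            return np.argwhere(vesselness > thr)             # candidate seed pixels (row, col)

        # Toy angiogram: a dark image with one bright horizontal "vessel"
        img = np.zeros((64, 64))
        img[30:33, 10:54] = 1.0
        print(len(ridge_candidates(img)), "candidate seed points")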

  12. Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States

    Science.gov (United States)

    Puckett, Larry J.

    1994-01-01

    Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.

  13. The Unicellular State as a Point Source in a Quantum Biological System

    Directory of Open Access Journals (Sweden)

    John S. Torday

    2016-05-01

    Full Text Available A point source is the central and most important point or place for any group of cohering phenomena. Evolutionary development presumes that biological processes are sequentially linked, but neither directed from, nor centralized within, any specific biologic structure or stage. However, such an epigenomic entity exists and its transforming effects can be understood through the obligatory recapitulation of all eukaryotic lifeforms through a zygotic unicellular phase. This requisite biological conjunction can now be properly assessed as the focal point of reconciliation between biology and quantum phenomena, illustrated by deconvoluting complex physiologic traits back to their unicellular origins.

  14. Detection of supernova neutrinos at spallation neutron sources

    Science.gov (United States)

    Huang, Ming-Yang; Guo, Xin-Heng; Young, Bing-Lin

    2016-07-01

    After considering supernova shock effects, Mikheyev-Smirnov-Wolfenstein effects, neutrino collective effects, and Earth matter effects, the detection of supernova neutrinos at the China Spallation Neutron Source is studied and the expected numbers of different flavor supernova neutrinos observed through various reaction channels are calculated with the neutrino energy spectra described by the Fermi-Dirac distribution and the “beta fit” distribution, respectively. Furthermore, the numerical calculation method of supernova neutrino detection on Earth is applied to some other spallation neutron sources, and the total expected numbers of supernova neutrinos observed through different reaction channels are given. Supported by National Natural Science Foundation of China (11205185, 11175020, 11275025, 11575023)
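
    As a back-of-the-envelope illustration of how such expected event numbers are obtained, the Python sketch below folds a Fermi-Dirac electron-antineutrino spectrum with a leading-order inverse beta decay cross section for a water target. Every number (total energy, mean energy, distance, target mass, cross-section approximation) is an illustrative assumption and far cruder than the detector-specific calculation of the paper.

        import numpy as np

        # Illustrative supernova and detector parameters (assumptions, not taken from the paper)
        E_tot_erg = 5e52            # energy carried away by electron antineutrinos [erg]
        E_mean    = 15.0            # mean antineutrino energy [MeV]
        distance  = 10 * 3.086e21   # 10 kpc in cm
        mass_kton = 1.0             # water target mass [kton]

        MeV_per_erg = 6.242e5
        T  = E_mean / 3.15                        # Fermi-Dirac temperature (zero chemical potential)
        E  = np.linspace(2.0, 80.0, 2000)         # neutrino energies above the IBD threshold [MeV]
        dE = E[1] - E[0]

        spectrum = E**2 / (1.0 + np.exp(E / T))   # Fermi-Dirac spectral shape
        spectrum /= (spectrum * dE).sum()         # normalize to unit area

        n_nu = E_tot_erg * MeV_per_erg / E_mean               # total antineutrinos emitted
        fluence = n_nu / (4 * np.pi * distance**2) * spectrum  # [1 / (cm^2 MeV)]

        # Leading-order inverse beta decay cross section: sigma ~ 9.4e-44 * E_e * p_e cm^2
        E_e   = E - 1.293                                      # positron total energy [MeV]
        p_e   = np.sqrt(np.maximum(E_e**2 - 0.511**2, 0.0))    # positron momentum [MeV]
        sigma = 9.4e-44 * E_e * p_e                            # [cm^2]

        n_protons = mass_kton * 1e9 / 18.0 * 2 * 6.022e23      # free (hydrogen) protons in the water
        events = n_protons * (fluence * sigma * dE).sum()
        print(f"~{events:.0f} inverse beta decay events expected (toy estimate)")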

  15. Configuration of electro-optic fire source detection system

    Science.gov (United States)

    Fabian, Ram Z.; Steiner, Zeev; Hofman, Nir

    2007-04-01

    The recent fighting activities in various parts of the world have highlighted the need for accurate fire source detection on one hand and fast "sensor to shooter cycle" capabilities on the other. Both needs can be met by the SPOTLITE system, which dramatically enhances the capability to rapidly engage a hostile fire source with a minimum of casualties to friendly forces and to innocent bystanders. The modular system design makes it possible to meet each customer's specific requirements and provides excellent potential for future growth and upgrades. The design and build of a fire source detection system are governed by sets of requirements issued by the operators. These can be translated into the following design criteria: I) Long range, fast and accurate fire source detection capability. II) Detection and classification capability for different threats. III) Threat investigation capability. IV) Fire source data distribution capability (location, direction, video image, voice). V) Man portability. In order to meet these design criteria, an optimized concept was presented and exercised for the SPOTLITE system. Three major modular components were defined: I) Electro-Optical Unit - including FLIR camera, CCD camera, laser range finder and marker. II) Electronic Unit - including the system computer and electronics. III) Controller Station Unit - including the HMI of the system. This article discusses the definition and optimization processes of the system's components, and also shows how the SPOTLITE designers successfully managed to introduce excellent solutions for other system parameters.

  16. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  17. KM3NeT/ARCA sensitivity and discovery potential for neutrino point-like sources

    Directory of Open Access Journals (Sweden)

    Trovato A.

    2016-01-01

    Full Text Available KM3NeT is a large research infrastructure with a network of deep-sea neutrino telescopes in the abyss of the Mediterranean Sea. Of these, the KM3NeT/ARCA detector, installed in the KM3NeT-It node of the network, is optimised for studying high-energy neutrinos of cosmic origin. Sensitivities to galactic sources such as the supernova remnant RXJ1713.7-3946 and the pulsar wind nebula Vela X are presented as well as sensitivities to a generic point source with an E−2 spectrum which represents an approximation for the spectrum of extragalactic candidate neutrino sources.

  18. Verification of Minimum Detectable Activity for Radiological Threat Source Search

    Science.gov (United States)

    Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn

    2015-10-01

    The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
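
    A schematic of the kind of region-of-interest comparison performed by NSCRAD: scale a background estimate to the live gross counts and flag spectra whose ROI counts deviate by more than counting statistics allow. The ROI boundaries, the scaling and the chi-square-style statistic in this Python sketch are simplifications for illustration, not the actual NSCRAD ratios or its MDA estimator.

        import numpy as np

        ROIS = [(0, 40), (30, 80), (70, 140), (120, 256)]   # overlapping channel ranges (assumed)

        def roi_counts(spectrum):
            return np.array([spectrum[a:b].sum() for a, b in ROIS], dtype=float)

        def anomaly_statistic(spectrum, background):
            """Compare ROI counts against a background estimate scaled to the same gross counts."""
            c = roi_counts(spectrum)
            b = roi_counts(background)
            b_scaled = b * spectrum.sum() / background.sum()
            return np.sum((c - b_scaled) ** 2 / np.maximum(b_scaled, 1.0))   # ~chi-square

        rng = np.random.default_rng(1)
        background = rng.poisson(50.0, size=256).astype(float)   # background template
        quiet = rng.poisson(background)                          # background-only measurement
        source = quiet.copy()
        source[95:105] += rng.poisson(60.0, size=10)             # weak photopeak injected

        threshold = 20.0   # would be set from the desired false-alarm rate in practice
        for name, spec in [("quiet", quiet), ("source", source)]:
            print(name, "anomalous:", anomaly_statistic(spec, background) > threshold)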

  19. Systematic Review: Impact of point sources on antibiotic-resistant bacteria in the natural environment.

    Science.gov (United States)

    Bueno, I; Williams-Nguyen, J; Hwang, H; Sargeant, J M; Nault, A J; Singer, R S

    2018-02-01

    Point sources such as wastewater treatment plants and agricultural facilities may have a role in the dissemination of antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). To analyse the evidence for increases in ARB in the natural environment associated with these point sources of ARB and ARG, we conducted a systematic review. We evaluated 5,247 records retrieved through database searches, including both studies that ascertained ARG and ARB outcomes. All studies were subjected to a screening process to assess relevance to the question and methodology to address our review question. A risk of bias assessment was conducted upon the final pool of studies included in the review. This article summarizes the evidence only for those studies with ARB outcomes (n = 47). Thirty-five studies were at high (n = 11) or at unclear (n = 24) risk of bias in the estimation of source effects due to lack of information and/or failure to control for confounders. Statistical analysis was used in ten studies, of which one assessed the effect of multiple sources using modelling approaches; none reported effect measures. Most studies reported higher ARB prevalence or concentration downstream/near the source. However, this evidence was primarily descriptive and it could not be concluded that there is a clear impact of point sources on increases in ARB in the environment. To quantify increases in ARB in the environment due to specific point sources, there is a need for studies that stress study design, control of biases and analytical tools to provide effect measure estimates. © 2017 Blackwell Verlag GmbH.

  20. Nature of the Diffuse Source and Its Central Point-like Source in SNR 0509–67.5

    Energy Technology Data Exchange (ETDEWEB)

    Litke, Katrina C.; Chu, You-Hua; Holmes, Abigail; Santucci, Robert; Blindauer, Terrence; Gruendl, Robert A.; Ricker, Paul M. [Astronomy Department, University of Illinois, 1002 W. Green Street, Urbana, IL 61801 (United States); Li, Chuan-Jui [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan, R.O.C. (China); Pan, Kuo-Chuan [Departement Physik, Universität Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Weisz, Daniel R., E-mail: kclitke@email.arizona.edu [Department of Astronomy, University of California, 501 Cambell Hall #3411, Berkeley, CA 94720-3411 (United States)

    2017-03-10

    We examine a diffuse emission region near the center of SNR 0509−67.5 to determine its nature. Within this diffuse region we observe a point-like source that is bright in the near-IR, but is not visible in the B and V bands. We consider an emission line observed at 6766 Å and the possibilities that it is Lyα, Hα, and [O ii] λ3727. We examine the spectral energy distribution (SED) of the source, comprised of Hubble Space Telescope B, V, I, J, and H bands in addition to Spitzer/IRAC 3.6, 4.5, 5.8, and 8 μm bands. The peak of the SED is consistent with a background galaxy at z ≈ 0.8 ± 0.2 and a possible Balmer jump places the galaxy at z ≈ 0.9 ± 0.3. These SED considerations support the emission line’s identification as [O ii] λ3727. We conclude that the diffuse source in SNR 0509−67.5 is a background galaxy at z ≈ 0.82. Furthermore, we identify the point-like source superposed near the center of the galaxy as its central bulge. Finally, we find no evidence for a surviving companion star, indicating a double-degenerate origin for SNR 0509−67.5.

  1. Nature of the Diffuse Source and Its Central Point-like Source in SNR 0509–67.5

    International Nuclear Information System (INIS)

    Litke, Katrina C.; Chu, You-Hua; Holmes, Abigail; Santucci, Robert; Blindauer, Terrence; Gruendl, Robert A.; Ricker, Paul M.; Li, Chuan-Jui; Pan, Kuo-Chuan; Weisz, Daniel R.

    2017-01-01

    We examine a diffuse emission region near the center of SNR 0509−67.5 to determine its nature. Within this diffuse region we observe a point-like source that is bright in the near-IR, but is not visible in the B and V bands. We consider an emission line observed at 6766 Å and the possibilities that it is Lyα, Hα, and [O ii] λ3727. We examine the spectral energy distribution (SED) of the source, comprised of Hubble Space Telescope B, V, I, J, and H bands in addition to Spitzer/IRAC 3.6, 4.5, 5.8, and 8 μm bands. The peak of the SED is consistent with a background galaxy at z ≈ 0.8 ± 0.2 and a possible Balmer jump places the galaxy at z ≈ 0.9 ± 0.3. These SED considerations support the emission line’s identification as [O ii] λ3727. We conclude that the diffuse source in SNR 0509−67.5 is a background galaxy at z ≈ 0.82. Furthermore, we identify the point-like source superposed near the center of the galaxy as its central bulge. Finally, we find no evidence for a surviving companion star, indicating a double-degenerate origin for SNR 0509−67.5.

  2. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons

    International Nuclear Information System (INIS)

    Kwok, C.S.; Bialobzyski, P.J.; Yu, S.K.; Prestwich, W.V.

    1990-01-01

    Perturbation in dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and by Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of ²⁰⁴Tl and ¹⁴⁷Pm in RM. When the point sources were at 12 mg/cm² from a planar interface of CB and RM equivalent plastics, dose enhancement ratios in RM averaged over the region 0–12 mg/cm² from the interface were measured to be 1.08±0.03 (SE) and 1.03±0.03 (SE) for ²⁰⁴Tl and ¹⁴⁷Pm, respectively. The Monte Carlo codes predicted 1.05±0.02 and 1.01±0.02 for the two nuclides, respectively. However, EGS gave consistently 3% higher dose in the dose scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV energy were considered in the homogeneous RM situation or in the CB and RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of backscattering of low-energy electrons in soft tissue at a CB–soft-tissue interface by as much as a factor of 2.

  3. Identification and quantification of point sources of surface water contamination in fruit culture in the Netherlands

    NARCIS (Netherlands)

    Wenneker, M.; Beltman, W.H.J.; Werd, de H.A.E.; Zande, van de J.C.

    2008-01-01

    Measurements of pesticide concentrations in surface water by the water boards show that they have decreased less than was expected from model calculations. Possibly, the implementation of spray drift reducing techniques is overestimated in the model calculation. The impact of point sources is

  4. General Approach to the Evolution of Singlet Nanoparticles from a Rapidly Quenched Point Source

    NARCIS (Netherlands)

    Feng, J.; Huang, Luyi; Ludvigsson, Linus; Messing, Maria; Maiser, A.; Biskos, G.; Schmidt-Ott, A.

    2016-01-01

    Among the numerous point vapor sources, microsecond-pulsed spark ablation at atmospheric pressure is a versatile and environmentally friendly method for producing ultrapure inorganic nanoparticles ranging from singlets having sizes smaller than 1 nm to larger agglomerated structures. Due to its fast

  5. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  6. ''Anomalous'' air showers from point sources: Mass limits and light curves

    International Nuclear Information System (INIS)

    Domokos, G.; Elliott, B.; Kovesi-Domokos, S.

    1993-01-01

    We describe a method to obtain upper limits on the mass of the primaries of air showers associated with point sources. One also obtains the UHE pulse shape of a pulsar if its period is observed in the signal. As an example, we analyze the data obtained during a recent burst of Hercules-X1

  7. Estimation of Methane Emissions from Municipal Solid Waste Landfills in China Based on Point Emission Sources

    Directory of Open Access Journals (Sweden)

    Cai Bo-Feng

    2014-01-01

    Citation: Cai, B.-F., Liu, J.-G., Gao, Q.-X., et al., 2014. Estimation of methane emissions from municipal solid waste landfills in China based on point emission sources. Adv. Clim. Change Res. 5(2), doi: 10.3724/SP.J.1248.2014.081.

  8. Comparative Evaluation of Pulsewidth Modulation Strategies for Z-Source Neutral-Point-Clamped Inverter

    DEFF Research Database (Denmark)

    Loh, P.C.; Blaabjerg, Frede; Wong, C.P.

    2007-01-01

    The Z-source neutral-point-clamped (NPC) inverter has recently been proposed as an alternative three-level buck-boost power conversion solution with an improved output waveform quality. In principle, the designed Z-source inverter functions by selectively "shooting through" its power sources, coupled...... to the inverter using two unique Z-source impedance networks, to boost the inverter three-level output waveform. Proper modulation of the new inverter would therefore require careful integration of the selective shoot-through process into the basic switching concepts to achieve maximal voltage boost, minimal...... This paper presents pulsewidth modulation (PWM) strategies for controlling the Z-source NPC inverter. While developing the PWM techniques, attention has been devoted to carefully deriving them from a common generic basis for improved portability, easier implementation and, most importantly, assisting readers in understanding all concepts......

  9. The Potential for Electrofuels Production in Sweden Utilizing Fossil and Biogenic CO2 Point Sources

    International Nuclear Information System (INIS)

    Hansson, Julia; Hackl, Roman; Taljegard, Maria; Brynolf, Selma; Grahn, Maria

    2017-01-01

    This paper maps, categorizes, and quantifies all major point sources of carbon dioxide (CO₂) emissions from industrial and combustion processes in Sweden. The paper also estimates the Swedish technical potential for electrofuels (power-to-gas/fuels) based on carbon capture and utilization. With our bottom-up approach using European databases, we find that Sweden emits approximately 50 million metric tons of CO₂ per year from different types of point sources, with 65% (or about 32 million tons) from biogenic sources. The major sources are the pulp and paper industry (46%), heat and power production (23%), and waste treatment and incineration (8%). Most of the CO₂ is emitted at low concentrations (<15%) from sources in the southern part of Sweden where power demand generally exceeds in-region supply. The potentially recoverable emissions from all the included point sources amount to 45 million tons. If all the recoverable CO₂ were used to produce electrofuels, the yield would correspond to 2–3 times the current Swedish demand for transportation fuels. The electricity required would correspond to about 3 times the current Swedish electricity supply. The current relatively few emission sources with high concentrations of CO₂ (>90%, biofuel operations) would yield electrofuels corresponding to approximately 2% of the current demand for transportation fuels (corresponding to 1.5–2 TWh/year). In a 2030 scenario with large-scale biofuels operations based on lignocellulosic feedstocks, the potential for electrofuels production from high-concentration sources increases to 8–11 TWh/year. Finally, renewable electricity and production costs, rather than CO₂ supply, limit the potential for production of electrofuels in Sweden.

  10. Evaluation of point-of-care tests for detecting microalbuminuria in ...

    African Journals Online (AJOL)

    Evaluation of point-of-care tests for detecting microalbuminuria in diabetic patients. ... creatinine (modified Jaffe) and albumin-to-creatinine ratio (ACR). Results: Linear regression analysis demonstrated a good correlation for the HemoCue® ...

  11. Calibration experiments of neutron source identification and detection in soil

    International Nuclear Information System (INIS)

    Gorin, N. V.; Lipilina, E. N.; Rukavishnikov, G. V.; Shmakov, D. V.; Ulyanov, A. I.

    2007-01-01

    In the course of developing the detection of fissile materials in soil, a series of calibration experiments was carried out under laboratory conditions on an experimental installation representing a mock-up of an effectively infinite soil volume containing various heterogeneous bodies, fissile material, and measuring boreholes. The design of the detecting device and the neutron detection methods are described, and the conditions of the neutron background measurements are given. Soil density, humidity, and chemical composition were measured. The sensitivity of the methods for detecting and identifying fissile materials in soil was estimated in the calibration experiments, and the minimum detectable activity and the distance at which it can be detected were determined. Characteristics of the neutron radiation in a borehole mock-up were measured; the dependences of the method sensitivities on soil water content, source-detector distance, and the presence of heterogeneous bodies were examined. The possibility of determining the direction to a fissile-material neutron source from a borehole using a collimator is shown. Identification of the fissile material was carried out by measuring the gamma spectrum. Mathematical modeling was carried out using the PRIZMA code (developed at RFNC-VNIITF) and the MCNP code (developed at LANL). Good agreement between calculated and experimental values was shown, and the methods were shown to be applicable under field conditions.

  12. Tackling non-point source water pollution in British Columbia : an action plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    British Columbia's approach to water quality management is discussed. The BC efforts include regulating 'end of pipe' point discharges from industrial and municipal outfalls. The major remaining cause of water pollution is from non-point sources (NPS). NPS water pollution is caused by the release of pollutants from different and diffuse sources, mostly unregulated and associated with urbanization, agriculture and other forms of land development. The importance of dealing with such problems on an immediate basis to avoid a decline in water quality in the province is emphasized. Major sources of water pollution in British Columbia include: land development, agriculture, storm water runoff, onsite sewage systems, forestry, atmospheric deposition, and marine activities. 3 tabs.

  13. Detection of aeroacoustic sound sources on aircraft and wind turbines

    NARCIS (Netherlands)

    Oerlemans, Stefan

    2009-01-01

    This thesis deals with the detection of aeroacoustic sound sources on aircraft and wind turbines using phased microphone arrays. First, the reliability of the array technique is assessed using airframe noise measurements in open and closed wind tunnels. It is demonstrated that quantitative acoustic

  14. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is becoming increasingly important in program design courses in college education. However, the trick of plagiarizing with slight modifications occurs in some students' homework, and it is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…

  15. Geometric effects in alpha particle detection from distributed air sources

    International Nuclear Information System (INIS)

    Gil, L.R.; Leitao, R.M.S.; Marques, A.; Rivera, A.

    1994-08-01

    Geometric effects associated with the detection of alpha particles from distributed air sources, as occurs in radon and thoron measurements, are revisited. The volume outside which no alpha particle may reach the entrance window of the detector is defined and determined analytically for rectangular and cylindrical symmetry geometries. (author). 3 figs.

  16. Complex Event Detection via Multi Source Video Attributes (Open Access)

    Science.gov (United States)

    2013-10-03

    Complex Event Detection via Multi-Source Video Attributes. Zhigang Ma, Yi Yang, Zhongwen Xu, Shuicheng Yan, Nicu Sebe, Alexander G. Hauptmann. ...under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office, and the Intelligence Advanced

  17. Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection

    OpenAIRE

    Lau, Han Yih; Botella, Jose R.

    2017-01-01

    Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, quickness, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of-care di...

  18. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  19. Instream Biological Assessment of NPDES Point Source Discharges at the Savannah River Site, 2000

    International Nuclear Information System (INIS)

    Specht, W.L.

    2001-01-01

    The Savannah River Site (SRS) currently has 31 NPDES outfalls that have been permitted by the South Carolina Department of Health and Environmental Control (SCDHEC) to discharge to SRS streams and the Savannah River. In order to determine the cumulative impacts of these discharges to the receiving streams, a study plan was developed to perform in-stream assessments of the fish assemblages, macroinvertebrate assemblages, and habitats of the receiving streams. These studies were designed to detect biological impacts due to point source discharges. Sampling was initially conducted between November 1997 and July 1998 and was repeated in the summer and fall of 2000. A total of 18 locations were sampled (Table 1, Figure 1). Sampling locations for fish and macroinvertebrates were generally the same. However, different locations were sampled for fish (Road A-2) and macroinvertebrates (Road C) in the lower portion of Upper Three Runs, to avoid interference with ongoing fisheries studies at Road C. Also, fish were sampled in Fourmile Branch at Road 4 rather than at Road F because the stream at Road F was too narrow and shallow to support many fish. Sampling locations and parameters are detailed in Sections 2 and 3 of this report. In general, sampling locations were selected that would permit comparisons upstream and downstream of NPDES outfalls. In instances where this approach was not feasible because effluents discharge into the headwaters of a stream, appropriate unimpacted reference sites were used for comparison purposes. This report summarizes the results of the sampling that was conducted in 2000 and also compares these data to the data that were collected in 1997 and 1998.

  20. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    Science.gov (United States)

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment in the Changle River watershed, a typical agricultural watershed in Southeast China. Water quality and hydrological parameters were monitored, and information on the watershed's natural conditions (including soil, climate, and land use) and pollution sources was collected for the SWAT database. The ArcSWAT model was established for the Changle River after calibration and validation of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and the spatial-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, nitrogen fertilizer, atmospheric nitrogen deposition, and the soil nitrogen pool were the prominent pollution sources, contributing 35%, 32%, and 25% of the river TN loading, respectively. There were spatial-temporal variations in the critical sources of NPS TN export to the river. Natural sources, such as the soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources of river TN pollution during the rainy seasons, whereas chemical nitrogen fertilizer application should be targeted during the crop growing season. Chemical nitrogen fertilizer application, the soil nitrogen pool, and atmospheric nitrogen deposition were the main sources of TN exported from garden plots, forest, and residential land, respectively, and all three were main sources of TN exported from upland and paddy fields. These results reveal that NPS pollution control measures should focus on the spatio-temporal distribution of NPS pollution sources.

  1. Search for atmospheric muon-neutrinos and extraterrestric neutrino point sources in the 1997 AMANDA-B10 data

    International Nuclear Information System (INIS)

    Biron von Curland, A.

    2002-07-01

    The young field of high energy neutrino astronomy can be motivated by the search for the origin of the charged cosmic rays. Large astrophysical objects like AGNs or supernova remnants are candidates to accelerate hadrons which then can interact to eventually produce high energy neutrinos. Neutrino-induced muons can be detected via their emission of Cherenkov light in large neutrino telescopes like AMANDA. More than 10⁹ atmospheric muon events and approximately 5000 atmospheric neutrino events were registered by AMANDA-B10 in 1997. Out of these, 223 atmospheric neutrino candidate events have been extracted. This data set contains approximately 15 background events and allows one to confirm the expected sensitivity of the detector towards neutrino events. A second set containing 369 events (approximately 270 atmospheric neutrino events and 100 atmospheric muon events) was used to search for extraterrestrial neutrino point sources. Neither a binned search, nor a cluster search, nor a search for preselected sources gave indications for the existence of a strong neutrino point source. Based on this result, flux limits were derived. Assuming Eν⁻² spectra, typical flux limits for selected sources of the order of Φ_μ^limit ≈ 10⁻¹⁴ cm⁻² s⁻¹ for muons and Φ_ν^limit ≈ 10⁻⁷ cm⁻² s⁻¹ for neutrinos have been obtained. (orig.)

  2. Passive Detection of Narrowband Sources Using a Sensor Array

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Candy, J V; Guidry, B L

    2007-10-24

    In this report we derive a model for a highly scattering medium, implemented as a set of MATLAB functions. This model is used to analyze an approach for using time-reversal to enhance the detection of a single-frequency source in a highly scattering medium. The basic approach is to apply the singular value decomposition to the multistatic response matrix for a time-reversal array system. We then use the array in a purely passive mode, measuring the response to the presence of a source. The measured response is projected onto the singular vectors, creating a time-reversal pseudo-spectrum. We can then apply standard detection techniques to the pseudo-spectrum to determine the presence of a source. If the source is close to a particular scatterer in the medium, then we would expect an enhancement of the inner product between the array response to the source and the singular vector associated with that scatterer. In this note we begin by deriving the Foldy-Lax model of a highly scattering medium, then calculate both the field emitted by the source and the multistatic response matrix of a time-reversal array system in the medium, and finally describe the initial analysis approach.
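
    The processing chain described above (SVD of the multistatic response matrix, followed by projection of a passively measured response onto the singular vectors) can be sketched in Python as follows. For brevity the scatterers are modeled with single-scattering point responses rather than the full Foldy-Lax model of the report, and the geometry and wavelength are illustrative.

        import numpy as np

        k = 2 * np.pi / 0.1                      # wavenumber for a 0.1 m wavelength (illustrative)
        array_x = np.linspace(-0.5, 0.5, 16)     # 16-element line array along x, at y = 0
        scatterers = np.array([[0.1, 2.0], [-0.3, 3.0]])   # two point scatterers (x, y) [m]
        source_pos = np.array([0.11, 2.02])      # unknown source, close to scatterer 0

        def green(p, q):
            """2-D free-space Green's function (up to constants)."""
            r = np.linalg.norm(p - q)
            return np.exp(1j * k * r) / np.sqrt(r)

        elements = np.stack([array_x, np.zeros_like(array_x)], axis=1)

        # Multistatic response matrix in the single-scattering (Born) approximation
        K = np.zeros((len(elements), len(elements)), complex)
        for s in scatterers:
            g = np.array([green(e, s) for e in elements])
            K += np.outer(g, g)

        U, svals, _ = np.linalg.svd(K)

        # Passive measurement: field radiated by the source, received on the array
        g_src = np.array([green(e, source_pos) for e in elements])
        pseudo_spectrum = np.abs(U.conj().T @ g_src) / np.linalg.norm(g_src)

        print("singular values :", np.round(svals[:4], 3))
        print("pseudo-spectrum :", np.round(pseudo_spectrum[:4], 3))
        # A large component indicates the source lies near the scatterer
        # associated with the corresponding singular vector.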

  3. A Targeted Search for Point Sources of EeV Photons with the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Aab, A. [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud Universiteit, Nijmegen (Netherlands); Abreu, P. [Laboratório de Instrumentação e Física Experimental de Partículas—LIP and Instituto Superior Técnico—IST, Universidade de Lisboa—UL, Lisbon (Portugal); Aglietta, M. [INFN, Sezione di Torino, Torino (Italy); Samarai, I. Al [Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Universités Paris 6 et Paris 7, CNRS-IN2P3, Paris (France); Albuquerque, I. F. M. [Universidade de São Paulo, Inst. de Física, São Paulo (Brazil); Allekotte, I. [Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET), San Carlos de Bariloche (Argentina); Almela, A. [Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica, Buenos Aires (Argentina); Castillo, J. Alvarez [Universidad Nacional Autónoma de México, México, D. F., México (Mexico); Alvarez-Muñiz, J. [Universidad de Santiago de Compostela, La Coruña (Spain); Anastasi, G. A. [Gran Sasso Science Institute (INFN), L’Aquila (Italy); and others

    2017-03-10

    Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The presented search does not find any evidence for photon emission at candidate sources, and combined p-values for every class are reported. Particle and energy flux upper limits are given for selected candidate sources. These limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region.
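
    Combining per-source p-values within a candidate class is commonly done with Fisher's method; the Python snippet below shows that generic calculation with made-up p-values (the Auger analysis defines its own class-level statistic, so this is only an illustration).

        from scipy import stats

        # Hypothetical p-values for individual candidate sources in one class
        p_values = [0.20, 0.04, 0.35, 0.11, 0.55]

        # Fisher's method: -2 * sum(ln p_i) follows a chi-square with 2N degrees of freedom
        statistic, combined_p = stats.combine_pvalues(p_values, method="fisher")
        print(f"combined p-value for the class: {combined_p:.3f}")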

  4. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    Science.gov (United States)

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Comparison of methods for accurate end-point detection of potentiometric titrations

    International Nuclear Information System (INIS)

    Villela, R L A; Borges, P P; Vyskočil, L

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.

  6. Comparison of methods for accurate end-point detection of potentiometric titrations

    Science.gov (United States)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.
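
    Both end-point detectors being compared can be sketched on a synthetic potentiometric titration curve: a Levenberg-Marquardt sigmoid fit (scipy's curve_fit uses LM for unconstrained problems) versus the classical second-derivative zero crossing. The sigmoid model, noise level and parameter values in this Python sketch are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(v, e0, e1, v_eq, s):
            # Idealized titration curve: measured potential vs. titrant volume.
            return e0 + e1 / (1.0 + np.exp(-(v - v_eq) / s))

        # Synthetic titration data with a true end point at 12.50 mL
        rng = np.random.default_rng(0)
        v = np.linspace(10.0, 15.0, 101)
        emf = sigmoid(v, 250.0, 180.0, 12.50, 0.12) + rng.normal(0.0, 0.1, v.size)

        # 1) Levenberg-Marquardt fit of the whole curve; the end point is the fitted v_eq
        popt, _ = curve_fit(sigmoid, v, emf, p0=[240.0, 170.0, 12.0, 0.2])
        end_point_lm = popt[2]

        # 2) Traditional second-derivative method: the end point is where d2E/dV2
        #    crosses zero, i.e. at the steepest part of the curve
        d1 = np.gradient(emf, v)
        d2 = np.gradient(d1, v)
        i = int(np.argmax(d1))                             # grid point of steepest slope
        a, b = (i - 1, i) if d2[i] < 0 else (i, i + 1)     # bracket the sign change of d2
        end_point_d2 = v[a] + (v[b] - v[a]) * d2[a] / (d2[a] - d2[b])

        print(f"LM sigmoid fit end point : {end_point_lm:.3f} mL")
        print(f"second-derivative method : {end_point_d2:.3f} mL")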

  7. Using Soluble Reactive Phosphorus and Ammonia to Identify Point Source Discharge from Large Livestock Facilities

    Science.gov (United States)

    Borrello, M. C.; Scribner, M.; Chessin, K.

    2013-12-01

    A growing body of research draws attention to the negative environmental impacts on surface water from large livestock facilities. These impacts are mostly in the form of excessive nutrient loading resulting in significantly decreased oxygen levels. Over-application of animal waste on fields as well as direct discharge into surface water from facilities themselves has been identified as the main contributor to the development of hypoxic zones in Lake Erie, Chesapeake Bay and the Gulf of Mexico. Some regulators claim enforcement of water quality laws is problematic because of the nature and pervasiveness of non-point source impacts. Any direct discharge by a facility is a violation of permits governed by the Clean Water Act, unless the facility has special dispensation for discharge. Previous research by the principal author and others has shown runoff and underdrain transport are the main mechanisms by which nutrients enter surface water. This study utilized previous work to determine if the effects of non-point source discharge can be distinguished from direct (point-source) discharge using simple nutrient analysis and dissolved oxygen (DO) parameters. Nutrient and DO parameters were measured from three sites: 1. A stream adjacent to a field receiving manure, upstream of a large livestock facility with a history of direct discharge, 2. The same stream downstream of the facility and 3. A stream in an area relatively unimpacted by large-scale agriculture (control site). Results show that calculating a simple Pearson correlation coefficient (r) of soluble reactive phosphorus (SRP) and ammonia over time, as well as of temperature and DO, distinguishes non-point source from point source discharge into surface water. The r value for SRP and ammonia for the upstream site was 0.01 while the r value for the downstream site was 0.92. The control site had an r value of 0.20. Likewise, r values were calculated on temperature and DO for each site. High negative correlations
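
    The screening statistic used above is simply the Pearson correlation of paired time series; a minimal Python sketch with made-up monitoring data:

        from scipy.stats import pearsonr

        # Hypothetical paired samples collected over several site visits (mg/L)
        srp     = [0.02, 0.05, 0.31, 0.08, 0.44, 0.12]   # soluble reactive phosphorus
        ammonia = [0.10, 0.12, 1.20, 0.25, 1.60, 0.40]

        r, p = pearsonr(srp, ammonia)
        print(f"r = {r:.2f} (p = {p:.3f})")
        # By the paper's reasoning, r near 1 downstream of a facility suggests a common
        # point-source discharge, while r near 0 is consistent with diffuse inputs.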

  8. Detecting people of interest from internet data sources

    Science.gov (United States)

    Cardillo, Raymond A.; Salerno, John J.

    2006-04-01

    In previous papers, we have documented success in determining the key people of interest from a large corpus of real-world evidence. Our recent efforts focus on exploring additional domains and data sources. Internet data sources such as email, web pages, and news feeds make it easier to gather a large corpus of documents for various domains, but detecting people of interest in these sources introduces new challenges. Analyzing these massive sources magnifies entity resolution problems, and demands a storage management strategy that supports efficient algorithmic analysis and visualization techniques. This paper discusses the techniques we used in order to analyze the ENRON email repository, which are also applicable to analyzing web pages returned from our "Buddy" meta-search engine.

  9. Distribution majorization of corner points by reinforcement learning for moving object detection

    Science.gov (United States)

    Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang

    2018-04-01

    Corner points play an important role in moving object detection, especially in the case of a free-moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate corner points; however, the information provided by the preceding and following frames can also be used. We utilize this information to focus on more valuable areas and ignore less valuable ones. The proposed algorithm is based on reinforcement learning and regards the detection of corner points as a Markov process. In the Markov model, the video to be analysed is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the performance of detection is regarded as the state. Corner points are assigned to blocks separated from the original whole image. Experimentally, we select a conventional method that uses matching and the Random Sample Consensus (RANSAC) algorithm to obtain objects as the main framework and utilize our algorithm to improve the result. The comparison between the conventional method and the same method with our algorithm shows that our algorithm reduces false detections by 70%.

  10. Polarized point sources in the LOFAR Two-meter Sky Survey: A preliminary catalog

    Science.gov (United States)

    Van Eck, C. L.; Haverkorn, M.; Alves, M. I. R.; Beck, R.; Best, P.; Carretti, E.; Chyży, K. T.; Farnes, J. S.; Ferrière, K.; Hardcastle, M. J.; Heald, G.; Horellou, C.; Iacobelli, M.; Jelić, V.; Mulcahy, D. D.; O'Sullivan, S. P.; Polderman, I. M.; Reich, W.; Riseley, C. J.; Röttgering, H.; Schnitzeler, D. H. F. M.; Shimwell, T. W.; Vacca, V.; Vink, J.; White, G. J.

    2018-06-01

    We study the polarization properties of radio sources at very low frequencies using preliminary LOTSS data covering 570 square degrees (declination 45°-57°). We have produced a catalog of 92 polarized radio sources at 150 MHz at 4.3 arcmin resolution and 1 mJy rms sensitivity, which is the largest catalog of polarized sources at such low frequencies. We estimate a lower limit to the polarized source surface density at 150 MHz, with our resolution and sensitivity, of 1 source per 6.2 square degrees. We find that our Faraday depth measurements are in agreement with previous measurements and have significantly smaller errors. Most of our sources show significant depolarization compared to 1.4 GHz, but there is a small population of sources with low depolarization, indicating that their polarized emission is highly localized in Faraday depth. We predict that an extension of this work to the full LOTSS data would detect at least 3400 polarized sources using the same methods, and probably considerably more with improved data processing.

  11. An overview of gravitational waves theory, sources and detection

    CERN Document Server

    Auger, Gerard

    2017-01-01

    This book describes detection techniques used to search for and analyze gravitational waves (GW). It covers the whole domain of GW science, starting from the theory and ending with the experimental techniques (both present and future) used to detect them. The theoretical sections of the book address the theory of general relativity and of GW, followed by the theory of GW detection. The various sources of GW are described, as well as the methods used to analyse them and to extract their physical parameters. It includes an analysis of the consequences of GW observations in terms of astrophysics, as well as a description of the different detectors that exist and that are planned for the future. With the recent announcement of GW detection and the first results from LISA Pathfinder, this book will allow non-specialists to understand the present status of the field and the future of gravitational wave science.

  12. Detection and localization of change points in temporal networks with the aid of stochastic block models

    Science.gov (United States)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of the detected change points, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time window widths.

  13. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals

    Directory of Open Access Journals (Sweden)

    Nathan Gold

    2018-01-01

    Full Text Available Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time series. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
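
    For comparison with such approaches, a generic single change-point detector for a noisy one-dimensional series can be written in a few lines. The sketch below uses a least-squares mean-shift split; it is a baseline illustration only, not the doubly stochastic method proposed in this record.

```python
# Baseline sketch: locate a single mean-shift change point in a noisy series by
# minimizing the total within-segment squared error. NOT the authors' method.
import numpy as np

def mean_shift_change_point(x):
    """Return the split index that best separates the series into two constant-mean segments."""
    x = np.asarray(x, dtype=float)
    best_idx, best_cost = None, np.inf
    for k in range(2, len(x) - 2):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = k, cost
    return best_idx

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.5, 1.0, 80)])
print("estimated change point:", mean_shift_change_point(series))   # expect ~120
```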

  14. An international point source outbreak of typhoid fever: a European collaborative investigation*

    Science.gov (United States)

    Stanwell-Smith, R. E.; Ward, L. R.

    1986-01-01

    A point source outbreak of Salmonella typhi, degraded Vi-strain 22, affecting 32 British visitors to Kos, Greece, in 1983 was attributed by a case-control study to the consumption of a salad at one hotel. This represents the first major outbreak of typhoid fever in which a salad has been identified as the vehicle. The source of the infection was probably a carrier in the hotel staff. The investigation demonstrates the importance of national surveillance, international cooperation, and epidemiological methods in the investigation and control of major outbreaks of infection. PMID:3488842

  15. LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    Science.gov (United States)

    Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin

    2017-12-01

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable

  16. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    Science.gov (United States)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to obtain information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  17. A proton point source produced by laser interaction with cone-top-end target

    International Nuclear Information System (INIS)

    Yu, Jinqing; Jin, Xiaolin; Zhou, Weimin; Zhao, Zongqing; Yan, Yonghong; Li, Bin; Hong, Wei; Gu, Yuqiu

    2012-01-01

    In this paper, we propose a proton point source produced by the interaction of a laser with a cone-top-end target and investigate it with two-dimensional particle-in-cell (2D-PIC) simulations; proton point sources are of interest because they provide higher spatial resolution in proton radiography. Our results show that the relativistic electrons are guided to the rear of the cone-top-end target by the electrostatic charge-separation field and the self-generated magnetic field along the profile of the target. As a result, the peak magnitude of the sheath field at the rear surface of the cone-top-end target is higher than for a common cone target. We test this scheme by 2D-PIC simulation and find that the resulting proton source has a diameter of 0.79λ0, an average energy of 9.1 MeV, and an energy spread of less than 35%.

  18. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    International Nuclear Information System (INIS)

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we combine Gaussian beams with an explicit high-frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient.

  19. Search for neutrino point sources with an all-sky autocorrelation analysis in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Turcati, Andrea; Bernhard, Anna; Coenders, Stefan [TU, Munich (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The IceCube Neutrino Observatory is a cubic-kilometre-scale neutrino telescope located in the Antarctic ice. Its full-sky field of view gives unique opportunities to study neutrino emission from the Galactic and extragalactic sky. Recently, IceCube found the first signal of astrophysical neutrinos with energies up to the PeV scale, but the origin of these particles remains unresolved. Given the observed flux, the absence of observations of bright point sources can be explained by the presence of numerous weak sources. This scenario can be tested using autocorrelation methods. We present here the sensitivities and discovery potentials of a two-point angular correlation analysis performed on seven years of IceCube data, taken between 2008 and 2015. The test is applied to the northern and southern skies separately, using the neutrino energy information to improve the effectiveness of the method.
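
    The core of a two-point angular correlation test is a comparison of the pairwise angular separations in the data with those of scrambled pseudo-experiments. The sketch below illustrates the idea on hypothetical event directions; it is a generic toy, not the IceCube analysis chain, and the 3-degree scale and right-ascension scrambling are assumptions.

```python
# Toy two-point angular autocorrelation test on hypothetical event directions (radians).
import numpy as np

def pairwise_separations(ra, dec):
    """All pairwise great-circle separations (radians) between event unit vectors."""
    vec = np.column_stack([np.cos(dec) * np.cos(ra),
                           np.cos(dec) * np.sin(ra),
                           np.sin(dec)])
    cosang = np.clip(vec @ vec.T, -1.0, 1.0)
    iu = np.triu_indices(len(ra), k=1)
    return np.arccos(cosang[iu])

rng = np.random.default_rng(2)
n = 500
ra, dec = rng.uniform(0, 2 * np.pi, n), np.arcsin(rng.uniform(-1, 1, n))

# Test statistic: number of pairs closer than a chosen angular scale, compared with
# RA-scrambled pseudo-experiments that preserve the declination distribution.
scale = np.radians(3.0)
observed = np.sum(pairwise_separations(ra, dec) < scale)
scrambled = [np.sum(pairwise_separations(rng.uniform(0, 2 * np.pi, n), dec) < scale)
             for _ in range(100)]
p_value = np.mean(np.asarray(scrambled) >= observed)
print(f"pairs within 3 deg: {observed}, p ~ {p_value:.2f}")
```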

  20. Individual tree detection based on densities of high points of high resolution airborne lidar

    NARCIS (Netherlands)

    Abd Rahman, M.Z.; Gorte, B.G.H.

    2008-01-01

    The retrieval of individual tree location from Airborne LiDAR has focused largely on utilizing canopy height. However, high resolution Airborne LiDAR offers another source of information for tree detection. This paper presents a new method for tree detection based on high points’ densities from a
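
    Although the abstract above is truncated, the underlying idea of detecting individual trees from densities of high LiDAR returns can be sketched generically: rasterize the count of high points per cell and treat sufficiently dense local maxima as candidate trees. In the sketch below the cell size, height cut and density threshold are assumptions, not parameters from the paper.

```python
# Generic sketch: candidate tree locations as dense local maxima of a high-point raster.
import numpy as np
from scipy import ndimage

def detect_trees(x, y, z, ground_z=0.0, min_height=5.0, cell=1.0, min_density=3):
    """Return [row, col] grid indices of candidate individual trees."""
    high = (z - ground_z) > min_height                    # keep canopy-level returns only
    cols = ((x[high] - x.min()) / cell).astype(int)
    rows = ((y[high] - y.min()) / cell).astype(int)
    density = np.zeros((rows.max() + 1, cols.max() + 1))
    np.add.at(density, (rows, cols), 1)                   # high-point count per raster cell
    smoothed = ndimage.gaussian_filter(density, sigma=1.0)
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=3)
    return np.argwhere(local_max & (density >= min_density))

# Usage with synthetic points:
rng = np.random.default_rng(3)
x, y, z = rng.uniform(0, 50, 5000), rng.uniform(0, 50, 5000), rng.uniform(0, 20, 5000)
print(len(detect_trees(x, y, z)), "candidate trees")
```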

  1. Prevention and Control of Agricultural Non-Point Source Pollutions in UK and Suggestions to China

    OpenAIRE

    Liu, Kun; Ren, Tianzhi; Wu, Wenliang; Meng, Fanquiao; Bellarby, Jessica; Smith, Laurence

    2016-01-01

    Currently, the world is facing challenges of maintaining food production growth while improving agricultural ecological environmental quality. The prevention and control of agricultural non-point source pollution, a key component of these challenges, is a systematic program which integrates many factors such as technology and its extension, relevant regulation and policies. In the project of UK-China Sustainable Agriculture Innovation Network, we undertook a comprehensive analysis of the prev...

  2. High angle grain boundaries as sources or sinks for point defects

    Energy Technology Data Exchange (ETDEWEB)

    Balluffi, R.W.

    1979-09-01

    A secondary grain boundary dislocation climb model for high angle grain boundaries as sources/sinks for point defects is described in the light of recent advances in our knowledge of grain boundary structure. Experimental results are reviewed and are then compared with the expected behavior of the proposed model. Reasonably good consistency is found at the level of our present understanding of the subject. However, several gaps in our present knowledge still exist, and these are identified and discussed briefly.

  3. Gamma Rays from the Inner Milky Way: Dark Matter or Point Sources?

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Studies of data from the Fermi Gamma-Ray Space Telescope have revealed bright gamma-ray emission from the central regions of our galaxy, with a spatial and spectral profile consistent with annihilating dark matter. I will present a new model-independent analysis that suggests that rather than originating from dark matter, the GeV excess may arise from a surprising new population of as-yet-unresolved gamma-ray point sources in the heart of the Milky Way.

  4. CO2 point sources and subsurface storage capacities for CO2 in aquifers in Norway

    International Nuclear Information System (INIS)

    Boee, Reidulv; Magnus, Christian; Osmundsen, Per Terje; Rindstad, Bjoern Ivar

    2002-01-01

    The GESTCO project comprises a study of the distribution and coincidence of thermal CO2 emission sources and the location/quality of geological storage capacity in Europe. Four of the most promising types of geological storage are being studied: 1. Onshore/offshore saline aquifers with or without lateral seal. 2. Low-enthalpy geothermal reservoirs. 3. Deep methane-bearing coal beds and abandoned coal and salt mines. 4. Exhausted or near-exhausted hydrocarbon structures. In this report we present an inventory of CO2 point sources in Norway (1999) and the results of the work within Study Area C: deep saline aquifers offshore/near shore Northern and Central Norway. Offshore/near shore Southern Norway has also been included, while the Barents Sea is not described in any detail. The most detailed studies are on the Tilje and Aare Formations on the Troendelag Platform off Mid-Norway and on the Sognefjord, Fensfjord and Krossfjord Formations, southeast of the Troll Field off Western Norway. The Tilje Formation has been chosen as one of the cases to be studied in greater detail (numerical modelling) in the project. This report shows that offshore Norway there are concentrations of large CO2 point sources in the Haltenbanken, the Viking Graben/Tampen Spur area, the Southern Viking Graben and the Central Trough, while onshore Norway there are concentrations of point sources in the Oslofjord/Porsgrund area, along the coast of western Norway and in the Troendelag. A number of aquifers with large theoretical CO2 storage potential are pointed out in the North Sea, the Norwegian Sea and in the Southern Barents Sea. The storage capacity in the depth interval 0.8-4 km below sea level is estimated to be ca. 13 Gt (13,000,000,000 tonnes) of CO2 in geological traps (outside hydrocarbon fields), while the storage capacity in aquifers not confined to traps is estimated to be at least 280 Gt CO2. (Author)

  5. Fast and Accurate Rat Head Motion Tracking With Point Sources for Awake Brain PET.

    Science.gov (United States)

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-07-01

    To avoid the confounding effects of anesthesia and immobilization stress in rat brain positron emission tomography (PET), motion tracking-based unrestrained awake rat brain imaging is being developed. In this paper, we propose a fast and accurate rat head motion tracking method based on small PET point sources. PET point sources (3-4) attached to the rat's head are tracked in image space using 15-32 ms time frames. Our point source tracking (PST) method was validated using a manually moved microDerenzo phantom that was simultaneously tracked with an optical tracker (OT) for comparison. The PST method was further validated in three awake [18F]FDG rat brain scans. Compared with the OT, the PST-based correction at the same frame rate (31.2 Hz) reduced the reconstructed FWHM by 0.39-0.66 mm for the different tested rod sizes of the microDerenzo phantom. The FWHM could be further reduced by another 0.07-0.13 mm when increasing the PST frame rate (66.7 Hz). Regional brain [18F]FDG uptake in the motion-corrected scan was strongly correlated with that of the anesthetized reference scan for all three cases. The proposed PST method allowed excellent and reproducible motion correction in awake in vivo experiments. In addition, there is no need for specialized tracking equipment or additional calibrations, the point sources are practically imperceptible to the rat, and PST is ideally suited for small-bore scanners, where optical tracking might be challenging.

  6. Temperature Effects of Point Sources, Riparian Shading, and Dam Operations on the Willamette River, Oregon

    Science.gov (United States)

    Rounds, Stewart A.

    2007-01-01

    Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate

  7. Non-point Source Pollutants Loss of Planting Industry in the Yunnan Plateau Lake Basin, China

    Directory of Open Access Journals (Sweden)

    ZHAO Zu-jun

    2017-12-01

    Full Text Available Non-point source pollution from the planting industry has become a major factor affecting the quality and safety of the water environment in our country. In recent years, studies have shown that the loss of nitrogen and phosphorus from agricultural chemical fertilizers has led to increasingly serious non-point source pollution. By means of the loss coefficient method and spatial overlay analysis, the loss amount, loss intensity and spatial distribution characteristics of total nitrogen, total phosphorus, ammonium nitrogen and nitrate nitrogen were analyzed for the Fuxian Lake, Xingyun Lake and Qilu Lake basins in 2015. The results showed that the loss of total nitrogen was the highest in the three basins, followed by ammonium nitrogen, nitrate nitrogen and total phosphorus, with loss intensities ranging over 2.73-22.07, 0.003-3.52, 0.01-2.25 and 0.05-1.36 kg·hm-2, respectively. Total nitrogen and total phosphorus losses were mainly concentrated in the southwest of Qilu Lake and the west and south of Xingyun Lake. Ammonium nitrogen and nitrate nitrogen losses were mainly concentrated in the south of Qilu Lake and the south and north of Xingyun Lake. The loss of nitrogen and phosphorus was mainly derived from cash crops and rice. Therefore, zoning, grading and phased prevention and control schemes were proposed, in order to provide a scientific basis for controlling non-point source pollution in the study area.

  8. Normalized Point Source Sensitivity for Off-Axis Optical Performance Evaluation of the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George

    2010-01-01

    The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.

  9. Temporal-spatial distribution of non-point source pollution in a drinking water source reservoir watershed based on SWAT

    Directory of Open Access Journals (Sweden)

    M. Wang

    2015-05-01

    Full Text Available The conservation of drinking water source reservoirs is closely related to regional economic development and people's livelihoods. Research on the non-point pollution characteristics of their watersheds is crucial for reservoir security. The Tang Pu Reservoir watershed was selected as the study area. A non-point pollution model of the Tang Pu Reservoir was established based on the SWAT (Soil and Water Assessment Tool) model. The model was calibrated and used to analyse the temporal-spatial distribution patterns of total nitrogen (TN) and total phosphorus (TP). The results showed that the losses of TN and TP in the reservoir watershed were related to precipitation in the flood season, and the annual changes showed an "M" shape. It was found that the contributions of TN and TP losses accounted for 84.5% and 85.3% in high flow years and 70.3% and 69.7% in low flow years, respectively. The contributions in normal flow years were 62.9% and 63.3%, respectively. The TN and TP mainly arise from Wangtan town, Gulai town, and Wangyuan town, etc. In addition, it was found that the sources of TN and TP showed consistency in space.

  10. A scanning point source for quality control of FOV uniformity in GC-PET imaging

    International Nuclear Information System (INIS)

    Bergmann, H.; Minear, G.; Dobrozemsky, G.; Nowotny, R.; Koenig, B.

    2002-01-01

    Aim: PET imaging with coincidence cameras (GC-PET) requires additional quality control procedures to check the function of coincidence circuitry and detector zoning. In particular, the uniformity response over the field of view needs special attention since it is known that coincidence counting mode may suffer from non-uniformity effects not present in single photon mode. Materials and methods: An inexpensive linear scanner with a stepper motor and a digital interface to a PC with software allowing versatile scanning modes was developed. The scanner is used with a source holder containing a Sodium-22 point source. While moving the source along the axis of rotation of the GC-PET system, a tomographic acquisition takes place. The scan covers the full axial field of view of the 2-D or 3-D scatter frame. Depending on the acquisition software, point source scanning takes place continuously while only one projection is acquired or is done in step-and-shoot mode with the number of positions equal to the number of gantry steps. Special software was developed to analyse the resulting list mode acquisition files and to produce an image of the recorded coincidence events of each head. Results: Uniformity images of coincidence events were obtained after further correction for systematic sensitivity variations caused by acquisition geometry. The resulting images are analysed visually and by calculating NEMA uniformity indices as for a planar flood field. The method has been applied successfully to two different brands of GC-PET capable gamma cameras. Conclusion: Uniformity of GC-PET can be tested quickly and accurately with a routine QC procedure, using a Sodium-22 scanning point source and an inexpensive mechanical scanning device. The method can be used for both 2-D and 3-D acquisition modes and fills an important gap in the quality control system for GC-PET

  11. Source apportionment of nitrogen and phosphorus from non-point source pollution in Nansi Lake Basin, China.

    Science.gov (United States)

    Zhang, Bao-Lei; Cui, Bo-Hao; Zhang, Shu-Min; Wu, Quan-Yuan; Yao, Lei

    2018-05-03

    Nitrogen (N) and phosphorus (P) from non-point source (NPS) pollution in the Nansi Lake Basin greatly influence the water quality of Nansi Lake, which is a determinant factor for the success of the East Route of the South-North Water Transfer Project in China. This research improved the Johnes export coefficient model (ECM) by developing a method to determine the export coefficients of different land use types based on hydrological and water quality data. Taking NPS total nitrogen (TN) and total phosphorus (TP) as the study objects, this study estimated the contributions of different pollution sources and analyzed their spatial distributions based on the improved ECM. The results underlined that the method for obtaining output coefficients of land use types from hydrology and water quality data is feasible and accurate, and is suitable for the study of NPS pollution in large-scale basins. The average output structure of NPS TN from land use, rural breeding and rural life is 33.6%, 25.9%, and 40.5%, and that of NPS TP is 31.6%, 43.7%, and 24.7%, respectively. In particular, dry land was the main land use source for both NPS TN and TP pollution, contributing 81.3% and 81.8%, respectively. The counties of Zaozhuang, Tengzhou, Caoxian, Yuncheng, and Shanxian had higher contribution rates, and the counties of Dingtao, Juancheng, and Caoxian had the higher load intensities for both NPS TN and TP pollution. The results of this study improve the understanding of pollution source contributions and enable researchers and planners to focus on the most important sources and regions of NPS pollution.
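
    An export coefficient calculation in the spirit of the Johnes ECM is a weighted sum, L = Σ E_i · A_i, of export coefficients times source areas or counts. The sketch below uses placeholder coefficients and amounts, not the values derived in the study.

```python
# Sketch of an export coefficient model (ECM) load estimate. All coefficients and
# source amounts are placeholders, not values from the Nansi Lake Basin study.
export_coeff_tn = {   # kg N per hm^2 (land uses) or kg N per head/person per year
    "dry_land": 12.0,
    "paddy_field": 6.0,
    "rural_breeding": 0.8,
    "rural_life": 1.5,
}
source_amount = {     # hm^2 for land uses; heads or persons for the other sources
    "dry_land": 4200.0,
    "paddy_field": 1800.0,
    "rural_breeding": 35000.0,
    "rural_life": 52000.0,
}

tn_load_kg = {k: export_coeff_tn[k] * source_amount[k] for k in export_coeff_tn}
total = sum(tn_load_kg.values())
for source, load in sorted(tn_load_kg.items(), key=lambda kv: -kv[1]):
    print(f"{source:15s} {load:12.0f} kg  ({100 * load / total:.1f}%)")
```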

  12. IceCube point source searches using through-going muon tracks

    Energy Technology Data Exchange (ETDEWEB)

    Coenders, Stefan [TU Muenchen, Physik-Department, Excellence Cluster Universe, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    The IceCube neutrino observatory located at the South Pole is currently the largest neutrino telescope. Using through-going muon tracks, IceCube records approximately 130,000 events per year with a reconstruction accuracy as good as 0.7 deg for energies of 10 TeV. After analysing an integrated timescale of 4 years, no sources of neutrinos have yet been observed. This talk deals with the current progress in point-source searches, adding another two years of data recorded in the years 2012 and 2013. In a combined search with starting events, sources with hard and soft spectra, with and without cut-offs, are characterised.

  13. Few molecule SERS detection using nanolens based plasmonic nanostructure: application to point mutation detection

    KAUST Repository

    Das, Gobind

    2016-10-27

    Advancements in nanotechnology fabrication techniques allow the possibility to design and fabricate a device with a minimum gap (<10 nm) between the composing nanostructures in order to obtain better control over the creation and spatial definition of plasmonic hot-spots. The present study is intended to show the fabrication of nanolens and their application to single/few molecules detection. Theoretical simulations were performed on different designs of real structures, including comparison of rough and smooth surfaces. Various molecules (rhodamine 6G, benzenethiol and BRCA1/BRCT peptides) were examined in this regard. Single molecule detection was possible for synthetic peptides, with a possible application in early detection of diseases. © The Royal Society of Chemistry.

  14. Few molecule SERS detection using nanolens based plasmonic nanostructure: application to point mutation detection

    KAUST Repository

    Das, Gobind; Alrasheed, Salma; Coluccio, Maria Laura; Gentile, Francesco; Nicastri, Annalisa; Candeloro, Patrizio; Cuda, Giovanni; Perozziello, Gerardo; Di Fabrizio, Enzo M.

    2016-01-01

    Advancements in nanotechnology fabrication techniques allow the possibility to design and fabricate a device with a minimum gap (<10 nm) between the composing nanostructures in order to obtain better control over the creation and spatial definition of plasmonic hot-spots. The present study is intended to show the fabrication of nanolens and their application to single/few molecules detection. Theoretical simulations were performed on different designs of real structures, including comparison of rough and smooth surfaces. Various molecules (rhodamine 6G, benzenethiol and BRCA1/BRCT peptides) were examined in this regard. Single molecule detection was possible for synthetic peptides, with a possible application in early detection of diseases. © The Royal Society of Chemistry.

  15. Accelerating fissile material detection with a neutron source

    Science.gov (United States)

    Rowland, Mark S.; Snyderman, Neal J.

    2018-01-30

    A neutron detector system for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly to count neutrons from the unknown source and to detect excess grouped neutrons that identify fission in the unknown source. The system includes a Poisson neutron generator for in-beam interrogation of a possible fissile neutron source and a DC power supply that exhibits electrical ripple on the order of less than one part per million. Certain voltage multiplier circuits, such as Cockcroft-Walton voltage multipliers, are used to enhance the effectiveness of series resistor-inductor circuit components to reduce the ripple associated with traditional AC-rectified, high-voltage DC power supplies.
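
    The abstract refers to detecting excess grouped neutrons; a textbook statistic for that purpose is the Feynman-Y excess, the variance-to-mean ratio of counts in fixed time gates minus one, which is near zero for a purely Poisson source and positive for correlated fission neutrons. The sketch below illustrates that statistic on synthetic arrival times and is not necessarily the algorithm of the patented system.

```python
# Generic grouped-count illustration: Feynman-Y excess for random vs. correlated arrivals.
import numpy as np

def feynman_y(event_times, gate_width):
    """Variance-to-mean excess of neutron counts in consecutive time gates."""
    t = np.sort(np.asarray(event_times, dtype=float))
    n_gates = int((t[-1] - t[0]) // gate_width)
    counts, _ = np.histogram(t, bins=n_gates, range=(t[0], t[0] + n_gates * gate_width))
    return counts.var() / counts.mean() - 1.0

rng = np.random.default_rng(4)

# Uncorrelated (Poisson) source: exponential inter-arrival times -> Y close to 0.
poisson_source = np.cumsum(rng.exponential(1e-4, 200_000))
print("Poisson source    Y =", round(feynman_y(poisson_source, 1e-3), 3))

# Correlated source: each trigger emits a burst of nearly simultaneous counts -> Y > 0.
bursts = np.cumsum(rng.exponential(5e-4, 40_000))
mult = rng.poisson(3, bursts.size)
correlated = np.repeat(bursts, mult) + rng.exponential(2e-5, mult.sum())
print("Correlated source Y =", round(feynman_y(correlated, 1e-3), 3))
```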

  16. VizieR Online Data Catalog: ChaMP X-ray point source catalog (Kim+, 2007)

    Science.gov (United States)

    Kim, M.; Kim, D.-W.; Wilkes, B. J.; Green, P. J.; Kim, E.; Anderson, C. S.; Barkhouse, W. A.; Evans, N. R.; Ivezic, Z.; Karovska, M.; Kashyap, V. L.; Lee, M. G.; Maksym, P.; Mossman, A. E.; Silverman, J. D.; Tananbaum, H. D.

    2009-01-01

    We present the Chandra Multiwavelength Project (ChaMP) X-ray point source catalog with ~6800 X-ray sources detected in 149 Chandra observations covering ~10 deg². The full ChaMP catalog sample is 7 times larger than the initial published ChaMP catalog. The exposure time of the fields in our sample ranges from 0.9 to 124 ks, corresponding to a deepest X-ray flux limit of f(0.5-8.0) = 9×10^-16 erg cm^-2 s^-1. The ChaMP X-ray data have been uniformly reduced and analyzed with ChaMP-specific pipelines and then carefully validated by visual inspection. The ChaMP catalog includes X-ray photometric data in eight different energy bands as well as X-ray spectral hardness ratios and colors. To best utilize the ChaMP catalog, we also present the source reliability, detection probability, and positional uncertainty. (10 data files).

  17. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
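
    The paired structural metric described above, tokenising two submissions and measuring their longest common token run, can be sketched with Python's difflib. The crude lexer and sample submissions below are illustrative assumptions; real engines such as MOSS and JPlag rely on far more robust tokenisation and fingerprinting.

```python
# Illustrative sketch of "tokenise, then search for long common substrings".
import re
from difflib import SequenceMatcher

def tokenize(source_code):
    """Crude lexer: identifiers collapse to a generic token so renaming does not help."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|==|!=|<=|>=|[^\s\w]", source_code)
    keywords = {"def", "return", "if", "else", "for", "while", "in", "import"}
    return [tok if (tok in keywords or not tok[0].isalpha() and tok[0] != "_") else "ID"
            for tok in tokens]

def longest_common_token_run(a, b):
    ta, tb = tokenize(a), tokenize(b)
    m = SequenceMatcher(None, ta, tb, autojunk=False).find_longest_match(0, len(ta), 0, len(tb))
    return m.size, max(len(ta), len(tb))

sub1 = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
sub2 = "def acc(values):\n    t = 0\n    for v in values:\n        t += v\n    return t"
run, n = longest_common_token_run(sub1, sub2)
print(f"longest shared token run: {run} of {n} tokens")   # identical apart from renaming
```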

  18. Galactic Sources Detected in the NuSTAR Serendipitous Survey

    Energy Technology Data Exchange (ETDEWEB)

    Tomsick, John A.; Clavel, Maïca; Chiu, Jeng-Lun [Space Sciences Laboratory, 7 Gauss Way, University of California, Berkeley, CA 94720-7450 (United States); Lansbury, George B.; Aird, James [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Rahoui, Farid [Department of Astronomy, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States); Fornasini, Francesca M.; Hong, JaeSub; Grindlay, Jonathan E. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Alexander, David M. [Centre for Extragalactic Astronomy, Department of Physics, University of Durham, South Road, Durham DH1 3LE (United Kingdom); Bodaghee, Arash [Georgia College and State University, Milledgeville, GA 31061 (United States); Hailey, Charles J.; Mori, Kaya [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Harrison, Fiona A. [California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Krivonos, Roman A. [Space Research Institute of the Russian Academy of Sciences, Profsoyuznaya Str. 84/32, 117997, Moscow (Russian Federation); Stern, Daniel [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States)

    2017-06-01

    The Nuclear Spectroscopic Telescope Array (NuSTAR) provides an improvement in sensitivity at energies above 10 keV by two orders of magnitude over non-focusing satellites, making it possible to probe deeper into the Galaxy and universe. Lansbury and collaborators recently completed a catalog of 497 sources serendipitously detected in the 3–24 keV band using 13 deg² of NuSTAR coverage. Here, we report on an optical and X-ray study of 16 Galactic sources in the catalog. We identify 8 of them as stars (but some or all could have binary companions), and use information from Gaia to report distances and X-ray luminosities for 3 of them. There are 4 CVs or CV candidates, and we argue that NuSTAR J233426–2343.9 is a relatively strong CV candidate based partly on an X-ray spectrum from XMM-Newton. NuSTAR J092418–3142.2, which is the brightest serendipitous source in the Lansbury catalog, and NuSTAR J073959–3147.8 are low-mass X-ray binary candidates, but it is also possible that these 2 sources are CVs. One of the sources is a known high-mass X-ray binary (HMXB), and NuSTAR J105008–5958.8 is a new HMXB candidate that has strong Balmer emission lines in its optical spectrum and a hard X-ray spectrum. We discuss the implications of finding these HMXBs for the surface density (log N-log S) and luminosity function of Galactic HMXBs. We conclude that with the large fraction of unclassified sources in the Galactic plane detected by NuSTAR in the 8–24 keV band, there could be a significant population of low-luminosity HMXBs.

  19. Diffusion of dust particles from a point-source above ground level

    International Nuclear Information System (INIS)

    Hassan, M.H.A.; Eltayeb, I.A.

    1998-10-01

    A pollutant of small particles is emitted by a point source at a height h above ground level in an atmosphere in which a uni-directional wind of speed U is prevailing. The pollutant is subjected to diffusion in all directions in the presence of advection and settling due to gravity. The equation governing the concentration of the pollutant is studied for the case in which the wind speed and the different components of the diffusion tensor are proportional to the distance above ground level and the source has uniform strength. Adopting a Cartesian system of coordinates in which the x-axis lies along the direction of the wind velocity, the z-axis is vertically upwards and the y-axis completes the right-handed triad, the solution for the concentration c(x,y,z) is obtained in closed form. The relative importance of the components of diffusion along the three axes is discussed. It is found that for any plane y = constant (= A), c(x,y,z) is concentrated along a curve of "extensive pollution". In the plane A = 0, the concentration decreases along the line of extensive pollution as we move away from the source. However, for planes A ≠ 0, the line of extensive pollution possesses a point of accumulation, which lies at a nonzero value of x. As we move away from the plane A = 0, the point of accumulation moves laterally away from the plane x = 0 and towards the plane z = 0. The presence of the point of accumulation is entirely due to the presence of lateral diffusion. (author)
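
    For reference, the steady-state balance described above, advection by the wind, turbulent diffusion and gravitational settling from a point source of strength Q at height h, is commonly written as below. This is the standard form consistent with the abstract rather than the paper's exact notation; U(z), K_y(z) and K_z(z) are taken proportional to z and w_s is the settling speed.

```latex
% Standard advection-diffusion-settling balance (sketch; notation may differ from the paper).
\[
  U(z)\,\frac{\partial c}{\partial x}
  \;=\;
  \frac{\partial}{\partial y}\!\left(K_y(z)\,\frac{\partial c}{\partial y}\right)
  + \frac{\partial}{\partial z}\!\left(K_z(z)\,\frac{\partial c}{\partial z}\right)
  + w_s\,\frac{\partial c}{\partial z}
  + Q\,\delta(x)\,\delta(y)\,\delta(z-h)
\]
```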

  20. Vanishing points detection using combination of fast Hough transform and deep learning

    Science.gov (United States)

    Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry

    2018-04-01

    In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) approach and the fast Hough transform algorithm. We show how to define a fast Hough transform neural network layer and how to use it in order to increase the usability of the neural network approach for the vanishing point detection task. Our algorithm consists of a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image. This distribution can be used to find candidate vanishing points. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be efficiently implemented for mobile GPUs and CPUs.

  1. Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications

    Directory of Open Access Journals (Sweden)

    Rafael Pérez-Torres

    2016-10-01

    Full Text Available The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications as a way to adapt the information and services provided to smartphone users according to their moving patterns. Location-based applications usually employ the GPS receiver along with Wi-Fi hot-spots and cellular cell tower mechanisms for estimating user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay points detection. Such a Mobile Cloud Computing approach has been successfully employed for extending the smartphone's battery lifetime by exchanging computation costs, assuming that on-device stay points detection is prohibitive. In this article, we propose and validate the feasibility of an alternative event-driven mechanism for stay points detection that is executed fully on-device and that provides higher energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, where a stream of GPS location updates is collected in the background, supporting duty cycling schemes, and incrementally analyzed following an event-driven paradigm for stay points detection. To evaluate the performance of the proposed middleware, real-world experiments were conducted under different stress levels, validating its power efficiency when compared against a Mobile Cloud Computing oriented solution.

  2. Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications.

    Science.gov (United States)

    Pérez-Torres, Rafael; Torres-Huitzil, César; Galeana-Zapién, Hiram

    2016-10-13

    The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications as a way to adapt the information and services provided to smartphone users according to their moving patterns. Location-based applications usually employ the GPS receiver along with Wi-Fi hot-spots and cellular cell tower mechanisms for estimating user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay points detection. Such a Mobile Cloud Computing approach has been successfully employed for extending the smartphone's battery lifetime by exchanging computation costs, assuming that on-device stay points detection is prohibitive. In this article, we propose and validate the feasibility of an alternative event-driven mechanism for stay points detection that is executed fully on-device and that provides higher energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, where a stream of GPS location updates is collected in the background, supporting duty cycling schemes, and incrementally analyzed following an event-driven paradigm for stay points detection. To evaluate the performance of the proposed middleware, real-world experiments were conducted under different stress levels, validating its power efficiency when compared against a Mobile Cloud Computing oriented solution.
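
    The classic threshold-based formulation that on-device stay point detectors typically build on declares a stay point when consecutive fixes remain within a distance threshold for at least a minimum duration. The sketch below follows that formulation; the 200 m / 20 min thresholds and the fix format are assumptions, not the parameters of the middleware described above.

```python
# Threshold-based stay point extraction from a list of GPS fixes (lat, lon, unix_time).
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def stay_points(fixes, dist_m=200.0, time_s=20 * 60):
    """Return (mean_lat, mean_lon, arrival_time, leave_time) tuples."""
    result, i, n = [], 0, len(fixes)
    while i < n:
        j = i + 1
        while j < n and haversine_m(fixes[i][:2], fixes[j][:2]) <= dist_m:
            j += 1
        if fixes[j - 1][2] - fixes[i][2] >= time_s:        # stayed long enough in the region
            lats = [f[0] for f in fixes[i:j]]
            lons = [f[1] for f in fixes[i:j]]
            result.append((sum(lats) / len(lats), sum(lons) / len(lons),
                           fixes[i][2], fixes[j - 1][2]))
            i = j                                          # continue after the stay point
        else:
            i += 1
    return result
```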

  3. Dew inspired breathing-based detection of genetic point mutation visualized by naked eye

    Science.gov (United States)

    Xie, Liping; Wang, Tongzhou; Huang, Tianqi; Hou, Wei; Huang, Guoliang; Du, Yanan

    2014-09-01

    A novel label-free method based on breathing-induced vapor condensation was developed for detection of genetic point mutation. The dew-inspired detection was realized by integration of target-induced DNA ligation with rolling circle amplification (RCA). The vapor condensation induced by breathing transduced the RCA-amplified variances in DNA contents into visible contrast. The image could be recorded by a cell phone for further or even remote analysis. This green assay offers a naked-eye-reading method potentially applied for point-of-care liver cancer diagnosis in resource-limited regions.

  4. Detection of gaseous heavy water leakage points in CANDU 6 pressurized heavy water reactors

    International Nuclear Information System (INIS)

    Park, T-K.; Jung, S-H.

    1996-01-01

    During reactor operation, the heavy-water-filled primary coolant system in a CANDU 6 Pressurized Heavy Water Reactor (PHWR) may leak during routine plant operations via components and mechanical joints, during inadvertent operations, etc. Early detection of leak points is therefore important to maintain plant safety and economy. There are many independent systems to monitor and recover heavy water leakage in a CANDU 6 PHWR. A methodology for early detection based on operating experience from these systems is investigated in this paper. In addition, the four symptoms of D2O leakage, the associated process for clarifying and verifying the leakage, and the probable points of leakage are discussed. (author)

  5. Fault Detection and Diagnosis of Railway Point Machines by Sound Analysis

    Science.gov (United States)

    Lee, Jonguk; Choi, Heesu; Park, Daihee; Chung, Yongwha; Kim, Hee-Young; Yoon, Sukhan

    2016-01-01

    Railway point devices act as actuators that provide different routes to trains by driving switchblades from the current position to the opposite one. Point failure can significantly affect railway operations, with potentially disastrous consequences. Therefore, early detection of anomalies is critical for monitoring and managing the condition of rail infrastructure. We present a data mining solution that utilizes audio data to efficiently detect and diagnose faults in railway condition monitoring systems. The system enables extracting mel-frequency cepstrum coefficients (MFCCs) from audio data with reduced feature dimensions using attribute subset selection, and employs support vector machines (SVMs) for early detection and classification of anomalies. Experimental results show that the system enables cost-effective detection and diagnosis of faults using a cheap microphone, with accuracy exceeding 94.1% whether used alone or in combination with other known methods. PMID:27092509
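
    A minimal sketch of the MFCC-plus-SVM pipeline follows, using librosa for feature extraction and scikit-learn for classification on synthetic stand-in clips. The feature summary, kernel and synthetic signals are assumptions, and the attribute subset selection step used in the paper is omitted.

```python
# Sketch: MFCC summary features + SVM on synthetic stand-ins for point-machine recordings.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SR = 16_000

def mfcc_vector(signal, sr=SR, n_mfcc=13):
    """Summarize a clip as the mean and std of its MFCCs (fixed-length feature vector)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic clips: "normal" = low-frequency hum, "fault" = hum plus intermittent rattle.
rng = np.random.default_rng(5)
def make_clip(fault):
    t = np.arange(SR) / SR
    clip = 0.5 * np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(size=SR)
    if fault:
        clip += 0.3 * rng.normal(size=SR) * (np.sin(2 * np.pi * 7 * t) > 0)
    return clip.astype(np.float32)

labels = np.array(["normal", "fault"] * 20)
X = np.array([mfcc_vector(make_clip(lab == "fault")) for lab in labels])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X[:30], labels[:30])                               # simple split; labels alternate
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```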

  6. Vehicle parts detection based on Faster - RCNN with location constraints of vehicle parts feature point

    Science.gov (United States)

    Yang, Liqin; Sang, Nong; Gao, Changxin

    2018-03-01

    Vehicle parts detection plays an important role in public transportation safety and mobility. The detection of vehicle parts is to detect the position of each vehicle part. We propose a new approach that combines Faster RCNN and a three-level cascaded convolutional neural network (DCNN). The output of Faster RCNN is a series of bounding boxes with coordinate information, from which we can locate vehicle parts. The DCNN can precisely predict the feature point position, which is the center of a vehicle part. We design an output strategy that combines these two results. This has two advantages. First, the quality of the bounding boxes is greatly improved, which means vehicle part feature point positions can be located more precisely. Second, we preserve the positional relationship between vehicle parts and effectively improve the validity and reliability of the result. By using our algorithm, the performance of vehicle parts detection improves markedly compared with Faster RCNN.

  7. CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES IN NEARBY GALAXIES. II. X-RAY LUMINOSITY FUNCTIONS AND ULTRALUMINOUS X-RAY SOURCES

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Song; Qiu, Yanli; Liu, Jifeng [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Bregman, Joel N., E-mail: songw@bao.ac.cn, E-mail: jfliu@bao.ac.cn [University of Michigan, Ann Arbor, MI 48109 (United States)

    2016-09-20

    Based on the recently completed Chandra/ACIS survey of X-ray point sources in nearby galaxies, we study the X-ray luminosity functions (XLFs) for X-ray point sources in different types of galaxies and the statistical properties of ultraluminous X-ray sources (ULXs). Uniform procedures are developed to compute the detection threshold, to estimate the foreground/background contamination, and to calculate the XLFs for individual galaxies and groups of galaxies, resulting in an XLF library of 343 galaxies of different types. With the large number of surveyed galaxies, we have studied the XLFs and ULX properties across different host galaxy types, and confirm with good statistics that the XLF slope flattens from lenticular (α ∼ 1.50 ± 0.07) to elliptical (∼1.21 ± 0.02), to spirals (∼0.80 ± 0.02), to peculiars (∼0.55 ± 0.30), and to irregulars (∼0.26 ± 0.10). The XLF break dividing the neutron star and black hole binaries is also confirmed, albeit at quite different break luminosities for different types of galaxies. A radial dependency is found for ellipticals, with a flatter XLF slope for sources located between D25 and 2 D25, suggesting the XLF slopes in the outer region of early-type galaxies are dominated by low-mass X-ray binaries in globular clusters. This study shows that the ULX rate in early-type galaxies is 0.24 ± 0.05 ULXs per surveyed galaxy, on a 5σ confidence level. The XLF for ULXs in late-type galaxies extends smoothly until it drops abruptly around 4 × 10^40 erg s^-1, and this break may suggest a mild boundary between the stellar black hole population possibly including 30 M⊙ black holes with super-Eddington radiation and intermediate mass black holes.

  8. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Full Text Available Wind turbine yaw control plays an important role in increasing wind turbine production and also in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error of wind turbines online in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error can be useful for wind farm operators to realize prompt and cost-effective maintenance of yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is first defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The zero-point shifting fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to deeply evaluate the power performance. An indicator is proposed in this method for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is considered to be the yaw angle measurement error in our work. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms proved the effectiveness of the proposed method in detecting the zero-point shifting fault and also in improving wind turbine performance. The results of the proposed method can be useful for wind farm operators to realize prompt adjustment if there exists a large error in yaw angle measurement.
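
    The binning idea can be sketched as follows: group SCADA records into wind-speed and yaw-error bins, score the relative power in each yaw bin, and take the best-performing yaw bin as the estimated zero-point shift. In the sketch below the bin widths, the indicator and the alarm threshold are assumptions rather than the paper's exact definitions.

```python
# Sketch: estimate a yaw zero-point shift from SCADA data by wind-speed / yaw-error binning.
import numpy as np
import pandas as pd

def estimate_yaw_offset(scada: pd.DataFrame, alarm_deg: float = 5.0):
    """scada columns: wind_speed (m/s), yaw_error (deg), power (kW)."""
    df = scada.copy()
    df["ws_bin"] = (df["wind_speed"] // 1.0).astype(int)           # 1 m/s wind-speed bins
    df["yaw_bin"] = df["yaw_error"] // 2.0 * 2.0                   # 2 deg yaw-error bins
    # Indicator: mean power in a yaw bin, normalized by the best yaw bin at the same wind speed.
    per_bin = df.groupby(["ws_bin", "yaw_bin"])["power"].mean().reset_index()
    per_bin["rel_power"] = per_bin.groupby("ws_bin")["power"].transform(lambda p: p / p.max())
    indicator = per_bin.groupby("yaw_bin")["rel_power"].mean()
    offset = indicator.idxmax()
    return offset, abs(offset) > alarm_deg

# Synthetic example in which the true zero-point shift is +6 degrees:
rng = np.random.default_rng(6)
ws = rng.uniform(4, 12, 20_000)
yaw = rng.normal(6.0, 8.0, 20_000)                                 # measured yaw error, deg
power = 0.3 * ws**3 * np.cos(np.radians(yaw - 6.0))**3 + rng.normal(0, 5, 20_000)
offset, alarm = estimate_yaw_offset(pd.DataFrame({"wind_speed": ws, "yaw_error": yaw, "power": power}))
print(f"estimated zero-point shift ~ {offset:.0f} deg, alarm: {alarm}")
```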

  9. Change-Point Detection Method for Clinical Decision Support System Rule Monitoring.

    Science.gov (United States)

    Liu, Siqi; Wright, Adam; Hauskrecht, Milos

    2017-06-01

    A clinical decision support system (CDSS) and its components can malfunction for various reasons. Monitoring the system and detecting its malfunctions can help one avoid potential mistakes and their associated costs. In this paper, we investigate the problem of detecting changes in the CDSS operation, in particular its monitoring and alerting subsystem, by monitoring its rule firing counts. The detection should be performed online; that is, whenever a new datum arrives, we want to have a score indicating how likely it is that there is a change in the system. We develop a new method based on Seasonal-Trend decomposition and likelihood ratio statistics to detect the changes. Experiments on real and simulated data show that our method has a lower delay in detection compared with existing change-point detection methods.
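
    A simplified version of such monitoring can be sketched by deseasonalizing the daily rule-firing counts with an STL decomposition and scoring the newest residual. The z-score used below stands in for the paper's likelihood-ratio statistic, and the simulated counts are purely illustrative.

```python
# Sketch: STL-deseasonalized rule-firing counts, with the newest residual scored as a z-value.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(7)
days = pd.date_range("2024-01-01", periods=120, freq="D")
weekly = 20 + 8 * np.sin(2 * np.pi * np.arange(120) / 7)     # weekly usage pattern
counts = rng.poisson(weekly)
counts[100:] = rng.poisson(weekly[100:] * 0.3)               # simulated rule malfunction

res = STL(pd.Series(counts.astype(float), index=days), period=7, robust=True).fit()
baseline = res.resid.iloc[:90]                               # residuals before the suspect period
z_latest = (res.resid.iloc[-1] - baseline.mean()) / baseline.std()
print(f"z-score of most recent day: {z_latest:.1f}")         # large |z| -> possible change
```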

  10. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    Science.gov (United States)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
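
    As a toy stand-in for the kind of signal-to-noise thresholded search described here (this is not the LEXTRCT algorithm), the sketch below assumes counting-statistics noise, so the local S/N is roughly the summed excess over the square root of the expected background in a box, and a scale factor mimics the empirical rescaling mentioned for flux-integrating solar telescopes. Box size, threshold and scale are placeholders.

        import numpy as np
        from scipy.ndimage import uniform_filter, label

        def find_bright_points(image, box=5, snr_min=4.0, exclude_mask=None, scale=1.0):
            """Label connected regions whose local S/N exceeds snr_min.
            `exclude_mask` marks saturated pixels (flares/active regions) to ignore."""
            img = np.array(image, dtype=float)
            if exclude_mask is not None:
                img[exclude_mask] = np.nan                        # drop excluded pixels
            background = np.nanmedian(img)                        # crude global background level
            excess = uniform_filter(np.nan_to_num(img - background), size=box) * box**2
            noise = np.sqrt(max(background, 1.0) * box**2)        # counting-statistics noise in a box
            snr = scale * excess / noise
            labels, n_sources = label(snr > snr_min)              # connected above-threshold regions
            return labels, n_sources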

  11. Mercury exposure in terrestrial birds far downstream of an historical point source

    International Nuclear Information System (INIS)

    Jackson, Allyson K.; Evers, David C.; Folsom, Sarah B.; Condon, Anne M.; Diener, John; Goodrick, Lizzie F.; McGann, Andrew J.; Schmerfeld, John; Cristol, Daniel A.

    2011-01-01

    Mercury (Hg) is a persistent environmental contaminant found in many freshwater and marine ecosystems. Historical Hg contamination in rivers can impact the surrounding terrestrial ecosystem, but there is little known about how far downstream this contamination persists. In 2009, we sampled terrestrial forest songbirds at five floodplain sites up to 137 km downstream of an historical source of Hg along the South and South Fork Shenandoah Rivers (Virginia, USA). We found that blood total Hg concentrations remained elevated over the entire sampling area and there was little evidence of decline with distance. While it is well known that Hg is a pervasive and long-lasting aquatic contaminant, it has only been recently recognized that it also biomagnifies effectively in floodplain forest food webs. This study extends the area of concern for terrestrial habitats near contaminated rivers for more than 100 km downstream from a waterborne Hg point source. - Highlights: → We report blood mercury levels for terrestrial songbirds downstream of contamination. → Blood mercury levels remain elevated above reference for at least 137 km downstream. → Trends vary based on foraging guild and migration strategy. → Mercury affects terrestrial biota farther downstream than previously documented. - Blood mercury levels of forest songbirds remain elevated above reference levels for at least 137 km downstream of historical point source.

  12. Mercury exposure in terrestrial birds far downstream of an historical point source

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, Allyson K., E-mail: allyson.jackson@briloon.org [Biodiversity Research Institute, 19 Flaggy Meadow Road, Gorham, ME 04038 (United States); Institute for Integrative Bird Behavior Studies, Department of Biology, College of William and Mary, PO Box 8795, Williamsburg, VA 23187 (United States); Evers, David C.; Folsom, Sarah B. [Biodiversity Research Institute, 19 Flaggy Meadow Road, Gorham, ME 04038 (United States); Condon, Anne M. [U.S. Fish and Wildlife Service, 6669 Short Lane, Gloucester, VA 23061 (United States); Diener, John; Goodrick, Lizzie F. [Biodiversity Research Institute, 19 Flaggy Meadow Road, Gorham, ME 04038 (United States); McGann, Andrew J. [Institute for Integrative Bird Behavior Studies, Department of Biology, College of William and Mary, PO Box 8795, Williamsburg, VA 23187 (United States); Schmerfeld, John [U.S. Fish and Wildlife Service, 6669 Short Lane, Gloucester, VA 23061 (United States); Cristol, Daniel A. [Institute for Integrative Bird Behavior Studies, Department of Biology, College of William and Mary, PO Box 8795, Williamsburg, VA 23187 (United States)

    2011-12-15

    Mercury (Hg) is a persistent environmental contaminant found in many freshwater and marine ecosystems. Historical Hg contamination in rivers can impact the surrounding terrestrial ecosystem, but there is little known about how far downstream this contamination persists. In 2009, we sampled terrestrial forest songbirds at five floodplain sites up to 137 km downstream of an historical source of Hg along the South and South Fork Shenandoah Rivers (Virginia, USA). We found that blood total Hg concentrations remained elevated over the entire sampling area and there was little evidence of decline with distance. While it is well known that Hg is a pervasive and long-lasting aquatic contaminant, it has only been recently recognized that it also biomagnifies effectively in floodplain forest food webs. This study extends the area of concern for terrestrial habitats near contaminated rivers for more than 100 km downstream from a waterborne Hg point source. - Highlights: > We report blood mercury levels for terrestrial songbirds downstream of contamination. > Blood mercury levels remain elevated above reference for at least 137 km downstream. > Trends vary based on foraging guild and migration strategy. > Mercury affects terrestrial biota farther downstream than previously documented. - Blood mercury levels of forest songbirds remain elevated above reference levels for at least 137 km downstream of historical point source.

  13. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    International Nuclear Information System (INIS)

    Tohme, Michel S; Qi Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 x 3 line phantom, an ultra-micro resolution phantom and a 22 Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  14. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local universe

    DEFF Research Database (Denmark)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-01-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe....... Assuming that the distribution of the neutrino sources follows that of matter we look for correlations between `warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance...... (including that of IceCube-Gen2) we demonstrate that sources with local density exceeding $10^{-6} \\, \\text{Mpc}^{-3}$ and neutrino luminosity $L_{\

  15. A systematic analysis of the Braitenberg vehicle 2b for point-like stimulus sources

    International Nuclear Information System (INIS)

    Rañó, Iñaki

    2012-01-01

    Braitenberg vehicles have been used experimentally for decades in robotics with limited empirical understanding. This paper presents the first mathematical model of the vehicle 2b, displaying so-called aggression behaviour, and analyses the possible trajectories for point-like smooth stimulus sources. This sensory-motor steering control mechanism is used to implement biologically grounded target approach, target-seeking or obstacle-avoidance behaviour. However, the analysis of the resulting model reveals that complex and unexpected trajectories can result even for point-like stimuli. We also prove how the implementation of the controller and the vehicle morphology interact to affect the behaviour of the vehicle. This work provides a better understanding of Braitenberg vehicle 2b, explains experimental results and paves the way for a formally grounded application on robotics as well as for a new way of understanding target seeking in biology. (paper)
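
    For orientation, vehicle 2b couples each wheel to the contralateral sensor with excitatory connections, so the wheel on the side away from a stimulus spins faster and the vehicle turns towards it. A minimal forward-Euler simulation with a point-like stimulus, offered purely as an illustration of the controller analysed in the paper, is sketched below; the gain, sensor decay law and geometry are arbitrary.

        import numpy as np

        def simulate_vehicle_2b(steps=2000, dt=0.01, gain=1.0, half_width=0.1,
                                source=(2.0, 1.0)):
            """Braitenberg vehicle 2b: left wheel driven by the right sensor and vice
            versa; stimulus intensity decays as 1/(1 + distance**2) from a point source."""
            src = np.asarray(source, dtype=float)
            pos, heading = np.array([0.0, 0.0]), 0.0
            trajectory = []
            for _ in range(steps):
                left_dir = np.array([-np.sin(heading), np.cos(heading)])   # vehicle's left side
                s_left = 1.0 / (1.0 + np.sum((pos + half_width * left_dir - src) ** 2))
                s_right = 1.0 / (1.0 + np.sum((pos - half_width * left_dir - src) ** 2))
                v_left, v_right = gain * s_right, gain * s_left            # crossed connections (2b)
                v = 0.5 * (v_left + v_right)                               # forward speed
                omega = (v_right - v_left) / (2 * half_width)              # differential-drive turn rate
                pos = pos + v * dt * np.array([np.cos(heading), np.sin(heading)])
                heading += omega * dt
                trajectory.append(pos.copy())
            return np.array(trajectory)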

  16. Assessment of Groundwater Susceptibility to Non-Point Source Contaminants Using Three-Dimensional Transient Indexes.

    Science.gov (United States)

    Zhang, Yong; Weissmann, Gary S; Fogg, Graham E; Lu, Bingqing; Sun, HongGuang; Zheng, Chunmiao

    2018-06-05

    Groundwater susceptibility to non-point source contamination is typically quantified by stable indexes, while groundwater quality evolution (or deterioration globally) can be a long-term process that may last for decades and exhibit strong temporal variations. This study proposes a three-dimensional (3-d), transient index map built upon physical models to characterize the complete temporal evolution of deep aquifer susceptibility. For illustration purposes, the previously developed backward travel time probability density (BTTPD) approach is extended to assess the 3-d deep groundwater susceptibility to non-point source contamination within a sequence stratigraphic framework observed in the Kings River fluvial fan (KRFF) aquifer. The BTTPD, which represents complete age distributions underlying a single groundwater sample in a regional-scale aquifer, is used as a quantitative, transient measure of aquifer susceptibility. The resultant 3-d imaging of susceptibility using the simulated BTTPDs in KRFF reveals the strong influence of regional-scale heterogeneity on susceptibility. The regional-scale incised-valley fill deposits increase the susceptibility of aquifers by enhancing rapid downward solute movement and displaying relatively narrow and young age distributions. In contrast, the regional-scale sequence-boundary paleosols within the open-fan deposits "protect" deep aquifers by slowing downward solute movement and displaying a relatively broad and old age distribution. Further comparison of the simulated susceptibility index maps to known contaminant distributions shows that these maps are generally consistent with the high concentration and quick evolution of 1,2-dibromo-3-chloropropane (DBCP) in groundwater around the incised-valley fill since the 1970s. This application demonstrates that the BTTPDs can be used as quantitative and transient measures of deep aquifer susceptibility to non-point source contamination.

  17. The XMM-SERVS survey: new XMM-Newton point-source catalog for the XMM-LSS field

    Science.gov (United States)

    Chen, C.-T. J.; Brandt, W. N.; Luo, B.; Ranalli, P.; Yang, G.; Alexander, D. M.; Bauer, F. E.; Kelson, D. D.; Lacy, M.; Nyland, K.; Tozzi, P.; Vito, F.; Cirasuolo, M.; Gilli, R.; Jarvis, M. J.; Lehmer, B. D.; Paolillo, M.; Schneider, D. P.; Shemmer, O.; Smail, I.; Sun, M.; Tanaka, M.; Vaccari, M.; Vignali, C.; Xue, Y. Q.; Banerji, M.; Chow, K. E.; Häußler, B.; Norris, R. P.; Silverman, J. D.; Trump, J. R.

    2018-04-01

    We present an X-ray point-source catalog from the XMM-Large Scale Structure survey region (XMM-LSS), one of the XMM-Spitzer Extragalactic Representative Volume Survey (XMM-SERVS) fields. We target the XMM-LSS region with 1.3 Ms of new XMM-Newton AO-15 observations, transforming the archival X-ray coverage in this region into a 5.3 deg2 contiguous field with uniform X-ray coverage totaling 2.7 Ms of flare-filtered exposure, with a 46 ks median PN exposure time. We provide an X-ray catalog of 5242 sources detected in the soft (0.5-2 keV), hard (2-10 keV), and/or full (0.5-10 keV) bands with a 1% expected spurious fraction determined from simulations. A total of 2381 new X-ray sources are detected compared to previous source catalogs in the same area. Our survey has flux limits of 1.7 × 10-15, 1.3 × 10-14, and 6.5 × 10-15 erg cm-2 s-1 over 90% of its area in the soft, hard, and full bands, respectively, which is comparable to those of the XMM-COSMOS survey. We identify multiwavelength counterpart candidates for 99.9% of the X-ray sources, of which 93% are considered as reliable based on their matching likelihood ratios. The reliabilities of these high-likelihood-ratio counterparts are further confirmed to be ≈97% reliable based on deep Chandra coverage over ≈5% of the XMM-LSS region. Results of multiwavelength identifications are also included in the source catalog, along with basic optical-to-infrared photometry and spectroscopic redshifts from publicly available surveys. We compute photometric redshifts for X-ray sources in 4.5 deg2 of our field where forced-aperture multi-band photometry is available; >70% of the X-ray sources in this subfield have either spectroscopic or high-quality photometric redshifts.

  18. Point-Source Contributions to the Water Quality of an Urban Stream

    Science.gov (United States)

    Little, S. F. B.; Young, M.; Lowry, C.

    2014-12-01

    Scajaquada Creek, which runs through the heart of the city of Buffalo, is a prime example of the ways in which human intervention and local geomorphology can impact water quality and urban hydrology. Beginning in the 1920s, the Creek has been partially channelized and connected to Buffalo's combined sewer system (CSS). At Forest Lawn Cemetery, where this study takes place, Scajaquada Creek emerges from a 3.5-mile tunnel built to route stream flow under the city. Collocated with the tunnel outlet is a discharge point for Buffalo's CSS, combined sewer outlet (CSO) #53. It is at this point that runoff and sanitary sewage discharge regularly during rain events. Initially, this study endeavored to create a spatial and temporal picture for this portion of the Creek, monitoring such parameters as conductivity, dissolved oxygen, pH, temperature, and turbidity, in addition to measuring Escherichia coli (E. coli) concentrations. As expected, these factors responded directly to seasonality, local geomorphology, and distance from the point source (CSO #53), displaying an overall linear response. However, the addition of nitrate and phosphate testing to the study revealed an entirely separate signal from that previously observed. Concentrations of these parameters did not respond to location in the same manner as E. coli. Instead of decreasing with distance from the CSO, a distinct periodicity was observed, correlating with a series of outflow pipes lining the stream banks. It is hypothesized that nitrate and phosphate occurring in this stretch of Scajaquada Creek originate not from the CSO, but from fertilizers used to maintain the lawns within the subwatershed. These results provide evidence of the complexity related to water quality issues in urban streams as a result of point- and nonpoint-source hydrologic inputs.

  19. Crowd-sourced BMS point matching and metadata maintenance with Babel

    DEFF Research Database (Denmark)

    Fürst, Jonathan; Chen, Kaifei; Katz, Randy H.

    2016-01-01

    Cyber-physical applications, deployed on top of Building Management Systems (BMS), promise energy saving and comfort improvement in non-residential buildings. Such applications are so far mainly deployed as research prototypes. The main roadblock to widespread adoption is the low quality of BMS... systems. Such applications access sensors and actuators through BMS metadata in the form of point labels. The naming of labels is however often inconsistent and incomplete. To tackle this problem, we introduce Babel, a crowd-sourced approach to the creation and maintenance of BMS metadata. In our system...

  20. Point-source reconstruction with a sparse light-sensor array for optical TPC readout

    International Nuclear Information System (INIS)

    Rutter, G; Richards, M; Bennieston, A J; Ramachers, Y A

    2011-01-01

    A reconstruction technique for sparse array optical signal readout is introduced and applied to the generic challenge of large-area readout of a large number of point light sources. This challenge finds a prominent example in future, large volume neutrino detector studies based on liquid argon. It is concluded that the sparse array option may be ruled out for reasons of required number of channels when compared to a benchmark derived from charge readout on wire-planes. Smaller-scale detectors, however, could benefit from this technology.

  1. Magnox fuel inventories. Experiment and calculation using a point source model

    International Nuclear Information System (INIS)

    Nair, S.

    1978-08-01

    The results of calculations of Magnox fuel inventories using the point source code RICE and associated Magnox reactor data set have been compared with experimental measurements for the actinide isotopes {sup 234,235,236,238}U, {sup 238,239,240,241,242}Pu, {sup 241,243}Am and {sup 242,244}Cm and the fission product isotopes {sup 142,143,144,145,146,150}Nd, {sup 95}Zr, {sup 134,137}Cs, {sup 144}Ce and daughter {sup 144}Pr produced in four samples of spent Magnox fuel spanning the burnup range 3000 to 9000 MWd/Te. The neutron emissions from a further two samples were also measured and compared with RICE predictions. The results of the comparison were such as to justify the use of the code RICE for providing source terms for environmental impact studies, for the isotopes considered in the present work. (author)

  2. The 1.4-2.7 micron spectrum of the point source at the galactic center

    Science.gov (United States)

    Treffers, R. R.; Fink, U.; Larson, H. P.; Gautier, T. N., III

    1976-01-01

    The spectrum of the 2-micron point source at the galactic center is presented over the range from 1.4 to 2.7 microns. The two-level-transition CO band heads are seen near 2.3 microns, confirming that the radiation from this source is due to a cool supergiant star. The heliocentric radial velocity is found to be - 173 (+ or -90) km/s and is consistent with the star being in orbit about a dense galactic nucleus. No evidence is found for Brackett-gamma emission, and no interstellar absorption features are seen. Upper limits for the column densities of interstellar H2, CH4, CO, and NH3 are derived.

  3. Development of uniform hazard response spectra for rock sites considering line and point sources of earthquakes

    International Nuclear Information System (INIS)

    Ghosh, A.K.; Kushwaha, H.S.

    2001-12-01

    Traditionally, the seismic design basis ground motion has been specified by normalised response spectral shapes and peak ground acceleration (PGA). The mean recurrence interval (MRI) used to be computed for the PGA only. It is shown that the MRI associated with such response spectra is not the same at all frequencies. The present work develops uniform hazard response spectra, i.e. spectra having the same MRI at all frequencies, for line and point sources of earthquakes by using a large number of strong-motion accelerograms recorded on rock sites. The sensitivity of the results to changes in various parameters is also presented. This work is an extension of an earlier work on areal sources of earthquakes. These results will help to determine the seismic hazard at a given site and the associated uncertainties. (author)

  4. Correction of head movements in positron emission tomography using point source tracking system: a simulation study.

    Science.gov (United States)

    Nazarparvar, Babak; Shamsaei, Mojtaba; Rajabi, Hossein

    2012-01-01

    The motion of the head during brain positron emission tomography (PET) acquisitions has been identified as a source of artifact in the reconstructed image. In this study, a method is described to develop an image-based motion correction technique for correcting the post-acquisition data without using external optical motion-tracking system such as POLARIS. In this technique, GATE has been used to simulate PET brain scan using point sources mounted around the head to accurately monitor the position of the head during the time frames. The measurement of head motion in each frame showed a transformation in the image frame matrix, resulting in a fully corrected data set. Using different kinds of phantoms and motions, the accuracy of the correction method is tested and its applicability to experimental studies is demonstrated as well.
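
    In such a marker-based scheme, the per-frame head motion reduces to the rigid transformation that maps the reference positions of the point sources to their positions in the current time frame. One standard way to estimate it, given here only as a generic sketch (not the authors' GATE-based pipeline), is the SVD-based least-squares fit below.

        import numpy as np

        def rigid_transform(ref_points, frame_points):
            """Least-squares rotation R and translation t such that
            frame_points ~ ref_points @ R.T + t, for matched Nx3 marker coordinates."""
            P = np.asarray(ref_points, dtype=float)
            Q = np.asarray(frame_points, dtype=float)
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)                  # cross-covariance of the centred sets
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
            R = Vt.T @ D @ U.T
            t = cq - R @ cp
            # Applying (R.T, -R.T @ t) to the frame re-aligns it with the reference position.
            return R, t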

  5. The acoustic field of a point source in a uniform boundary layer over an impedance plane

    Science.gov (United States)

    Zorumski, W. E.; Willshire, W. L., Jr.

    1986-01-01

    The acoustic field of a point source in a boundary layer above an impedance plane is investigated anatytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studies by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attention proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.

  6. Dynamic Control of Particle Deposition in Evaporating Droplets by an External Point Source of Vapor.

    Science.gov (United States)

    Malinowski, Robert; Volpe, Giovanni; Parkin, Ivan P; Volpe, Giorgio

    2018-02-01

    The deposition of particles on a surface by an evaporating sessile droplet is important for phenomena as diverse as printing, thin-film deposition, and self-assembly. The shape of the final deposit depends on the flows within the droplet during evaporation. These flows are typically determined at the onset of the process by the intrinsic physical, chemical, and geometrical properties of the droplet and its environment. Here, we demonstrate deterministic emergence and real-time control of Marangoni flows within the evaporating droplet by an external point source of vapor. By varying the source location, we can modulate these flows in space and time to pattern colloids on surfaces in a controllable manner.

  7. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    Science.gov (United States)

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The inclusion of a magnetometer system is expected to be more sensitive to brain stem sources compared with a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted in a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  8. Decreasing Computational Time for VBBinaryLensing by Point Source Approximation

    Science.gov (United States)

    Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio

    2018-01-01

    The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single object lens. This map cannot be calculated analytically, and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other is based on an application of Green's theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters at which the source object can be treated as a point source, in other words when the source is far enough from the caustic, were of interest in order to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries for which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10{sup −1} to 10{sup −3}, to test the theoretical model and determine a safe buffer for which the approximation introduces minimal error. The determined buffer was 1.5 + 5q, with q being the mass ratio. The theoretical model and the calculated points worked for all combinations of the separation values and different accuracies except the map with accuracy and separation equal to 10{sup −3} for y1 max. An alternative approach has to be found in order to accommodate a wider range of parameters.
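
    The reported criterion can be stated compactly: when the source centre is farther from the caustic than a buffer of 1.5 + 5q (with q the mass ratio), the finite-source calculation can be skipped and the point-source magnification used. A schematic check is sketched below; whether the buffer is scaled by the source radius or used directly in Einstein-radius units is not specified in the abstract, so the scaling, the caustic sampling and the magnification routines named in the comments are all assumptions, not the VBBinaryLensing API.

        import numpy as np

        def use_point_source(source_pos, caustic_points, q, rho):
            """Decide whether a source at complex position `source_pos` (Einstein units)
            is far enough from the caustic (an array of complex points) that the
            point-source approximation is safe. `rho` is the angular source radius."""
            buffer = 1.5 + 5.0 * q                             # empirically determined safe buffer
            dist = np.min(np.abs(np.asarray(caustic_points) - source_pos))
            return dist > buffer * max(rho, 1e-6)              # assumed scaling by the source size

        # Hypothetical usage: fall back to full contour integration only when needed.
        # mag = point_source_mag(zs) if use_point_source(zs, caustic, q=1e-3, rho=1e-3) \
        #       else finite_source_mag(zs, rho)                # both routines are placeholders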

  9. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder based additive manufacturing technique suitable for producing high precision metal parts. However, distortions and residual stresses within products arise during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the built process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of SLM process parameters which requires a reliable thermal modelling of the SLM process. Consequently, a key question arises which is how to describe the laser source appropriately. Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, first a semi-analytical thermal modelling approach is described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history prediction of the thermal model. The present work provides guidelines on appropriate representation of the laser source in the thermal modelling of the SLM process.
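
    As a reference point for the simplest of the three source descriptions, the classical quasi-steady temperature field of a point heat source moving over a semi-infinite conducting body (the Rosenthal solution, which semi-analytical SLM models build upon) is sketched below; the material and process values are placeholders, not calibrated SLM parameters, and the solution diverges at the source itself.

        import numpy as np

        def moving_point_source_temperature(xi, y, z, power=200.0, speed=0.8,
                                            conductivity=20.0, diffusivity=5e-6, t0=293.0):
            """Quasi-steady temperature (K) around a point source of absorbed `power` (W)
            moving at `speed` (m/s) along x over a semi-infinite body.
            `xi` = x - v*t is the coordinate in the moving frame (m); conductivity in
            W/(m K), thermal diffusivity in m^2/s."""
            r = np.sqrt(xi**2 + y**2 + z**2) + 1e-12           # distance from the source (avoid r = 0)
            return t0 + power / (2.0 * np.pi * conductivity * r) * np.exp(
                -speed * (r + xi) / (2.0 * diffusivity))

        # Example: temperature 50 microns behind the source, on the surface.
        print(moving_point_source_temperature(xi=-50e-6, y=0.0, z=0.0))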

  10. Detection of kinetic change points in piece-wise linear single molecule motion

    Science.gov (United States)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need of an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
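
    The core test can be illustrated compactly: for each candidate change point k, compare the residual sum of squares of a single straight-line fit with that of two independent line fits on either side of k, and convert the improvement into a log-likelihood ratio using the known Gaussian noise level. The sketch below is a simplified single-change version of that idea, not the published ImageJ/Julia implementation; the minimum segment length and confidence threshold are assumptions.

        import numpy as np

        def rss_line(t, x):
            """Residual sum of squares of an ordinary least-squares line fit x ~ a*t + b."""
            A = np.column_stack([t, np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            return np.sum((x - A @ coef) ** 2)

        def best_change_point(t, x, sigma, min_seg=5, threshold=10.0):
            """Most likely kinetic change point in a piece-wise linear trajectory with
            Gaussian noise of standard deviation `sigma`. Returns (index, log-likelihood
            ratio), or (None, best_llr) if no candidate exceeds `threshold`."""
            t, x = np.asarray(t, dtype=float), np.asarray(x, dtype=float)
            rss_full = rss_line(t, x)
            best_k, best_llr = None, -np.inf
            for k in range(min_seg, len(t) - min_seg):
                rss_split = rss_line(t[:k], x[:k]) + rss_line(t[k:], x[k:])
                llr = (rss_full - rss_split) / (2.0 * sigma**2)   # Gaussian log-likelihood ratio
                if llr > best_llr:
                    best_k, best_llr = k, llr
            return (best_k, best_llr) if best_llr > threshold else (None, best_llr)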

  11. Stochastic Industrial Source Detection Using Lower Cost Methods

    Science.gov (United States)

    Thoma, E.; George, I. J.; Brantley, H.; Deshmukh, P.; Cansler, J.; Tang, W.

    2017-12-01

    Hazardous air pollutants (HAPs) can be emitted from a variety of sources in industrial facilities, energy production, and commercial operations. Stochastic industrial sources (SISs) represent a subcategory of emissions from fugitive leaks, variable area sources, malfunctioning processes, and improperly controlled operations. From the shared perspective of industries and communities, cost-effective detection of mitigable SIS emissions can yield benefits such as safer working environments, cost saving through reduced product loss, lower air shed pollutant impacts, and improved transparency and community relations. Methods for SIS detection can be categorized by their spatial regime of operation, ranging from component-level inspection to high-sensitivity kilometer scale surveys. Methods can be temporally intensive (providing snap-shot measures) or sustained in both time-integrated and continuous forms. Each method category has demonstrated utility, however, broad adoption (or routine use) has thus far been limited by cost and implementation viability. Described here are a subset of SIS methods explored by the U.S EPA's next generation emission measurement (NGEM) program that focus on lower cost methods and models. An emerging systems approach that combines multiple forms to help compensate for reduced performance factors of lower cost systems is discussed. A case study of a multi-day HAP emission event observed by a combination of low cost sensors, open-path spectroscopy, and passive samplers is detailed. Early field results of a novel field gas chromatograph coupled with a fast HAP concentration sensor is described. Progress toward near real-time inverse source triangulation assisted by pre-modeled facility profiles using the Los Alamos Quick Urban & Industrial Complex (QUIC) model is discussed.

  12. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on change-point detection is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; a travel time forecasting model is then established based on an autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
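
    A minimal sketch of the forecasting stage is given below: the most recent section of similar state (everything after the last detected change point) is fitted with an ARIMA model and the next travel times are predicted. It uses the statsmodels ARIMA implementation; the change-point indices are assumed to come from the detector described above, and the ARIMA order is a placeholder.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def forecast_travel_time(travel_times, change_points, steps=1, order=(1, 1, 1)):
            """Fit an ARIMA model to the most recent homogeneous section of the travel
            time series and forecast the next `steps` values. `change_points` is a
            list of indices produced by the change-point detector."""
            series = np.asarray(travel_times, dtype=float)
            start = int(change_points[-1]) if len(change_points) > 0 else 0
            segment = series[start:]                       # last section of similar state
            fitted = ARIMA(segment, order=order).fit()
            return fitted.forecast(steps=steps)            # predicted travel times (seconds)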

  13. SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2013-05-01

    Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the ALS-derived canopy height model (CHM) as a realization of a point process of circles. Unlike the traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term that judges the fitness of the model with respect to the data and a prior term that incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and the experiments show the effectiveness of the proposed method.

  14. Advances in Candida detection platforms for clinical and point-of-care applications

    Science.gov (United States)

    Safavieh, Mohammadali; Coarsey, Chad; Esiobu, Nwadiuto; Memic, Adnan; Vyas, Jatin Mahesh; Shafiee, Hadi; Asghar, Waseem

    2016-01-01

    Invasive candidiasis remains one of the most serious community and healthcare-acquired infections worldwide. Conventional Candida detection methods based on blood and plate culture are time-consuming and require at least 2–4 days to identify various Candida species. Despite considerable advances for candidiasis detection, the development of simple, compact and portable point-of-care diagnostics for rapid and precise testing that automatically performs cell lysis, nucleic acid extraction, purification and detection still remains a challenge. Here, we systematically review most prominent conventional and nonconventional techniques for the detection of various Candida species, including Candida staining, blood culture, serological testing and nucleic acid-based analysis. We also discuss the most advanced lab on a chip devices for candida detection. PMID:27093473

  15. Prospects for detecting supersymmetric dark matter at Post-LEP benchmark points

    International Nuclear Information System (INIS)

    Ellis, J.; Matchev, K.T.; Feng, J.L.; Ferstl, A.; Olive, K.A.

    2002-01-01

    A new set of supersymmetric benchmark scenarios has recently been proposed in the context of the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking masses, taking into account the constraints from LEP, b→sγ and g μ -2. These points have previously been used to discuss the physics reaches of different accelerators. In this paper, we discuss the prospects for discovering supersymmetric dark matter in these scenarios. We consider direct detection through spin-independent and spin-dependent nuclear scattering, as well as indirect detection through relic annihilations to neutrinos, photons, and positrons. We find that several of the benchmark scenarios offer good prospects for direct detection via spin-independent nuclear scattering and indirect detection via muons produced by neutrinos from relic annihilations inside the Sun, and some models offer good prospects for detecting photons from relic annihilations in the galactic centre. (orig.)

  16. Sensitive detection of point mutation by electrochemiluminescence and DNA ligase-based assay

    Science.gov (United States)

    Zhou, Huijuan; Wu, Baoyan

    2008-12-01

    The technology of single-base mutation detection plays an increasingly important role in diagnosis and prognosis of genetic-based diseases. Here we reported a new method for the analysis of point mutations in genomic DNA through the integration of allele-specific oligonucleotide ligation assay (OLA) with magnetic beads-based electrochemiluminescence (ECL) detection scheme. In this assay the tris(bipyridine) ruthenium (TBR) labeled probe and the biotinylated probe are designed to perfectly complementary to the mutant target, thus a ligation can be generated between those two probes by Taq DNA Ligase in the presence of mutant target. If there is an allele mismatch, the ligation does not take place. The ligation products are then captured onto streptavidin-coated paramagnetic beads, and detected by measuring the ECL signal of the TBR label. Results showed that the new method held a low detection limit down to 10 fmol and was successfully applied in the identification of point mutations from ASTC-α-1, PANC-1 and normal cell lines in codon 273 of TP53 oncogene. In summary, this method provides a sensitive, cost-effective and easy operation approach for point mutation detection.

  17. Fissile material detection and control facility with pulsed neutron sources and digital data processing

    International Nuclear Information System (INIS)

    Romodanov, V.L.; Chernikova, D.N.; Afanasiev, V.V.

    2010-01-01

    Full text: In connection with possible nuclear terrorism, there is long-felt need of devices for effective control of radioactive and fissile materials in the key points of crossing the state borders (airports, seaports, etc.), as well as various customs check-points. In International Science and Technology Center Projects No. 596 and No. 2978, a new physical method and digital technology have been developed for the detection of fissile and radioactive materials in models of customs facilities with a graphite moderator, pulsed neutron source and digital processing of responses from scintillation PSD detectors. Detectability of fissile materials, even those shielded with various radiation-absorbing screens, has been shown. The use of digital processing of scintillation signals in this facility is a necessary element, as neutrons and photons are discriminated in the time dependence of fissile materials responses at such loads on the electronic channels that standard types of spectrometers are inapplicable. Digital processing of neutron and photon responses practically resolves the problem of dead time and allows implementing devices, in which various energy groups of neutrons exist for some time after a pulse of source neutrons. Thus, it is possible to detect fissile materials deliberately concealed with shields having a large cross-section of absorption of photons and thermal neutrons. Two models of detection and the control of fissile materials were advanced: 1. the model based on graphite neutrons moderator and PSD scintillators with digital technology of neutrons and photons responses separation; 2. the model based on plastic scintillators and detecting of time coincidences of fission particles by digital technology. Facilities that count time coincidences of neutrons and photons occurring in the fission of fissile materials can use an Am Li source of neutrons, e.g. that is the case with the AWCC system. The disadvantages of the facility are related to the issues

  18. STRUCTURE LINE DETECTION FROM LIDAR POINT CLOUDS USING TOPOLOGICAL ELEVATION ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Y. Lo

    2012-07-01

    Full Text Available Airborne LIDAR point clouds, which provide a considerable number of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed different approaches to identify structure lines using two main strategies, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as the thresholds used, initial values, designed formulas, and predefined cues. Following the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines have certain geometric properties, their locations show small relief in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of the region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.
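
    The pre-processing step described here (keeping, for each grid cell, the highest LIDAR return together with its original three-dimensional coordinates) can be sketched directly; the cell size is an assumption, and this covers only the pseudo-grid construction, not the topological elevation analysis itself.

        import numpy as np

        def pseudo_grid_dsm(points, cell=0.5):
            """Build a pseudo-grid digital surface model from an (N, 3) array of LIDAR
            points. Each cell stores the elevation of its highest return, and the index
            of that return is kept so its original 3D position is preserved."""
            pts = np.asarray(points, dtype=float)
            ij = np.floor(pts[:, :2] / cell).astype(int)      # (column, row) index per point
            ij -= ij.min(axis=0)                              # shift indices to start at 0
            ncols, nrows = ij.max(axis=0) + 1
            dsm = np.full((nrows, ncols), np.nan)             # elevation grid
            keeper = np.full((nrows, ncols), -1, dtype=int)   # index of the retained point
            for idx, ((c, r), z) in enumerate(zip(ij, pts[:, 2])):
                if np.isnan(dsm[r, c]) or z > dsm[r, c]:
                    dsm[r, c] = z                             # keep the highest return in the cell
                    keeper[r, c] = idx
            return dsm, keeper        # keeper[r, c] indexes back into `points`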

  19. Replacement Condition Detection of Railway Point Machines Using an Electric Current Sensor.

    Science.gov (United States)

    Sa, Jaewon; Choi, Younchang; Chung, Yongwha; Kim, Hee-Young; Park, Daihee; Yoon, Sukhan

    2017-01-29

    Detecting replacement conditions of railway point machines is important to simultaneously satisfy the budget-limit and train-safety requirements. In this study, we consider classification of the subtle differences in the aging effect-using electric current shape analysis-for the purpose of replacement condition detection of railway point machines. After analyzing the shapes of after-replacement data and then labeling the shapes of each before-replacement data, we can derive the criteria that can handle the subtle differences between "does-not-need-to-be-replaced" and "needs-to-be-replaced" shapes. On the basis of the experimental results with in-field replacement data, we confirmed that the proposed method could detect the replacement conditions with acceptable accuracy, as well as provide visual interpretability of the criteria used for the time-series classification.

  20. Replacement Condition Detection of Railway Point Machines Using an Electric Current Sensor

    Science.gov (United States)

    Sa, Jaewon; Choi, Younchang; Chung, Yongwha; Kim, Hee-Young; Park, Daihee; Yoon, Sukhan

    2017-01-01

    Detecting replacement conditions of railway point machines is important to simultaneously satisfy the budget-limit and train-safety requirements. In this study, we consider classification of the subtle differences in the aging effect—using electric current shape analysis—for the purpose of replacement condition detection of railway point machines. After analyzing the shapes of after-replacement data and then labeling the shapes of each before-replacement data, we can derive the criteria that can handle the subtle differences between “does-not-need-to-be-replaced” and “needs-to-be-replaced” shapes. On the basis of the experimental results with in-field replacement data, we confirmed that the proposed method could detect the replacement conditions with acceptable accuracy, as well as provide visual interpretability of the criteria used for the time-series classification. PMID:28146057

  1. Seasonal and spatial variation of diffuse (non-point) source zinc pollution in a historically metal mined river catchment, UK

    Energy Technology Data Exchange (ETDEWEB)

    Gozzard, E., E-mail: emgo@ceh.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Mayes, W.M., E-mail: W.Mayes@hull.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Potter, H.A.B., E-mail: hugh.potter@environment-agency.gov.uk [Environment Agency England and Wales, c/o Institute for Research on Environment and Sustainability, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Jarvis, A.P., E-mail: a.p.jarvis@ncl.ac.uk [Hydrogeochemical Engineering Research and Outreach Group, School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)

    2011-10-15

    Quantifying diffuse sources of pollution is becoming increasingly important when characterising river catchments in entirety - a prerequisite for environmental management. This study examines both low and high flow events, as well as spatial variability, in order to assess point and diffuse components of zinc pollution within the River West Allen catchment, which lies within the northern England lead-zinc Orefield. Zinc levels in the river are elevated under all flow regimes, and are of environmental concern. Diffuse components are of little importance at low flow, with point source mine water discharges dominating instream zinc concentration and load. During higher river flows 90% of the instream zinc load is attributed to diffuse sources, where inputs from resuspension of metal-rich sediments, and groundwater influx are likely to be more dominant. Remediating point mine water discharges should significantly improve water quality at lower flows, but contribution from diffuse sources will continue to elevate zinc flux at higher flows. - Highlights: > Zinc concentrations breach EU quality thresholds under all river flow conditions. > Contributions from point sources dominate instream zinc dynamics in low flow. > Contributions from diffuse sources dominate instream zinc dynamics in high flow. > Important diffuse sources include river-bed sediment resuspension and groundwater influx. > Diffuse sources would still create significant instream pollution, even with point source treatment. - Diffuse zinc sources are an important source of instream contamination to mine-impacted rivers under varying flow conditions.

  2. [Urban non-point source pollution control by runoff retention and filtration pilot system].

    Science.gov (United States)

    Bai, Yao; Zuo, Jian-E; Gan, Li-Li; Low, Thong Soon; Miao, Heng-Feng; Ruan, Wen-Quan; Huang, Xia

    2011-09-01

    A runoff retention and filtration pilot system was designed, and the long-term purification effect of the runoff was monitored. Runoff pollution characteristics in two typical events and the treatment effect of the pilot system were analyzed. The results showed that the runoff was severely polluted. Event mean concentrations (EMCs) of SS, COD, TN and TP in the runoff were 361, 135, 7.88 and 0.62 mg/L, respectively. The runoff formed by prolonged rain presented an obvious first-flush effect: the first 25% of the flow contributed more than 50% of the total pollutant loading of SS, TP, DTP and PO4(3-). The pilot system could remove 100% of the non-point source pollution if the runoff volume was less than that of the retention tank. Otherwise, the overflow was purified by the filtration system, and the removal rates of SS, COD, TN, TP, DTP and PO4(3-) reached 97.4%, 61.8%, 22.6%, 85.1%, 72.1%, and 85.2%, respectively. The system was stable, and the removal rates of SS, COD, TN, and TP were 98.6%, 65.4%, 55.1% and 92.6%. The whole system could effectively remove the non-point source pollution caused by runoff.

  3. Current status of agricultural and rural non-point source Pollution assessment in China

    International Nuclear Information System (INIS)

    Ongley, Edwin D.; Zhang Xiaolan; Yu Tao

    2010-01-01

    Estimates of non-point source (NPS) contribution to total water pollution in China range up to 81% for nitrogen and to 93% for phosphorus. We believe these values are too high, reflecting (a) misuse of estimation techniques that were developed in America under very different conditions and (b) lack of specificity on what is included as NPS. We compare primary methods used for NPS estimation in China with their use in America. Two observations are especially notable: empirical research is limited and does not provide an adequate basis for calibrating models nor for deriving export coefficients; the Chinese agricultural situation is so different than that of the United States that empirical data produced in America, as a basis for applying estimation techniques to rural NPS in China, often do not apply. We propose a set of national research and policy initiatives for future NPS research in China. - Estimation techniques used in China for non-point source pollution are evaluated as a basis for recommending future policies and research in NPS studies in China.

  4. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of the intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches, and how they affect emission estimation uncertainty, were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
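
    The mass-balance estimate used above can be written compactly: the emission rate is the integral, over a crosswind-vertical screen flown downwind of the source, of the CO2 enhancement over background multiplied by the wind component normal to the screen. A schematic discretised version is sketched below; it omits the density conversion, background selection and boundary-layer extrapolation that a real analysis needs, and all variable names are assumptions.

        import numpy as np

        def mass_balance_emission(conc, background, wind_perp, dy, dz):
            """Estimate a point-source emission rate (kg/s) from a downwind screen of
            measurements gridded in the crosswind (y) and vertical (z) directions.
            `conc` and `background` are CO2 densities in kg/m^3, `wind_perp` is the
            wind component normal to the screen in m/s, `dy`/`dz` are cell sizes in m."""
            enhancement = np.maximum(np.asarray(conc) - np.asarray(background), 0.0)
            flux_density = enhancement * np.asarray(wind_perp)   # kg m^-2 s^-1 through each cell
            return float(np.sum(flux_density) * dy * dz)         # integrate over the whole screen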

  5. From a water resource to a point pollution source: the daily journey of a coastal urban stream

    Directory of Open Access Journals (Sweden)

    LR. Rörig

    Full Text Available The aim of this study was to understand how a stream ecosystem that flows from its fountainhead to its mouth inside a city, changes from a water resource to a point pollution source. A multidisciplinary descriptive approach was adopted, including the short-term temporal and spatial determination of physical, chemical, biological and ecotoxicological variables. Results showed that water quality rapidly decreases with increasing urbanization, leading the system to acquire raw sewage attributes even in the first hundred meters after the fountainheads. Despite the tidal circulation near the stream mouth being restricted by shallowness, some improvement of the water quality was detected in this area. The multidisciplinary evaluation showed to be useful for obtaining a more realistic understanding of the stream degradation process, and to forecast restoration and mitigation measures.

  6. Detection, Source Location, and Analysis of Volcano Infrasound

    Science.gov (United States)

    McKee, Kathleen F.

    The study of volcano infrasound focuses on low frequency sound from volcanoes, how volcanic processes produce it, and the path it travels from the source to our receivers. In this dissertation we focus on detecting, locating, and analyzing infrasound from a number of different volcanoes using a variety of analysis techniques. These works will help inform future volcano monitoring using infrasound with respect to infrasonic source location, signal characterization, volatile flux estimation, and back-azimuth to source determination. Source location is an important component of the study of volcano infrasound and in its application to volcano monitoring. Semblance is a forward grid search technique and common source location method in infrasound studies as well as seismology. We evaluated the effectiveness of semblance in the presence of significant topographic features for explosions of Sakurajima Volcano, Japan, while taking into account temperature and wind variations. We show that topographic obstacles at Sakurajima cause a semblance source location offset of 360-420 m to the northeast of the actual source location. In addition, we found despite the consistent offset in source location semblance can still be a useful tool for determining periods of volcanic activity. Infrasonic signal characterization follows signal detection and source location in volcano monitoring in that it informs us of the type of volcanic activity detected. In large volcanic eruptions the lowermost portion of the eruption column is momentum-driven and termed the volcanic jet or gas-thrust zone. This turbulent fluid-flow perturbs the atmosphere and produces a sound similar to that of jet and rocket engines, known as jet noise. We deployed an array of infrasound sensors near an accessible, less hazardous, fumarolic jet at Aso Volcano, Japan as an analogue to large, violent volcanic eruption jets. We recorded volcanic jet noise at 57.6° from vertical, a recording angle not normally feasible

  7. Using rare earth elements to trace wind-driven dispersion of sediments from a point source

    Science.gov (United States)

    Van Pelt, R. Scott; Barnes, Melanie C. W.; Strack, John E.

    2018-06-01

    The entrainment and movement of aeolian sediments is determined by the direction and intensity of erosive winds. Although erosive winds may blow from all directions, in most regions there is a predominant direction. Dust emission causes the removal preferentially of soil nutrients and contaminants which may be transported tens to even thousands of kilometers from the source and deposited into other ecosystems. It would be beneficial to understand spatially and temporally how the soil source may be degraded and depositional zones enriched. A stable chemical tracer not found in the soil but applied to the surface of all particles in the surface soil would facilitate this endeavor. This study examined whether solution-applied rare earth elements (REEs) could be used to trace aeolian sediment movement from a point source through space and time at the field scale. We applied erbium nitrate solution to a 5 m2 area in the center of a 100 m diameter field 7854 m2 on the Southern High Plains of Texas. The solution application resulted in a soil-borne concentration three orders of magnitude greater than natively found in the field soil. We installed BSNE sampler masts in circular configurations and collected the trapped sediment weekly. We found that REE-tagged sediment was blown into every sampler mast during the course of the study but that there was a predominant direction of transport during the spring. This preliminary investigation suggests that the REEs provide a viable and incisive technique to study spatial and temporal variation of aeolian sediment movement from specific sources to identifiable locations of deposition or locations through which the sediments were transported as horizontal mass flux and the relative contribution of the specific source to the total mass flux.

  8. Detecting analogical resemblance without retrieving the source analogy.

    Science.gov (United States)

    Kostic, Bogdan; Cleary, Anne M; Severin, Kaye; Miller, Samuel W

    2010-06-01

    We examined whether people can detect analogical resemblance to an earlier experimental episode without being able to recall the experimental source of the analogical resemblance. We used four-word analogies (e.g., robin-nest/beaver-dam), in a variation of the recognition-without-cued-recall method (Cleary, 2004). Participants studied word pairs (e.g., robin-nest) and were shown new word pairs at test, half of which analogically related to studied word pairs (e.g., beaver-dam) and half of which did not. For each test pair, participants first attempted to recall an analogically similar pair from the study list. Then, regardless of whether successful recall occurred, participants were prompted to rate the familiarity of the test pair, which was said to indicate the likelihood that a pair that was analogically similar to the test pair had been studied. Across three experiments, participants demonstrated an ability to detect analogical resemblance without recalling the source analogy. Findings are discussed in terms of their potential relevance to the study of analogical reasoning and insight, as well as to the study of familiarity and recognition memory.

  9. Super-resolution for a point source better than λ/500 using positive refraction

    International Nuclear Information System (INIS)

    Miñano, Juan C; González, Juan C; Benítez, Pablo; Grabovickic, Dejan; Marqués, Ricardo; Delgado, Vicente; Freire, Manuel

    2011-01-01

    Leonhardt (2009 New J. Phys. 11 093040) demonstrated that the two-dimensional (2D) Maxwell fish eye (MFE) lens can focus perfectly 2D Helmholtz waves of arbitrary frequency; that is, it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a ‘perfect point drain’ located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution property for one microwave frequency has been manufactured and tested (Ma et al 2010 arXiv:1007.2530v1; Ma et al 2010 New J. Phys. 13 033016). However, neither software simulations nor experimental measurements for a broad band of frequencies have yet been reported. Here, we present steady-state simulations with a non-perfect drain for a device equivalent to the MFE, called the spherical geodesic waveguide (SGW), which predicts up to λ/500 super-resolution close to discrete frequencies. Out of these frequencies, the SGW does not show super-resolution in the analysis carried out. (paper)

  10. Super-resolution for a point source better than λ/500 using positive refraction

    Science.gov (United States)

    Miñano, Juan C.; Marqués, Ricardo; González, Juan C.; Benítez, Pablo; Delgado, Vicente; Grabovickic, Dejan; Freire, Manuel

    2011-12-01

    Leonhardt (2009 New J. Phys. 11 093040) demonstrated that the two-dimensional (2D) Maxwell fish eye (MFE) lens can focus perfectly 2D Helmholtz waves of arbitrary frequency; that is, it can transport perfectly an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a ‘perfect point drain’ located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution property for one microwave frequency has been manufactured and tested (Ma et al 2010 arXiv:1007.2530v1; Ma et al 2010 New J. Phys. 13 033016). However, neither software simulations nor experimental measurements for a broad band of frequencies have yet been reported. Here, we present steady-state simulations with a non-perfect drain for a device equivalent to the MFE, called the spherical geodesic waveguide (SGW), which predicts up to λ/500 super-resolution close to discrete frequencies. Out of these frequencies, the SGW does not show super-resolution in the analysis carried out.

  11. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Science.gov (United States)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers. Mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient means of frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency

  12. A miniaturised image based fluorescence detection system for point-of-care-testing of cocaine abuse

    Science.gov (United States)

    Walczak, Rafał; Krüger, Jan; Moynihan, Shane

    2015-08-01

    In this paper, we describe a miniaturised image-based fluorescence detection system and demonstrate its viability as a highly sensitive tool for point-of-care analysis of drugs of abuse in human sweat, with a focus on monitoring individuals for drug abuse. Investigations of miniaturised and low-power optoelectronic configurations and methodologies for real-time image analysis were successfully carried out. The miniaturised fluorescence detection system was validated against a reference detection system under controlled laboratory conditions by analysing spiked sweat samples, first on a dip stick and then on a strip with a sample pad. As a result of the validation studies, a 1 ng mL-1 limit of detection of cocaine in sweat and full agreement of test results with the reference detection system can be reported. The results of these investigations open the way towards a detection system that integrates a hand-held fluorescence reader and a wearable skin patch, and which can collect and analyse sweat in situ for the presence of cocaine at any point for up to tenths of hours.

  13. Advanced DNA-Based Point-of-Care Diagnostic Methods for Plant Diseases Detection

    Directory of Open Access Journals (Sweden)

    Han Yih Lau

    2017-12-01

    Full Text Available Diagnostic technologies for the detection of plant pathogens with point-of-care capability and high multiplexing ability are an essential tool in the fight to reduce the large agricultural production losses caused by plant diseases. The main desirable characteristics for such diagnostic assays are high specificity, sensitivity, reproducibility, speed, cost efficiency and high-throughput multiplex detection capability. This article describes and discusses various DNA-based point-of-care diagnostic methods for applications in plant disease detection. Polymerase chain reaction (PCR) is the most common DNA amplification technology used for detecting various plant and animal pathogens. However, following PCR-based assays, several types of nucleic acid amplification technologies have been developed to achieve higher sensitivity, more rapid detection and suitability for field applications, such as loop-mediated isothermal amplification, helicase-dependent amplification, rolling circle amplification, recombinase polymerase amplification, and molecular inversion probes. The principles behind these technologies have been thoroughly discussed in several review papers; herein we emphasize the application of these technologies to detect plant pathogens by outlining the advantages and disadvantages of each technology in detail.

  14. Trend analysis and change point detection of annual and seasonal temperature series in Peninsular Malaysia

    Science.gov (United States)

    Suhaila, Jamaludin; Yusop, Zulkifli

    2017-06-01

    Most of the trend analyses that have been conducted have not considered the existence of a change point in the time series. If a change point has occurred, a trend analysis will not be able to detect an obvious increasing or decreasing trend over certain parts of the time series. Furthermore, the lack of discussion on the possible factors that influenced either the decreasing or the increasing trend in the series needs to be addressed in any trend analysis. Hence, this study investigates the trends and change point detection of mean, maximum and minimum temperature series, both annually and seasonally, in Peninsular Malaysia and determines the possible factors that could contribute to the significant trends. In this study, the Pettitt and sequential Mann-Kendall (SQ-MK) tests were used to examine the occurrence of any abrupt climate changes in the independent series. The analyses of the abrupt changes in the temperature series suggested that most of the change points in Peninsular Malaysia were detected during the years 1996, 1997 and 1998. These change points captured by the Pettitt and SQ-MK tests are possibly related to climatic factors, such as El Niño and La Niña events. The findings also showed that the majority of the significant change points that exist in the series are related to the significant trends of the stations. Significant increasing trends of annual and seasonal mean, maximum and minimum temperatures in Peninsular Malaysia were found, with a range of 2-5 °C/100 years during the last 32 years. It was observed that the magnitudes of the increasing trends in minimum temperatures were larger than those in maximum temperatures for most of the studied stations, particularly at the urban stations. These increases are suspected to be linked with the urban heat island effect in addition to El Niño events.
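    As a concrete illustration of the change-point side of such an analysis, the sketch below implements the Pettitt test named in the record (the sequential Mann-Kendall test is not reproduced here). Variable names and the synthetic example series are hypothetical; the approximate p-value formula is the standard one for the Pettitt statistic.

        import numpy as np

        def pettitt_test(x):
            """Pettitt's nonparametric change-point test with approximate p-value."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            sgn = np.sign(x[None, :] - x[:, None])            # sgn[i, j] = sign(x_j - x_i)
            U = np.array([sgn[: t + 1, t + 1:].sum() for t in range(n - 1)])
            K = np.abs(U).max()
            tau = int(np.argmax(np.abs(U)))                   # most probable change-point index
            p = min(1.0, 2.0 * np.exp(-6.0 * K ** 2 / (n ** 3 + n ** 2)))
            return tau, K, p

        # Hypothetical example: a 0.8 degree step after year 20 of a 32-year series
        rng = np.random.default_rng(1)
        series = np.concatenate([rng.normal(26.0, 0.3, 20), rng.normal(26.8, 0.3, 12)])
        print(pettitt_test(series))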

  15. MILLIMETER TRANSIENT POINT SOURCES IN THE SPTpol 100 SQUARE DEGREE SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Whitehorn, N.; Haan, T. de; George, E. M. [Department of Physics, University of California, Berkeley, CA 94720 (United States); Natoli, T.; Carlstrom, J. E. [Department of Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Ade, P. A. R. [Cardiff University, Cardiff CF10 3XQ (United Kingdom); Austermann, J. E.; Beall, J. A. [NIST Quantum Devices Group, 325 Broadway Mailcode 817.03, Boulder, CO 80305 (United States); Bender, A. N.; Benson, B. A.; Bleem, L. E.; Chang, C. L.; Citron, R.; Crawford, T. M.; Crites, A. T.; Gallicchio, J. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Chiang, H. C. [School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban (South Africa); Cho, H-M. [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Dobbs, M. A. [Department of Physics, McGill University, 3600 Rue University, Montreal, Quebec H3A 2T8 (Canada); Everett, W., E-mail: nwhitehorn@berkeley.edu, E-mail: t.natoli@utoronto.ca [Department of Astrophysical and Planetary Sciences, University of Colorado, Boulder, CO 80309 (United States); and others

    2016-10-20

    The millimeter transient sky is largely unexplored, with measurements limited to follow-up of objects detected at other wavelengths. High-angular-resolution telescopes, designed for measurement of the cosmic microwave background (CMB), offer the possibility to discover new, unknown transient sources in this band, particularly the afterglows of unobserved gamma-ray bursts (GRBs). Here, we use the 10 m millimeter-wave South Pole Telescope, designed for the primary purpose of observing the CMB at arcminute and larger angular scales, to conduct a search for such objects. During the 2012–2013 season, the telescope was used to continuously observe a 100 deg² patch of sky centered at R.A. 23h30m and decl. −55° using the polarization-sensitive SPTpol camera in two bands centered at 95 and 150 GHz. These 6000 hr of observations provided continuous monitoring for day- to month-scale millimeter-wave transient sources at the 10 mJy level. One candidate object was observed with properties broadly consistent with a GRB afterglow, but at a statistical significance too low (p = 0.01) to confirm detection.

  16. Detecting cardiometabolic syndrome using World Health Organization public health action points for Asians and Pacific Islanders.

    Science.gov (United States)

    Grandinetti, Andrew; Kaholokula, Joseph K; Mau, Marjorie K; Chow, Dominic C

    2010-01-01

    To assess the screening characteristics of World Health Organization (WHO) body mass index action points for cardiometabolic syndrome (CMS) in Native Hawaiians and people of Asian ancestry (ie, Filipino and Japanese). Cross-sectional data were collected from 1,452 residents of a rural community of Hawai'i between 1997 and 2000, of which 1,198 were analyzed in this study. Ethnic ancestry was determined by self-report. Metabolic status was assessed using National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATPIII) criteria. Screening characteristics of the WHO criteria for overweight and obesity were compared to WHO public health action points or to WHO Western Pacific Regional Office (WPRO) cut-points. Among Asian-ancestry participants, WHO public health action points improved both sensitivity and specificity for detecting CMS. However, similar improvements were not observed for the WPRO criteria for Native Hawaiians. Moreover, predictive values were high regardless of which criteria were utilized, due to the high CMS prevalence. WHO public health action points for Asians provide a significant improvement in sensitivity in the detection of CMS. However, predictive value, which varies greatly with disease prevalence, should be considered when deciding which criteria to apply.
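    The screening characteristics quoted in this record follow directly from a 2x2 table of BMI cut-point results against NCEP-ATPIII status; a minimal helper (hypothetical names, no data from the study) is:

        def screening_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity and predictive values from a 2x2 screening table."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),   # predictive values rise and fall with prevalence
                "npv": tn / (tn + fn),
            }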

  17. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    Energy Technology Data Exchange (ETDEWEB)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk [Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between 'warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L_ν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  18. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh.

    Science.gov (United States)

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M

    2018-01-01

    Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of the virulence profile using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae by polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water [OR = 17.24 (95% CI = 7.14-42.89)] than in source water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were also significantly higher odds (p < 0.05) in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should therefore focus on and emphasize water at the point of drinking.

  19. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    Science.gov (United States)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, a harmonic distortion power rate index based on IEEE Std 1459-2010 is proposed for harmonic source location. A method based only on the harmonic distortion power is not suitable when the background harmonic level is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
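    A minimal sketch of the decision rule described above is given below. The exact index follows IEEE Std 1459-2010 in the paper; here a simplified ratio of harmonic to fundamental active power stands in for it, and the threshold is assumed to come from prior (background-harmonic) information.

        import numpy as np

        def harmonic_distortion_power_rate(V_h, I_h, phi_h):
            """Simplified stand-in index: harmonic active power (orders h >= 2)
            divided by fundamental active power, from per-order rms voltage,
            rms current and phase angle (radians); index 0 is the fundamental."""
            P = np.asarray(V_h) * np.asarray(I_h) * np.cos(np.asarray(phi_h))
            return np.sum(P[1:]) / P[0]

        def dominant_harmonic_source(rate, threshold):
            """Threshold rule at the point of common coupling."""
            return "customer side" if rate > threshold else "utility side"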

  20. Highly sensitive chemiluminescent point mutation detection by circular strand-displacement amplification reaction.

    Science.gov (United States)

    Shi, Chao; Ge, Yujie; Gu, Hongxi; Ma, Cuiping

    2011-08-15

    Single nucleotide polymorphism (SNP) genotyping is attracting extensive attention owing to its direct connection with human diseases, including cancers. Here, we have developed a highly sensitive chemiluminescence biosensor for point mutation detection at room temperature, based on circular strand-displacement amplification and separation by magnetic beads to reduce the background signal. This method takes advantage of both T4 DNA ligase, which recognizes single-base mismatches with high selectivity, and the strand-displacement reaction of the polymerase to perform signal amplification. The detection limit of this method was 1.3 × 10⁻¹⁶ M, a better sensitivity than that of most reported SNP detection methods. Additionally, the magnetic beads used as an immobilization carrier not only reduce the background signal but also may have potential application in high-throughput screening for SNPs in the human genome. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Comparison of Birds Detected from Roadside and Off-Road Point Counts in the Shenandoah National Park

    Science.gov (United States)

    Cherry M.E. Keller; Mark R. Fuller

    1995-01-01

    Roadside point counts are generally used for large surveys to increase the number of samples. We examined differences in species detected from roadside versus off-road (200-m and 400-m) point counts in the Shenandoah National Park. We also compared the list of species detected in the first 3 minutes to those detected in 10 minutes for potential species biases. Results...

  2. Detection of aeroacoustic sound sources on aircraft and wind turbines

    International Nuclear Information System (INIS)

    Oerlemans, S.

    2009-01-01

    This thesis deals with the detection of aeroacoustic sound sources on aircraft and wind turbines using phased microphone arrays. First, the reliability of the array technique is assessed using airframe noise measurements in open and closed wind tunnels. It is demonstrated that quantitative acoustic measurements are possible in both wind tunnels. Then, the array technique is applied to characterize the noise sources on two modern large wind turbines. It is shown that practically all noise emitted to the ground is produced by the outer part of the blades during their downward movement. This asymmetric source pattern, which causes the typical swishing noise during the passage of the blades, can be explained by trailing edge noise directivity and convective amplification. Next, a semi-empirical prediction method is developed for the noise from large wind turbines. The prediction code is successfully validated against the experimental results, not only with regard to sound levels, spectra, and directivity, but also with regard to the noise source distribution in the rotor plane and the temporal variation in sound level (swish). The validated prediction method is then applied to calculate wind turbine noise footprints, which show that large swish amplitudes can occur even at large distance. The influence of airfoil shape on blade noise is investigated through acoustic wind tunnel tests on a series of wind turbine airfoils. Measurements are carried out at various wind speeds and angles of attack, with and without upstream turbulence and boundary layer tripping. The speed dependence, directivity, and tonal behaviour are determined for both trailing edge noise and inflow turbulence noise. Finally, two noise reduction concepts are tested on a large wind turbine: acoustically optimized airfoils and trailing edge serrations. Both blade modifications yield a significant trailing edge noise reduction at low frequencies, but also cause increased tip noise at high frequencies

  3. Launching and controlling Gaussian beams from point sources via planar transformation media

    Science.gov (United States)

    Odabasi, Hayrettin; Sainath, Kamalesh; Teixeira, Fernando L.

    2018-02-01

    Based on operations prescribed under the paradigm of complex transformation optics (CTO) [F. Teixeira and W. Chew, J. Electromagn. Waves Appl. 13, 665 (1999), 10.1163/156939399X01104; F. L. Teixeira and W. C. Chew, Int. J. Numer. Model. 13, 441 (2000), 10.1002/1099-1204(200009/10)13:5<441::AID-JNM376>3.0.CO;2-J; H. Odabasi, F. L. Teixeira, and W. C. Chew, J. Opt. Soc. Am. B 28, 1317 (2011), 10.1364/JOSAB.28.001317; B.-I. Popa and S. A. Cummer, Phys. Rev. A 84, 063837 (2011), 10.1103/PhysRevA.84.063837], it was recently shown in [G. Castaldi, S. Savoia, V. Galdi, A. Alù, and N. Engheta, Phys. Rev. Lett. 110, 173901 (2013), 10.1103/PhysRevLett.110.173901] that a complex source point (CSP) can be mimicked by parity-time (PT) transformation media. Such a coordinate transformation has a mirror symmetry for the imaginary part, and results in a balanced loss/gain metamaterial slab. A CSP produces a Gaussian beam and, consequently, a point source placed at the center of such a metamaterial slab produces a Gaussian beam propagating away from the slab. Here, we extend the CTO analysis to nonsymmetric complex coordinate transformations as put forth in [S. Savoia, G. Castaldi, and V. Galdi, J. Opt. 18, 044027 (2016), 10.1088/2040-8978/18/4/044027] and verify that, by simply using a (homogeneous) doubly anisotropic gain-media metamaterial slab, one can still mimic a CSP and produce a Gaussian beam. In addition, we show that Gaussian-like beams can be produced by point sources placed outside the slab as well. By making use of the extra degrees of freedom (the real and imaginary parts of the coordinate transformation) provided by CTO, the near-zero requirement on the real part of the resulting constitutive parameters can be relaxed to facilitate potential realization of Gaussian-like beams. We illustrate how beam properties such as peak amplitude and waist location can be controlled by a proper choice of (complex-valued) CTO Jacobian elements. In particular, the beam waist

  4. Supervised Outlier Detection in Large-Scale MVS Point Clouds for 3D City Modeling Applications

    Science.gov (United States)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that the results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
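    A minimal sketch of the per-point supervised filtering described above, using a standard Random Forest, is shown below. The feature files, feature set and label convention are hypothetical; the semantically informed variant would simply train one such classifier per semantic class.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical per-point features (e.g. local density, roughness, height,
        # colour consistency across views) and labels (1 = outlier, 0 = inlier).
        X_train = np.load("mvs_point_features_train.npy")
        y_train = np.load("mvs_point_labels_train.npy")

        clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                     n_jobs=-1, random_state=0)
        clf.fit(X_train, y_train)

        X_all = np.load("mvs_point_features_full_cloud.npy")
        keep = clf.predict(X_all) == 0          # retain points predicted as inliers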

  5. Nanoparticle Detection of Urinary Markers for Point-of-Care Diagnosis of Kidney Injury.

    Directory of Open Access Journals (Sweden)

    Hyun Jung Chung

    Full Text Available The high incidence of acute and chronic kidney injury due to various environmental factors such as heavy metals or chemicals has been a major problem in developing countries. However, the diagnosis of kidney injury in these areas can be more challenging due to the lack of highly sensitive and specific techniques that can be applied in point-of-care settings. To address this, we have developed a technique called 'micro-urine nanoparticle detection (μUNPD', that allows the detection of trace amounts of molecular markers in urine. Specifically, this technique utilizes an automated on-chip assay followed by detection with a hand-held device for the read-out. Using the μUNPD technology, the kidney injury markers KIM-1 and Cystatin C were detected down to concentrations of 0.1 ng/ml and 20 ng/ml respectively, which meets the cut-off range required to identify patients with acute or chronic kidney injury. Thus, we show that the μUNPD technology enables point of care and non-invasive detection of kidney injury, and has potential for applications in diagnosing kidney injury with high sensitivity in resource-limited settings.

  6. Search for point-like sources using the diffuse astrophysical muon-neutrino flux in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Reimann, Rene; Haack, Christian; Raedel, Leif; Schoenen, Sebastian; Schumacher, Lisa; Wiebusch, Christopher [III. Physikalisches Institut B, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    IceCube, a cubic-kilometer-sized neutrino detector at the geographic South Pole, has recently confirmed a flux of high-energy astrophysical neutrinos in the track-like muon channel. Although this muon-neutrino flux has now been observed with high significance, no point sources or source classes have yet been identified with these well-pointing events. We present a search for point-like sources based on a six-year sample of upgoing muon neutrinos with very low background contamination. To improve the sensitivity, the standard likelihood approach has been modified to focus on the properties of the measured astrophysical muon-neutrino flux.
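    For orientation, the unbinned likelihood commonly used in such point-source searches maximizes the number of signal events n_s in a mixture of per-event signal and background PDFs; the test statistic is the log-likelihood ratio against n_s = 0. The sketch below is a generic illustration with assumed inputs, not the modified likelihood described in the record (which additionally folds in the measured astrophysical flux properties).

        import numpy as np
        from scipy.optimize import minimize_scalar

        def point_source_ts(S, B):
            """S, B: per-event signal and background PDF values at the tested direction."""
            N = len(S)

            def nll(ns):                               # negative log-likelihood
                return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

            res = minimize_scalar(nll, bounds=(0.0, N), method="bounded")
            return res.x, 2.0 * (nll(0.0) - res.fun)   # best-fit n_s and test statistic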

  7. Broadband integrated mid infrared light sources as enabling technology for point of care mid-infrared spectroscopy

    Science.gov (United States)

    2017-08-20

    AFRL-AFOSR-JP-TR-2017-0061: Broadband integrated mid-infrared light sources as enabling technology for point-of-care mid-infrared spectroscopy (grant FA2386-16-1-4037). Report dated 16 August 2017.

  8. Simulation of agricultural non-point source pollution in Xichuan by using SWAT model

    Science.gov (United States)

    Xing, Linan; Zuo, Jiane; Liu, Fenglin; Zhang, Xiaohui; Cao, Qiguang

    2018-02-01

    This paper evaluated the applicability of using SWAT to assess agricultural non-point source pollution in the Xichuan area. In order to build the model, a DEM, soil type and land use maps, and climate monitoring data were collected as the basic database. The SWAT model was calibrated and validated using streamflow, suspended solids, total phosphorus and total nitrogen records from 2009 to 2011. Errors, the coefficient of determination and the Nash-Sutcliffe coefficient were considered to evaluate the applicability. The coefficients of determination were 0.96, 0.66, 0.55 and 0.66 for streamflow, SS, TN, and TP, respectively, and the Nash-Sutcliffe coefficients were 0.93, 0.5, 0.52 and 0.63, respectively. All results meet the requirements, suggesting that the SWAT model can simulate the study area.
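    The goodness-of-fit measures quoted in this record are standard; a minimal sketch (hypothetical variable names) of the Nash-Sutcliffe efficiency and the coefficient of determination is:

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def coefficient_of_determination(obs, sim):
            """Squared Pearson correlation between observed and simulated series."""
            return np.corrcoef(obs, sim)[0, 1] ** 2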

  9. Exposure buildup factors for a cobalt-60 point isotropic source for single and two layer slabs

    International Nuclear Information System (INIS)

    Chakarova, R.

    1992-01-01

    Exposure buildup factors for a point isotropic cobalt-60 source are calculated by the Monte Carlo method, with statistical errors ranging from 1.5 to 7%, for water and iron single slabs 1-5 mean free paths (mfp) thick and for 1 and 2 mfp iron layers followed by water layers 1-5 mfp thick. The computations take into account Compton scattering. The Monte Carlo data for single-slab geometries are approximated by a Geometric Progression (GP) formula. Kalos's formula, using the calculated single-slab buildup factors, may be applied to reproduce the data for two-layered slabs. The presented results and discussion may help when choosing the manner in which the radiation field of gamma irradiation units is described. (author)
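    The record does not reproduce the fitting function; for reference, the commonly used Geometric Progression buildup-factor form (as standardized in ANSI/ANS-6.4.3, and presumably the one meant here) is

        B(E,x) = 1 + \frac{(b-1)\,(K^{x}-1)}{K-1} \quad (K \neq 1), \qquad
        B(E,x) = 1 + (b-1)\,x \quad (K = 1),

        K(E,x) = c\,x^{a} + d\,\frac{\tanh(x/X_{K}-2)-\tanh(-2)}{1-\tanh(-2)},

    where x is the penetration depth in mean free paths and a, b, c, d, X_K are energy-dependent fitting parameters.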

  10. Large-Eddy Simulation of Chemically Reactive Pollutant Transport from a Point Source in Urban Area

    Science.gov (United States)

    Du, Tangzheng; Liu, Chun-Ho

    2013-04-01

    Most air pollutants are chemically reactive, so using an inert scalar as the tracer in pollutant dispersion modelling would often overlook their impact on urban inhabitants. In this study, large-eddy simulation (LES) is used to examine the plume dispersion of chemically reactive pollutants in a hypothetical atmospheric boundary layer (ABL) in neutral stratification. The irreversible chemistry mechanism of ozone (O3) titration is integrated into the LES model. Nitric oxide (NO) is emitted from an elevated point source in a rectangular spatial domain doped with O3. The LES results compare well with the wind tunnel results available in the literature. Afterwards, the LES model is applied to idealized two-dimensional (2D) street canyons of unity aspect ratio to study the behaviour of a chemically reactive plume over idealized urban roughness. The relations among the various reaction and turbulence time scales and the associated dimensionless numbers are analysed.
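    For context, the irreversible ozone-titration step referred to above, and the Damköhler number conventionally used to compare turbulence and chemistry time scales in such reactive-plume studies, are (notation assumed here, not taken from the paper)

        NO + O_3 \rightarrow NO_2 + O_2, \qquad Da = \frac{\tau_{turb}}{\tau_{chem}},

    with Da much larger than 1 indicating chemistry that is fast relative to turbulent mixing, and Da much smaller than 1 the opposite limit.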

  11. Use of multiple water surface flow constructed wetlands for non-point source water pollution control.

    Science.gov (United States)

    Li, Dan; Zheng, Binghui; Liu, Yan; Chu, Zhaosheng; He, Yan; Huang, Minsheng

    2018-05-02

    Multiple free water surface flow constructed wetlands (multi-FWS CWs) are a variant of conventional water treatment systems used for the interception of pollutants. This review encapsulates their characteristics and applications in the field of ecological non-point source water pollution control technology. The roles of the in-series design and of operation parameters (hydraulic residence time, hydraulic loading rate, water depth and aspect ratio, composition of influent, and plant species) in performance intensification were also analyzed; these are crucial to achieving sustainable and effective contaminant removal, especially the retention of nutrients. Mechanistic studies of design and operation parameters for the removal of nitrogen and phosphorus are also highlighted. Perspectives for further research on optimizing design/operation parameters and on advanced technologies of ecological restoration are illustrated to help interpret the functions of multi-FWS CWs.

  12. Risk-based prioritization of ground water threatening point sources at catchment and regional scales

    DEFF Research Database (Denmark)

    Overheu, Niels Døssing; Tuxen, Nina; Flyvbjerg, John

    2014-01-01

    A framework has been developed to enable a systematic and transparent risk assessment and prioritization of contaminant point sources, considering the local, catchment, or regional scales (Danish EPA, 2011, 2012). The framework has been tested in several catchments in Denmark with different challenges... and needs, and two of these are presented. Based on the lessons learned, the Danish EPA has prepared a handbook to guide the user through the steps in a risk-based prioritization (Danish EPA, 2012). It provides guidance on prioritization both in an administratively defined area such as a Danish Region... of the results are presented using the case studies as examples. The methodology was developed by a broad industry group including the Danish EPA, the Danish Regions, the Danish Nature Agency, the Technical University of Denmark, and consultants, and the framework has been widely accepted by the professional

  13. Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model

    International Nuclear Information System (INIS)

    Henriques, W.D.; Dixon, K.R.

    1994-01-01

    Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point-source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using the EPA's SCREEN model with typical meteorological conditions to determine the maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land use considerations) and under flare release options, to determine whether these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to EPA reference concentrations (RfC) in order to assess the size of the buffer around the stack that may potentially have levels exceeding this level of safety

  14. Preliminary limits on the flux of muon neutrinos from extraterrestrial point sources

    International Nuclear Information System (INIS)

    Bionta, R.M.; Blewitt, G.; Bratton, C.B.

    1985-01-01

    We present the arrival directions of 117 upward-going muon events collected with the IMB proton lifetime detector during 317 days of live detector operation. The rate of upward-going muons observed in our detector was found to be consistent with the rate expected from atmospheric neutrino production. The upper limit on the total flux of extraterrestrial neutrinos >1 GeV is 2 -sec. Using our data and a Monte Carlo simulation of high energy muon production in the earth surrounding the detector, we place limits on the flux of neutrinos from a point source in the Vela X-2 system of 2 -sec with E > 1 GeV. 6 refs., 5 figs

  15. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of the varied reconstructed cavity and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is mainly divided into two steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness of the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes for varying point set densities and distributions.
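    A rough sketch of the kind of per-point measure described above is given below: for each point it computes the mean length of its incident Delaunay edges, a simplified proxy for the paper's compactness (the actual formulation and thresholding differ). Function names and the example threshold are hypothetical.

        import numpy as np
        from scipy.spatial import Delaunay

        def incident_edge_length(points):
            """Mean length of the Delaunay edges incident to each planar point."""
            tri = Delaunay(points)
            edges = set()
            for simplex in tri.simplices:
                for i, j in ((0, 1), (1, 2), (0, 2)):
                    a, b = sorted((int(simplex[i]), int(simplex[j])))
                    edges.add((a, b))
            total = np.zeros(len(points))
            count = np.zeros(len(points))
            for a, b in edges:
                d = np.linalg.norm(points[a] - points[b])
                total[a] += d; total[b] += d
                count[a] += 1; count[b] += 1
            return total / np.maximum(count, 1)

        # Points adjoining long edges flag candidate cavity/hole boundaries, e.g.
        # m = incident_edge_length(pts); boundary_candidates = pts[m > 2.0 * np.median(m)]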

  16. A Research on Fast Face Feature Points Detection on Smart Mobile Devices

    Directory of Open Access Journals (Sweden)

    Xiaohe Li

    2018-01-01

    Full Text Available We explore how to improve the performance of face feature point detection on mobile terminals from three aspects. First, we optimize the models used in SDM algorithms via PCA and Spectrum Clustering. Second, we propose an evaluation criterion using Linear Discriminant Analysis to choose the best local feature descriptors, which play a critical role in feature point detection. Third, we take advantage of the multicore architecture of the mobile terminal and parallelize the optimized SDM algorithm to improve the efficiency further. The experimental observations show that our final GPC-SDM (improved Supervised Descent Method using spectrum clustering, PCA, and GPU acceleration) suppresses memory usage, which is beneficial and efficient for meeting the real-time requirements.

  17. End point detection in ion milling processes by sputter-induced optical emission spectroscopy

    International Nuclear Information System (INIS)

    Lu, C.; Dorian, M.; Tabei, M.; Elsea, A.

    1984-01-01

    The characteristic optical emission from the sputtered material during ion milling processes can provide an unambiguous indication of the presence of the specific etched species. By monitoring the intensity of a representative emission line, the etching process can be precisely terminated at an interface. Enhancement of the etching end-point signal is possible by using a dual-channel photodetection system operating in a ratio or difference mode. The installation of the optical detection system on an existing etching chamber has been greatly facilitated by the use of optical fibers. Using a commercial ion milling system, experimental data for a number of etching processes have been obtained. The results demonstrate that sputter-induced optical emission spectroscopy offers many advantages over other techniques in detecting the etching end point of ion milling processes

  18. A Colorimetric Enzyme-Linked Immunosorbent Assay (ELISA) Detection Platform for a Point-of-Care Dengue Detection System on a Lab-on-Compact-Disc

    Science.gov (United States)

    Thiha, Aung; Ibrahim, Fatimah

    2015-01-01

    The enzyme-linked Immunosorbent Assay (ELISA) is the gold standard clinical diagnostic tool for the detection and quantification of protein biomarkers. However, conventional ELISA tests have drawbacks in their requirement of time, expensive equipment and expertise for operation. Hence, for the purpose of rapid, high throughput screening and point-of-care diagnosis, researchers are miniaturizing sandwich ELISA procedures on Lab-on-a-Chip and Lab-on-Compact Disc (LOCD) platforms. This paper presents a novel integrated device to detect and interpret the ELISA test results on a LOCD platform. The system applies absorption spectrophotometry to measure the absorbance (optical density) of the sample using a monochromatic light source and optical sensor. The device performs automated analysis of the results and presents absorbance values and diagnostic test results via a graphical display or via Bluetooth to a smartphone platform which also acts as controller of the device. The efficacy of the device was evaluated by performing dengue antibody IgG ELISA on 64 hospitalized patients suspected of dengue. The results demonstrate high accuracy of the device, with 95% sensitivity and 100% specificity in detection when compared with gold standard commercial ELISA microplate readers. This sensor platform represents a significant step towards establishing ELISA as a rapid, inexpensive and automatic testing method for the purpose of point-of-care-testing (POCT) in resource-limited settings. PMID:25993517

  19. A Colorimetric Enzyme-Linked Immunosorbent Assay (ELISA) Detection Platform for a Point-of-Care Dengue Detection System on a Lab-on-Compact-Disc.

    Science.gov (United States)

    Thiha, Aung; Ibrahim, Fatimah

    2015-05-18

    The enzyme-linked Immunosorbent Assay (ELISA) is the gold standard clinical diagnostic tool for the detection and quantification of protein biomarkers. However, conventional ELISA tests have drawbacks in their requirement of time, expensive equipment and expertise for operation. Hence, for the purpose of rapid, high throughput screening and point-of-care diagnosis, researchers are miniaturizing sandwich ELISA procedures on Lab-on-a-Chip and Lab-on-Compact Disc (LOCD) platforms. This paper presents a novel integrated device to detect and interpret the ELISA test results on a LOCD platform. The system applies absorption spectrophotometry to measure the absorbance (optical density) of the sample using a monochromatic light source and optical sensor. The device performs automated analysis of the results and presents absorbance values and diagnostic test results via a graphical display or via Bluetooth to a smartphone platform which also acts as controller of the device. The efficacy of the device was evaluated by performing dengue antibody IgG ELISA on 64 hospitalized patients suspected of dengue. The results demonstrate high accuracy of the device, with 95% sensitivity and 100% specificity in detection when compared with gold standard commercial ELISA microplate readers. This sensor platform represents a significant step towards establishing ELISA as a rapid, inexpensive and automatic testing method for the purpose of point-of-care-testing (POCT) in resource-limited settings.
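    The absorbance computed by such a reader is simply the Beer-Lambert optical density of the sample relative to a blank; a minimal sketch (hypothetical intensities and cutoff, not values from the study) is:

        import math

        def absorbance(i_sample, i_blank):
            """Optical density from transmitted intensities: A = log10(I_blank / I_sample)."""
            return math.log10(i_blank / i_sample)

        def qualitative_call(a, cutoff):
            """Compare the measured OD against an assay-specific cutoff."""
            return "positive" if a >= cutoff else "negative"

        # e.g. absorbance(412.0, 980.0) -> about 0.38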

  20. Carbon dioxide capture and separation techniques for advanced power generation point sources

    Energy Technology Data Exchange (ETDEWEB)

    Pennline, H.W.; Luebke, D.R.; Morsi, B.I.; Heintz, Y.J.; Jones, K.L.; Ilconich, J.B.

    2006-09-01

    The capture/separation step for carbon dioxide (CO2) from large-point sources is a critical one with respect to the technical feasibility and cost of the overall carbon sequestration scenario. For large-point sources, such as those found in power generation, the carbon dioxide capture techniques being investigated by the in-house research area of the National Energy Technology Laboratory possess the potential for improved efficiency and reduced costs as compared to more conventional technologies. The investigated techniques can have wide applications, but the research has focused on capture/separation of carbon dioxide from flue gas (postcombustion from fossil fuel-fired combustors) and from fuel gas (precombustion, such as integrated gasification combined cycle – IGCC). With respect to fuel gas applications, novel concepts are being developed in wet scrubbing with physical absorption; chemical absorption with solid sorbents; and separation by membranes. In one concept, a wet scrubbing technique is being investigated that uses a physical solvent process to remove CO2 from fuel gas of an IGCC system at elevated temperature and pressure. The need to define an ideal solvent has led to the study of the solubility and mass transfer properties of various solvents. Fabrication techniques and mechanistic studies for hybrid membranes separating CO2 from the fuel gas produced by coal gasification are also being performed. Membranes that consist of CO2-philic silanes incorporated into an alumina support or ionic liquids encapsulated into a polymeric substrate have been investigated for permeability and selectivity. An overview of two novel techniques is presented along with a research progress status of each technology.

  1. Carbon Dioxide Capture and Separation Techniques for Gasification-based Power Generation Point Sources

    Energy Technology Data Exchange (ETDEWEB)

    Pennline, H.W.; Luebke, D.R.; Jones, K.L.; Morsi, B.I. (Univ. of Pittsburgh, PA); Heintz, Y.J. (Univ. of Pittsburgh, PA); Ilconich, J.B. (Parsons)

    2007-06-01

    The capture/separation step for carbon dioxide (CO2) from large-point sources is a critical one with respect to the technical feasibility and cost of the overall carbon sequestration scenario. For large-point sources, such as those found in power generation, the carbon dioxide capture techniques being investigated by the in-house research area of the National Energy Technology Laboratory possess the potential for improved efficiency and reduced costs as compared to more conventional technologies. The investigated techniques can have wide applications, but the research has focused on capture/separation of carbon dioxide from flue gas (post-combustion from fossil fuel-fired combustors) and from fuel gas (precombustion, such as integrated gasification combined cycle or IGCC). With respect to fuel gas applications, novel concepts are being developed in wet scrubbing with physical absorption; chemical absorption with solid sorbents; and separation by membranes. In one concept, a wet scrubbing technique is being investigated that uses a physical solvent process to remove CO2 from fuel gas of an IGCC system at elevated temperature and pressure. The need to define an ideal solvent has led to the study of the solubility and mass transfer properties of various solvents. Pertaining to another separation technology, fabrication techniques and mechanistic studies for membranes separating CO2 from the fuel gas produced by coal gasification are also being performed. Membranes that consist of CO2-philic ionic liquids encapsulated into a polymeric substrate have been investigated for permeability and selectivity. Finally, dry, regenerable processes based on sorbents are additional techniques for CO2 capture from fuel gas. An overview of these novel techniques is presented along with a research progress status of technologies related to membranes and physical solvents.

  2. Field-scale operation of methane biofiltration systems to mitigate point source methane emissions

    International Nuclear Information System (INIS)

    Hettiarachchi, Vijayamala C.; Hettiaratchi, Patrick J.; Mehrotra, Anil K.; Kumar, Sunil

    2011-01-01

    Methane biofiltration (MBF) is a novel low-cost technique for reducing low-volume point source emissions of methane (CH4). MBF uses a granular medium, such as soil or compost, to support the growth of the methanotrophic bacteria responsible for converting CH4 to carbon dioxide (CO2) and water (H2O). A field research program was undertaken to evaluate the potential to treat low-volume point source engineered CH4 emissions using an MBF at a natural gas monitoring station. A new comprehensive three-dimensional numerical model was developed incorporating advective-diffusive flow of gas, biological reactions, and heat and moisture flow. The one-dimensional version of this model was used as a guiding tool for designing and operating the MBF. The long-term monitoring results of the field MBF are also presented. The field MBF, operated with no control of precipitation, evaporation, and temperature, provided more than 80% CH4 oxidation throughout the spring, summer, and fall seasons. The numerical model was able to predict the CH4 oxidation behavior of the field MBF with high accuracy. Numerical model simulations are presented for estimating CH4 oxidation efficiencies under various operating conditions, including different filter bed depths and CH4 flux rates. The field observations as well as the numerical model simulations indicated that the long-term performance of MBFs is strongly dependent on environmental factors, such as ambient temperature and precipitation. - Highlights: → The one-dimensional version of the model was used as a guiding tool for designing and operating the MBF. → The numerical model predicted the CH4 oxidation behavior of the field MBF (>80% oxidation) with high accuracy. → The performance of the MBF is dependent on ambient temperature and precipitation. - The developed numerical model simulations and field observations for estimating CH4 oxidation efficiencies under various operating conditions indicate that the long-term performance of MBFs is strongly dependent on environmental factors, such as ambient temperature and precipitation.

  3. Statistical data evaluation in mobile gamma spectrometry. An optimisation of on-line search strategies in the scenario of lost point sources

    International Nuclear Information System (INIS)

    Hjerpe, T.; Samuelsson, C.

    1999-01-01

    There is a potential risk that hazardous radioactive sources could enter the environment, e.g. via satellite debris, smuggled radioactive goods or lost metal scrap. From a radiation protection point of view there is a need for rapid and reliable methods for locating and identifying such sources. Car-borne and air-borne detector systems are suitable for the task. The scenario considered in this work is one where the missing radionuclide is known, which is not unlikely. The possibility that the source is located near a road can be high, thus motivating a car-borne spectrometer system. The main objective is to optimise on-line statistical methods in order to achieve a high probability of locating point sources, or hot spots, while still having reasonably few false alarms from variations in the natural background radiation. Data were obtained from a car-borne 3-litre NaI(Tl) detector and two point sources located at various distances from the road. The nuclides used were 137Cs and 131I. Spectra were measured stationary on the road. From these measured spectra we have reconstructed spectra applicable to different speeds and sampling times; a sampling time of 3 seconds and a speed of 50 km/h are used in this work. The maximum distance at which a source can be located from the road and still be detected is estimated with four different statistical analysis methods. This distance is called the detection distance, DD. The methods are applied to gross counts in the full-energy peak window. For each method, alarm thresholds have been calculated from background data obtained in Scania (Skaane), in the south of Sweden. The results show a 30-50% difference in DDs. With this semi-theoretical approach, the two sources could be detected from 250 m (137Cs, 6 GBq) and 200 m (131I, 4 GBq). (au)
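    The abstract compares four statistical methods that are not reproduced here; as a minimal illustration of the general idea, the sketch below sets an alarm threshold on gross counts in the full-energy peak window from background survey data, using Gaussian-approximated counting statistics and a chosen false-alarm probability (all names and numbers hypothetical).

        import numpy as np
        from scipy.stats import norm

        def alarm_threshold(background_counts, false_alarm_prob=1e-4):
            """One-sided threshold on gross counts per sampling interval (e.g. 3 s)."""
            mu = np.mean(background_counts)
            k = norm.isf(false_alarm_prob)            # one-sided Gaussian quantile
            return mu + k * np.sqrt(mu)               # counting statistics only

        def alarms(sample_counts, threshold):
            return np.asarray(sample_counts) > threshold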

  4. Electrically-detected electron paramagnetic resonance of point centers in 6H-SiC nanostructures

    Czech Academy of Sciences Publication Activity Database

    Bagraev, N.T.; Gets, D.S.; Kalabukhova, E.N.; Klyachkin, L.E.; Malyarenko, A.M.; Mashkov, V.A.; Savchenko, Dariia; Shanina, B.D.

    2014-01-01

    Vol. 48, No. 11 (2014), pp. 1467-1480 ISSN 1063-7826 R&D Projects: GA MŠk(CZ) LM2011029 Grant - others: SAFMAT(XE) CZ.2.16/3.1.00/22132 Institutional support: RVO:68378271 Keywords: electron paramagnetic resonance * electrically-detected electron paramagnetic resonance * 6H-SiC nanostructures * nitrogen-vacancy defect * point defect Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.739, year: 2014

  5. Neutron activation analysis detection limits using 252Cf sources

    International Nuclear Information System (INIS)

    DiPrete, D.P.; Sigg, R.A.

    2000-01-01

    The Savannah River Technology Center (SRTC) developed a neutron activation analysis (NAA) facility several decades ago using low-flux 252Cf neutron sources. Over this time, the facility has addressed areas of applied interest in managing the Savannah River Site (SRS). Some applications are unique because of the site's operating history and its chemical-processing facilities. Because the sensitivity needs of many applications are not severe, they can be accomplished using an ∼6-mg 252Cf NAA facility. The SRTC 252Cf facility continues to support applied research programs at SRTC as well as other SRS programs for environmental and waste management customers. Samples analyzed by NAA include organic compounds, metal alloys, sediments, site process solutions, and many other materials. Numerous radiochemical analyses also rely on the facility for production of short-lived tracers, yielding by activation of carriers, and small-scale isotope production for separation methods testing. These applications are more fully reviewed in Ref. 1. Although the flux [approximately 2 x 10⁷ n/cm²·s] is low relative to reactor facilities, more than 40 elements can be detected at low and sub-part-per-million levels. Detection limits provided by the facility are adequate for many analytical projects. Other multielement analysis methods, particularly inductively coupled plasma atomic emission and inductively coupled plasma mass spectrometry, can now provide sensitivities on dissolved samples that are often better than those available by NAA using low-flux isotopic sources. Because NAA allows analysis of bulk samples, (a) it is a more cost-effective choice than methods that require digestion when its sensitivity is adequate, and (b) it eliminates uncertainties that can be introduced by digestion processes

  6. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    Stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ħω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles. The scattered radiation spectral density is ρ(ω,r) = ρ(ω)·c·σ(ω)/4πr². The dipole-scattered spectral density of the Universe is ρ = ∫₀ᴿ n ρ(ω,r) 4πr² dr. But if σ_atom ≈ σ_e = σ_T, then ρ ≈ ρ(ω) σ_T R n. Moreover, if ρ = ρ(ω) then σ_T R n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the condition σ_T R n ≈ 1 is equivalent to R/r_e = e²/G m_p m_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)

  7. An effective dose assessment technique with NORM added consumer products using skin-point source on computational human phantom

    International Nuclear Information System (INIS)

    Yoo, Do Hyeon; Shin, Wook-Geun; Lee, Hyun Cheol; Choi, Hyun Joon; Testa, Mauro; Lee, Jae Kook; Yeom, Yeon Soo; Kim, Chan Hyeong; Min, Chul Hee

    2016-01-01

    The aim of this study is to develop a technique for assessing the effective dose by calculating the organ equivalent dose with a Monte Carlo (MC) simulation and a computational human phantom for naturally occurring radioactive material (NORM) added consumer products. In this study, we suggest a method for determining the MC source term based on a skin-point source, enabling convenient and conservative modeling of various types of products. To validate the skin-point source method, the organ equivalent doses were compared with those obtained using realistically shaped product-modeling sources for a pillow, waist supporter, sleeping mattress, etc. Our results show that the organ equivalent doses followed a similar tendency with source location for both methods; however, the annual effective dose with the skin-point source was more conservative than that with the modeling source, by up to a factor of 3.3. With the assumption of a gamma energy of 1 MeV and a product activity of 1 Bq g⁻¹, the annual effective doses of the pillow, waist supporter and sleeping mattress with the skin-point source were 3.09E-16 Sv Bq⁻¹ year⁻¹, 1.45E-15 Sv Bq⁻¹ year⁻¹, and 2.82E-16 Sv Bq⁻¹ year⁻¹, respectively, while the product modeling source gave 9.22E-17 Sv Bq⁻¹ year⁻¹, 9.29E-16 Sv Bq⁻¹ year⁻¹, and 8.83E-17 Sv Bq⁻¹ year⁻¹, respectively. In conclusion, it was demonstrated in this study that the skin-point source method can be employed to efficiently evaluate the annual effective dose due to the usage of NORM added consumer products. - Highlights: • We evaluate the exposure dose from the usage of NORM added consumer products. • We suggest a method for determining the MC source term based on the skin-point source. • To validate the skin-point source, the organ equivalent doses were compared with those from the modeling source. • The skin-point source could

  8. FAST AND ROBUST SEGMENTATION AND CLASSIFICATION FOR CHANGE DETECTION IN URBAN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    X. Roynard

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The advantage of working on images is that processing is much faster, proven and robust. However there may be a loss of information in complex 3D cases: for example when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in the complex 3D cases.

  9. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    Science.gov (United States)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The advantage of working on images is that processing is much faster, proven and robust. However there may be a loss of information in complex 3D cases: for example when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in the complex 3D cases.
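
    The two records above describe a segment-then-classify pipeline (octree-based region growing followed by Random Forest classification of per-segment descriptors). The sketch below illustrates the general idea only, assuming a simple KD-tree region growing and toy descriptors rather than the authors' octree implementation; all parameter values and names are hypothetical.

```python
# Minimal sketch of a segment-then-classify pipeline for urban point clouds.
# It uses a simple KD-tree region growing instead of the paper's octree, and
# toy per-segment descriptors; it is an illustration, not the authors' method.
import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, radius=0.3):
    """Group points into connected segments: two points share a segment
    if they are closer than `radius` (directly or through neighbours)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    return labels

def segment_descriptors(points, labels):
    """Very simple per-segment descriptors: bounding-box extent, mean height,
    and point count (real descriptors would be much richer)."""
    feats = []
    for seg in np.unique(labels):
        p = points[labels == seg]
        extent = p.max(axis=0) - p.min(axis=0)
        feats.append([*extent, p[:, 2].mean(), len(p)])
    return np.array(feats)

# toy demonstration on two well-separated synthetic clusters
rng = np.random.default_rng(42)
pts = np.vstack([rng.random((50, 3)), rng.random((50, 3)) + 5.0])
labs = region_growing(pts, radius=0.5)
feats = segment_descriptors(pts, labs)
# Given annotated segments (e.g. car / not-car), a classifier such as
# sklearn.ensemble.RandomForestClassifier would then be trained on these
# descriptors and used to label segments from a new epoch for change analysis.
print(len(np.unique(labs)), "segments", feats.shape)
```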

  10. Analysis of ultrasonically rotating droplet using moving particle semi-implicit and distributed point source methods

    Science.gov (United States)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2016-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.

  11. Lead in the blood of children living close to industrial point sources in Bulgaria and Poland

    Science.gov (United States)

    Willeke-Wetstein, C.; Bainova, A.; Georgieva, R.; Huzior-Balajewicz, A.; Bacon, J. R.

    2003-05-01

    In Eastern European countries some industrial point sources are still suspected to have unacceptable emission rates of lead that pose a major health risk, in particular to children. An interdisciplinary research project under the auspices of the EU had the aims (1) to monitor the current contamination of two industrial zones in Bulgaria and Poland, (2) to relate the Pb levels in ecological strata to the internal exposure of children, and (3) to develop public health strategies in order to reduce the health risk posed by heavy metals. The human monitoring of Pb in Poland did not show increased health risks for children living in an industrial zone close to Krakow. Bulgarian children, however, exceeded the WHO limit of 100 μg lead per litre of blood by over one hundred percent (240 μg/l). Samples of soil, fodder and livestock organs showed elevated concentrations of lead. Recent literature results are compared with the findings in Bulgaria and Poland. The sources of the high internal exposure of children are discussed. Public health strategies to prevent mental dysfunction in Bulgarian children at risk include awareness building and social measures.

  12. Biosolid stockpiles are a significant point source for greenhouse gas emissions.

    Science.gov (United States)

    Majumder, Ramaprasad; Livesley, Stephen J; Gregory, David; Arndt, Stefan K

    2014-10-01

    The wastewater treatment process generates large amounts of sewage sludge that are dried and then often stored in biosolid stockpiles at treatment plants. Because the biosolids are rich in decomposable organic matter they could be a significant source of greenhouse gas (GHG) emissions, yet there are no direct measurements of GHG from stockpiles. We therefore measured the direct emissions of methane (CH4), nitrous oxide (N2O) and carbon dioxide (CO2) on a monthly basis from three different age classes of biosolid stockpiles at the Western Treatment Plant (WTP), Melbourne, Australia, from December 2009 to November 2011 using manual static chambers. All biosolid stockpiles were a significant point source for CH4 and N2O emissions. The youngest biosolids (nitrate and ammonium concentration. We also modeled CH4 emissions with a first-order decay model; the model-based estimates of annual CH4 emissions were higher than the direct field-based estimates. Our results indicate that labile organic material in stockpiles is decomposed over time and that nitrogen decomposition processes lead to significant N2O emissions. Carbon decomposition favors CO2 over CH4 production, probably because of aerobic stockpile conditions or CH4 oxidation in the outer stockpile layers. Although the GHG emission rate decreased with biosolid age, managers of biosolid stockpiles should assess alternative storage or uses for biosolids to avoid nutrient losses and GHG emissions. Copyright © 2014 Elsevier Ltd. All rights reserved.
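
    The record above mentions a first-order decay model for stockpile CH4. A generic form of such a model is shown below for orientation; the symbols and the annual integration are illustrative and are not taken from the paper's parameterisation.

```latex
% Generic first-order decay form for CH4 generation from degradable organic
% matter (symbols illustrative): E(t) = emission rate at stockpile age t,
% M = biosolid mass, L0 = CH4 generation potential per unit mass,
% k = first-order decay constant; t and the integration limits are in years.
\[
  E(t) = M L_{0} k\, e^{-kt},
  \qquad
  E_{\text{annual}}(t) = \int_{t}^{t+1} E(\tau)\,\mathrm{d}\tau
  = M L_{0}\left(e^{-kt} - e^{-k(t+1)}\right).
\]
```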

  13. NOx emissions from large point sources: variability in ozone production, resulting health damages and economic costs

    International Nuclear Information System (INIS)

    Mauzerall, D.L.; Namsoug Kim

    2005-01-01

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting O3-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used 'cap and trade' approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in resulting health effects due to O3 formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage. (author)

  14. The energy sources and nuclear energy - The point of view of the Belgian Catholic Church

    International Nuclear Information System (INIS)

    Hoenraet, Christian

    2000-01-01

    The problems related to the environment are reported regularly to the public in the newspapers and on radio and television. The story is the product of a journalistic process and in general does not bear much resemblance to the original event. The rate and type of reportage depend not only on the body of data available to the journalist but also on the information sources the journalist chooses to use. The same story may be reported in a positive or a negative way. Eventually people are overwhelmed by contradictory information and become uncertain or frightened. In order to provide the general public with objective information about nuclear energy in particular, and to make a statement about the position of the Belgian Catholic Church on this matter, the results of the study were published in Dutch in the form of a book entitled 'The Energy Sources and Nuclear Energy - Comparative Analysis and Ethical Thoughts', written by the same author. This paper is a short survey of the results of the study and presents the point of view of the Belgian Catholic Church in the energy debate

  15. Eddy covariance methane flux measurements over a grazed pasture: effect of cows as moving point sources

    Science.gov (United States)

    Felber, R.; Münger, A.; Neftel, A.; Ammann, C.

    2015-06-01

    Methane (CH4) from ruminants contributes one-third of global agricultural greenhouse gas emissions. The eddy covariance (EC) technique has been extensively used at various flux sites to investigate the carbon dioxide exchange of ecosystems. Since the development of fast CH4 analyzers, the instrumentation at many flux sites has been extended to these gases. However, the application of EC over pastures is challenging due to the spatially and temporally uneven distribution of CH4 point sources induced by the grazing animals. We applied EC measurements during one grazing season over a pasture with 20 dairy cows (mean milk yield: 22.7 kg d⁻¹) managed in a rotational grazing system. Individual cow positions were recorded by GPS trackers to attribute fluxes to animal emissions using a footprint model. Methane fluxes with cows in the footprint were up to 2 orders of magnitude higher than ecosystem fluxes without cows. Mean cow emissions of 423 ± 24 g CH4 head⁻¹ d⁻¹ (best estimate from this study) correspond well to animal respiration chamber measurements reported in the literature. However, a systematic effect of the distance between source and EC tower on cow emissions was found, which is attributed to the analytical footprint model used. We show that the EC method allows one to determine the CH4 emissions of cows on a pasture if the data evaluation is adjusted for this purpose and if some cow distribution information is available.

  16. Luminosity distribution in the central regions of Messier 87: Isothermal core, point source, or black hole

    International Nuclear Information System (INIS)

    de Vaucouleurs, G.; Nieto, J.

    1979-01-01

    A combination of photographic and photoelectric photometry with the McDonald 2 m reflector is used to derive a precise mean luminosity profile μ_B(r*) of M87 (jet excluded) at ≈0''.6 resolution out to r* = 70''. Within 8'' of the center the luminosity is less than predicted by extrapolation of the r^1/4 law defined by the main body of the galaxy (8'' < r* < 70''). For an adopted distance modulus μ_0 = 30.5, the structural length of the underlying isothermal is α = 2''.78 = 170 pc, the mass of the ''black hole'' is M_0 = 1.7×10⁹ M_sun, and the luminosity of the point source (B_0 = 16.95, M_0 = -13.55) equals 4.2% of the integrated luminosity B(6'') = 13.52 of the galaxy within r* = 6''. These results agree closely with and confirm the work of the Hale team. Comparison of the McDonald and Hale data suggests that the central source may have been slightly brighter (≈0.5 mag) in 1964 than in 1975-1977.

  17. Point-source and diffuse high-energy neutrino emission from Type IIn supernovae

    Science.gov (United States)

    Petropoulou, M.; Coenders, S.; Vasilopoulos, G.; Kamble, A.; Sironi, L.

    2017-09-01

    Type IIn supernovae (SNe), a rare subclass of core collapse SNe, explode in dense circumstellar media that have been modified by the SNe progenitors at their last evolutionary stages. The interaction of the freely expanding SN ejecta with the circumstellar medium gives rise to a shock wave propagating in the dense SN environment, which may accelerate protons to multi-PeV energies. Inelastic proton-proton collisions between the shock-accelerated protons and those of the circumstellar medium lead to multimessenger signatures. Here, we evaluate the possible neutrino signal of Type IIn SNe and compare with IceCube observations. We employ a Monte Carlo method for the calculation of the diffuse neutrino emission from the SN IIn class to account for the spread in their properties. The cumulative neutrino emission is found to be ˜10 per cent of the observed IceCube neutrino flux above 60 TeV. Type IIn SNe would be the dominant component of the diffuse astrophysical flux, only if 4 per cent of all core collapse SNe were of this type and 20-30 per cent of the shock energy was channeled to accelerated protons. Lower values of the acceleration efficiency are accessible by the observation of a single Type IIn SN as a neutrino point source with IceCube using up-going muon neutrinos. Such an identification is possible in the first year following the SN shock breakout for sources within 20 Mpc.

  18. An ultrabright and monochromatic electron point source made of a LaB6 nanowire

    Science.gov (United States)

    Zhang, Han; Tang, Jie; Yuan, Jinshi; Yamauchi, Yasushi; Suzuki, Taku T.; Shinya, Norio; Nakajima, Kiyomi; Qin, Lu-Chang

    2016-03-01

    Electron sources in the form of one-dimensional nanotubes and nanowires are an essential tool for investigations in a variety of fields, such as X-ray computed tomography, flexible displays, chemical sensors and electron optics applications. However, field emission instability and the need to work under high-vacuum or high-temperature conditions have imposed stringent requirements that are currently limiting the range of application of electron sources. Here we report the fabrication of a LaB6 nanowire with only a few La atoms bonded on the tip that emits collimated electrons from a single point with high monochromaticity. The nanostructured tip has a low work function of 2.07 eV (lower than that of Cs) while remaining chemically inert, two properties usually regarded as mutually exclusive. Installed in a scanning electron microscope (SEM) field emission gun, our tip shows a current density gain that is about 1,000 times greater than that achievable with W(310) tips, and no emission decay for tens of hours of operation. Using this new SEM, we acquired very low-noise, high-resolution images together with rapid chemical compositional mapping using a tip operated at room temperature and at 10-times higher residual gas pressure than that required for W tips.

  19. Non-point source pollution of glyphosate and AMPA in a rural basin from the southeast Pampas, Argentina.

    Science.gov (United States)

    Okada, Elena; Pérez, Débora; De Gerónimo, Eduardo; Aparicio, Virginia; Massone, Héctor; Costa, José Luis

    2018-05-01

    We measured the occurrence and seasonal variations of glyphosate and its metabolite, aminomethylphosphonic acid (AMPA), in different environmental compartments within the limits of an agricultural basin. This topic is of high relevance since glyphosate is the most applied pesticide in agricultural systems worldwide. We were able to quantify the seasonal variations of glyphosate that result mainly from endo-drift inputs, that is, from direct spraying either onto genetically modified (GM) crops (i.e., soybean and maize) or onto weeds in no-till practices. We found that both glyphosate and AMPA accumulate in soil, but the metabolite accumulates to a greater extent due to its higher persistence. Knowing that glyphosate and AMPA were present in soils (> 93% of detection for both compounds), we aimed to study the dispersion to other environmental compartments (surface water, stream sediments, and groundwater), in order to establish the degree of non-point source pollution. Also, we assessed the relationship between the water-table depth and glyphosate and AMPA levels in groundwater. All of the studied compartments had variable levels of glyphosate and AMPA. The highest frequency of detections was found in the stream sediments samples (glyphosate 95%, AMPA 100%), followed by surface water (glyphosate 28%, AMPA 50%) and then groundwater (glyphosate 24%, AMPA 33%). Despite glyphosate being considered a molecule with low vertical mobility in soils, we found that its detection in groundwater was strongly associated with the month where glyphosate concentration in soil was the highest. However, we did not find a direct relation between groundwater table depth and glyphosate or AMPA detections. This is the first simultaneous study of glyphosate and AMPA seasonal variations in soil, groundwater, surface water, and sediments within a rural basin.

  20. Using sorbent waste materials to enhance treatment of micro-point source effluents by constructed wetlands

    Science.gov (United States)

    Green, Verity; Surridge, Ben; Quinton, John; Matthews, Mike

    2014-05-01

    Sorbent materials are widely used in environmental settings as a means of enhancing pollution remediation. A key area of environmental concern is that of water pollution, including the need to treat micro-point sources of wastewater pollution, such as from caravan sites or visitor centres. Constructed wetlands (CWs) represent one means for effective treatment of wastewater from small wastewater producers, in part because they are believed to be economically viable and environmentally sustainable. Constructed wetlands have the potential to remove a range of pollutants found in wastewater, including nitrogen (N), phosphorus (P), biochemical oxygen demand (BOD) and carbon (C), whilst also reducing the total suspended solids (TSS) concentration in effluents. However, there remain particular challenges for P and N removal from wastewater in CWs, as well as the sometimes limited BOD removal within these treatment systems, particularly for micro-point sources of wastewater. It has been hypothesised that the amendment of CWs with sorbent materials can enhance their potential to treat wastewater, particularly through enhancing the removal of N and P. This paper focuses on data from batch and mesocosm studies that were conducted to identify and assess sorbent materials suitable for use within CWs. The aim in using sorbent material was to enhance the combined removal of phosphate (PO4-P) and ammonium (NH4-N). The key selection criteria for the sorbent materials were that they possess effective PO4-P, NH4-N or combined pollutant removal, come from low cost and sustainable sources, have potential for reuse, for example as a fertiliser or soil conditioner, and show limited potential for re-release of adsorbed nutrients. The sorbent materials selected for testing were alum sludge from water treatment works, ochre derived from minewater treatment, biochar derived from various feedstocks, plasterboard and zeolite. The performance of the individual sorbents was assessed through

  1. DEVELOPMENT OF THE MODEL OF GALACTIC INTERSTELLAR EMISSION FOR STANDARD POINT-SOURCE ANALYSIS OF FERMI LARGE AREA TELESCOPE DATA

    Energy Technology Data Exchange (ETDEWEB)

    Acero, F.; Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Université Paris Diderot, Service d’Astrophysique, CEA Saclay, F-91191 Gif sur Yvette (France); Ackermann, M.; Buehler, R. [Deutsches Elektronen Synchrotron DESY, D-15738 Zeuthen (Germany); Ajello, M. [Department of Physics and Astronomy, Clemson University, Kinard Lab of Physics, Clemson, SC 29634-0978 (United States); Albert, A.; Baldini, L.; Bloom, E. D.; Bottacini, E.; Caliandro, G. A.; Cameron, R. A. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bellazzini, R. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Bissaldi, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Bonino, R. [Istituto Nazionale di Fisica Nucleare, Sezione di Torino, I-10125 Torino (Italy); Brandt, T. J.; Buson, S. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Bregeon, J. [Laboratoire Univers et Particules de Montpellier, Université Montpellier, CNRS/IN2P3, Montpellier (France); Bruel, P., E-mail: isabelle.grenier@cea.fr, E-mail: casandjian@cea.fr [Laboratoire Leprince-Ringuet, École polytechnique, CNRS/IN2P3, Palaiseau (France); and others

    2016-04-01

    Most of the celestial γ rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ∼4° of the Galactic Center.

  2. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and in situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
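
    The VFOM itself is not specified in detail in the record above, but it is contrasted with conventional least-residual arrival-time location. For orientation, the sketch below shows a classical least-squares arrival-time localization of a single point source (with a robust loss to soften one large picking error); the sensor layout, velocity and picks are hypothetical, and this is not an implementation of the VFOM.

```python
# Minimal sketch of classical least-squares arrival-time localization, the kind
# of "least residual" baseline the VFOM record contrasts itself with; it is NOT
# an implementation of the VFOM. Sensor coordinates, wave speed and picks are
# hypothetical.
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.],
                    [0., 0., 100.], [100., 100., 50.]])   # sensor positions (m)
v = 3000.0                                                # wave speed (m/s)

def residuals(params, sensors, t_obs, v):
    """Residuals between observed and model arrival times.
    params = (x, y, z, t0): source position and origin time."""
    x, y, z, t0 = params
    dist = np.linalg.norm(sensors - np.array([x, y, z]), axis=1)
    return t_obs - (t0 + dist / v)

# synthetic picks from a known source, with one large picking error (LPE)
true_src, true_t0 = np.array([40., 60., 20.]), 0.0
t_obs = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / v
t_obs[2] += 0.05                                          # 50 ms picking error

fit = least_squares(residuals, x0=[50., 50., 50., 0.],
                    args=(sensors, t_obs, v), loss="soft_l1")  # robust loss
print("estimated source:", fit.x[:3])
```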

  3. FAST OCCLUSION AND SHADOW DETECTION FOR HIGH RESOLUTION REMOTE SENSING IMAGE COMBINED WITH LIDAR POINT CLOUD

    Directory of Open Access Journals (Sweden)

    X. Hu

    2012-08-01

    The orthophoto is an important component of a GIS database and has been applied in many fields. But occlusion and shadow cause a loss of feature information, which has a great effect on the quality of the images. One of the critical steps in true orthophoto generation is the detection of occlusion and shadow. Nowadays LiDAR can obtain the digital surface model (DSM) directly. Combined with this technology, image occlusion and shadow can be detected automatically. In this paper, the Z-Buffer is applied for occlusion detection. Shadow detection can be regarded as the same problem as occlusion detection, considering the angle between the sun and the camera. However, the Z-Buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing images is very large, so an efficient algorithm is another challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU). We introduce this technology to speed up the Z-Buffer algorithm and obtain a 7-fold increase in speed compared with the CPU. The results of the experiments demonstrate that the Z-Buffer algorithm performs well in occlusion and shadow detection combined with a high-density point cloud, and that the GPU can speed up the computation significantly.

  4. Fast Occlusion and Shadow Detection for High Resolution Remote Sensing Image Combined with LIDAR Point Cloud

    Science.gov (United States)

    Hu, X.; Li, X.

    2012-08-01

    The orthophoto is an important component of a GIS database and has been applied in many fields. But occlusion and shadow cause a loss of feature information, which has a great effect on the quality of the images. One of the critical steps in true orthophoto generation is the detection of occlusion and shadow. Nowadays LiDAR can obtain the digital surface model (DSM) directly. Combined with this technology, image occlusion and shadow can be detected automatically. In this paper, the Z-Buffer is applied for occlusion detection. Shadow detection can be regarded as the same problem as occlusion detection, considering the angle between the sun and the camera. However, the Z-Buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing images is very large, so an efficient algorithm is another challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU). We introduce this technology to speed up the Z-Buffer algorithm and obtain a 7-fold increase in speed compared with the CPU. The results of the experiments demonstrate that the Z-Buffer algorithm performs well in occlusion and shadow detection combined with a high-density point cloud, and that the GPU can speed up the computation significantly.
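
    The two records above use a Z-Buffer test to decide which DSM points are hidden from the camera (or, for shadows, from the sun). The sketch below illustrates the basic Z-buffer occlusion test on sampled DSM points, assuming a generic projection function; it is a plain CPU illustration, not the GPU implementation described in the records.

```python
# Illustrative Z-buffer occlusion test for true-orthophoto generation, assuming
# a user-supplied projection into the image geometry; grid size, camera model
# and tolerance are hypothetical and this is not the GPU code of the records.
import numpy as np

def z_buffer_occlusion(dsm_points, project, image_shape, eps=0.1):
    """dsm_points: (N, 3) ground points sampled from the DSM.
    project: function mapping (N, 3) points to (col, row, depth) arrays
    in the image geometry. Returns True where a DSM point is occluded."""
    cols, rows, depth = project(dsm_points)
    cols, rows = cols.astype(int), rows.astype(int)
    inside = (rows >= 0) & (rows < image_shape[0]) & \
             (cols >= 0) & (cols < image_shape[1])
    zbuf = np.full(image_shape, np.inf)
    # first pass: keep the nearest depth seen by each image pixel
    for c, r, d in zip(cols[inside], rows[inside], depth[inside]):
        if d < zbuf[r, c]:
            zbuf[r, c] = d
    # second pass: a point is occluded if something closer maps to its pixel
    occluded = np.zeros(len(dsm_points), dtype=bool)
    occluded[inside] = depth[inside] > zbuf[rows[inside], cols[inside]] + eps
    return occluded

# For shadow detection the same routine can be reused with a projection along
# the sun direction instead of the camera rays, as noted in the records above.
```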

  5. Test-retest reliability of myofascial trigger point detection in hip and thigh areas.

    Science.gov (United States)

    Rozenfeld, E; Finestone, A S; Moran, U; Damri, E; Kalichman, L

    2017-10-01

    Myofascial trigger points (MTrPs) are a primary source of pain in patients with musculoskeletal disorders. Nevertheless, they are frequently underdiagnosed. Reliable MTrP palpation is necessary for their diagnosis and treatment. The few studies that have examined intra-tester reliability of MTrP detection in the upper body provide preliminary evidence that MTrP palpation is reliable. Reliability tests for MTrP palpation on the lower limb have not yet been performed. To evaluate inter- and intra-tester reliability of MTrP recognition in hip and thigh muscles. Reliability study. 21 patients (15 males and 6 females, mean age 21.1 years) referred to the physical therapy clinic, 10 with knee or hip pain and 11 with pain in an upper limb, low back, shin or ankle. Two experienced physical therapists performed the examinations, blinded to the subjects' identity, medical condition and results of the previous MTrP evaluation. Each subject was evaluated four times, twice by each examiner in a random order. Dichotomous findings included a palpable taut band, tenderness, referred pain, and relevance of referred pain to the patient's complaint. Based on these, a diagnosis of latent or active MTrPs was established. The evaluation was performed on both legs and included a total of 16 locations in the following muscles: rectus femoris (proximal), vastus medialis (middle and distal), vastus lateralis (middle and distal) and gluteus medius (anterior, posterior and distal). Inter- and intra-tester reliability (Cohen's kappa (κ)) values for single sites ranged from -0.25 to 0.77. Median intra-tester reliability was 0.45 and 0.46 for latent and active MTrPs, and median inter-tester reliability was 0.51 and 0.64 for latent and active MTrPs, respectively. The examination of the distal vastus medialis was most reliable for latent and active MTrPs (intra-tester κ = 0.27-0.77, inter-tester κ = 0.77 and intra-tester κ = 0.53-0.72, inter-tester κ = 0.72, correspondingly

  6. The recovery of a time-dependent point source in a linear transport equation: application to surface water pollution

    International Nuclear Information System (INIS)

    Hamdi, Adel

    2009-01-01

    The aim of this paper is to localize the position of a point source and recover the history of its time-dependent intensity function that is both unknown and constitutes the right-hand side of a 1D linear transport equation. Assuming that the source intensity function vanishes before reaching the final control time, we prove that recording the state with respect to the time at two observation points framing the source region leads to the identification of the source position and the recovery of its intensity function in a unique manner. Note that at least one of the two observation points should be strategic. We establish an identification method that determines quasi-explicitly the source position and transforms the task of recovering its intensity function into solving directly a well-conditioned linear system. Some numerical experiments done on a variant of the water pollution BOD model are presented
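
    The record above does not reproduce the governing equation; for orientation, a generic 1D linear advection-dispersion-reaction equation with a time-dependent point source on its right-hand side, of the kind used in BOD surface-water pollution models, is written below. The symbols are illustrative, not the paper's notation.

```latex
% Generic 1D linear transport equation with a time-dependent point source;
% symbols illustrative: u = pollutant (BOD) concentration, D = dispersion
% coefficient, V = flow velocity, R = reaction coefficient, S = source
% position, \lambda(t) = unknown source intensity, \delta = Dirac distribution.
\[
  \partial_t u(x,t) - D\,\partial_{xx} u(x,t) + V\,\partial_x u(x,t) + R\,u(x,t)
  = \lambda(t)\,\delta(x-S),
  \qquad 0 < x < \ell,\; 0 < t < T.
\]
```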

  7. Detecting outliers and/or leverage points: a robust two-stage procedure with bootstrap cut-off points

    Directory of Open Access Journals (Sweden)

    Ettore Marubini

    2014-01-01

    This paper presents a robust two-stage procedure for the identification of outlying observations in regression analysis. The exploratory stage identifies leverage points and vertical outliers through a robust distance estimator based on the Minimum Covariance Determinant (MCD). After deletion of these points, the confirmatory stage carries out an Ordinary Least Squares (OLS) analysis on the remaining subset of data and investigates the effect of adding back in the previously deleted observations. Cut-off points pertinent to different diagnostics are generated by bootstrapping, and the cases are definitively labelled as good-leverage, bad-leverage, vertical outliers or typical cases. The procedure is applied to four examples.
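
    The record above combines MCD-based robust distances (exploratory stage) with an OLS refit on the cleaned data (confirmatory stage) and bootstrap cut-off points. The sketch below follows that spirit but, for brevity, replaces the bootstrap cut-offs with a chi-square quantile and flags leverage points and vertical outliers jointly; it is an illustration, not the authors' procedure, and all names are assumptions.

```python
# Sketch of a two-stage outlier/leverage screen in the spirit of the record
# above: MCD-based robust distances for the exploratory stage, then OLS on the
# retained cases. A chi-square cut-off replaces the paper's bootstrap cut-offs.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def two_stage_screen(X, y, alpha=0.975):
    X, y = np.asarray(X, float), np.asarray(y, float)
    # Stage 1 (exploratory): robust squared Mahalanobis distances on (X, y)
    # jointly flag both leverage points and vertical outliers.
    mcd = MinCovDet(random_state=0).fit(np.column_stack([X, y]))
    rd2 = mcd.mahalanobis(np.column_stack([X, y]))
    keep = rd2 <= chi2.ppf(alpha, df=X.shape[1] + 1)
    # Stage 2 (confirmatory): OLS fit on the clean subset.
    A = np.column_stack([np.ones(keep.sum()), X[keep]])
    beta, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
    return beta, ~keep                           # coefficients, flagged cases

# usage on synthetic data with a few planted vertical outliers
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.3, size=100)
y[:3] += 10.0                                    # vertical outliers
coef, flagged = two_stage_screen(X, y)
print(coef, flagged.sum())
```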

  8. PointFinder: a novel web tool for WGS-based detection of antimicrobial resistance associated with chromosomal point mutations in bacterial pathogens

    DEFF Research Database (Denmark)

    Zankari, Ea; Allesøe, Rosa Lundbye; Joensen, Katrine Grimstrup

    2017-01-01

    enterica, Escherichia coli and Campylobacter jejuni. The web-server ResFinder-2.1 was used to identify acquired antimicrobial resistance genes and two methods, the novel PointFinder (using BLAST) and an in-house method (mapping of raw WGS reads), were used to identify chromosomal point mutations. Results...... or when mapping the reads. Conclusions PointFinder proved, with high concordance between phenotypic and predicted antimicrobial susceptibility, to be a user-friendly web tool for detection of chromosomal point mutations associated with antimicrobial resistance....

  9. Test sensitivity is important for detecting variability in pointing comprehension in canines.

    Science.gov (United States)

    Pongrácz, Péter; Gácsi, Márta; Hegedüs, Dorottya; Péter, András; Miklósi, Adám

    2013-09-01

    Several articles have recently been published on dogs' (Canis familiaris) performance in two-way object choice experiments in which subjects had to find hidden food by utilizing human pointing. The interpretation of the results has led to a vivid theoretical debate about the cognitive background of human gestural signal understanding in dogs, despite the fact that many important details of the testing method have not yet been standardized. We report three experiments that aim to reveal how some procedural differences influence adult companion dogs' performance in these tests. Utilizing a large sample in Experiment 1, we provide evidence that neither the keeping conditions (garden/house) nor the location of the testing (outdoor/indoor) affect a dog's performance. In Experiment 2, we compare dogs' performance using three different types of pointing gestures. Dogs' performance varied between momentary distal and momentary cross-pointing, but "low" and "high" performer dogs chose uniformly better than chance level if they responded to sustained pointing gestures with reinforcement (food reward and a clicking sound; "clicker pointing"). In Experiment 3, we show that single features of the aforementioned "clicker pointing" method can slightly improve dogs' success rate if they are added one by one to the momentary distal pointing method. These results provide evidence that although companion dogs show a robust performance at different testing locations regardless of their keeping conditions, the exact execution of the human gesture and additional reinforcement techniques have a substantial effect on the outcomes. Consequently, researchers should standardize their methodology before engaging in debates on the comparative aspects of socio-cognitive skills, because the procedures they utilize may differ in sensitivity for detecting differences.

  10. THE 31 DEG² RELEASE OF THE STRIPE 82 X-RAY SURVEY: THE POINT SOURCE CATALOG

    Energy Technology Data Exchange (ETDEWEB)

    LaMassa, Stephanie M.; Urry, C. Megan; Ananna, Tonima; Civano, Francesca; Marchesi, Stefano; Pecoraro, Robert [Yale Center for Astronomy and Astrophysics, Physics Department, P.O. Box 208120, New Haven, CT 06520 (United States); Cappelluti, Nico; Comastri, Andrea; Brusa, Marcella [INAF-Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Böhringer, Hans; Chon, Gayoung [Max-Planck-Institut für extraterrestrische Physik, D-85748 Garching (Germany); Glikman, Eilat [Department of Physics, Middlebury College, Middlebury, VT 05753 (United States); Richards, Gordon [Department of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104 (United States); Cardamone, Carie [Department of Math and Science, Wheelock College, 200 Riverway, Boston, MA 02215 (United States); Farrah, Duncan [Department of Physics MC 0435, Virginia Polytechnic Institute and State University, 850 West Campus Drive, Blacksburg, VA 24061 (United States); Gilfanov, Marat [Max-Planck Institut für Astrophysik, Karl-Schwarzschild-Str. 1, Postfach 1317, D-85741 Garching (Germany); Green, Paul [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Komossa, S. [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany); Lira, Paulina [Departamento de Astronomia, Universidad de Chile, Camino del Observatorio 1515, Santiago (Chile); Makler, Martin [Centro Brasileiro de Pesquisas Fisicas, Rua Dr Xavier Sigaud 150, Rio de Janeiro, RJ 22290-180 (Brazil); and others

    2016-02-01

    We release the next installment of the Stripe 82 X-ray survey point-source catalog, which currently covers 31.3 deg² of the Sloan Digital Sky Survey (SDSS) Stripe 82 Legacy field. In total, 6181 unique X-ray sources are significantly detected with XMM-Newton (>5σ) and Chandra (>4.5σ). This catalog release includes data from XMM-Newton cycle AO 13, which approximately doubled the Stripe 82X survey area. The flux limits of the Stripe 82X survey are 8.7 × 10⁻¹⁶ erg s⁻¹ cm⁻², 4.7 × 10⁻¹⁵ erg s⁻¹ cm⁻², and 2.1 × 10⁻¹⁵ erg s⁻¹ cm⁻² in the soft (0.5–2 keV), hard (2–10 keV), and full bands (0.5–10 keV), respectively, with approximate half-area survey flux limits of 5.4 × 10⁻¹⁵ erg s⁻¹ cm⁻², 2.9 × 10⁻¹⁴ erg s⁻¹ cm⁻², and 1.7 × 10⁻¹⁴ erg s⁻¹ cm⁻². We matched the X-ray source lists to available multi-wavelength catalogs, including updated matches to the previous release of the Stripe 82X survey; 88% of the sample is matched to a multi-wavelength counterpart. Due to the wide area of Stripe 82X and rich ancillary multi-wavelength data, including coadded SDSS photometry, mid-infrared WISE coverage, near-infrared coverage from UKIDSS and the VISTA Hemisphere Survey, ultraviolet coverage from GALEX, radio coverage from FIRST, and far-infrared coverage from Herschel, as well as existing ∼30% optical spectroscopic completeness, we are beginning to uncover rare objects, such as obscured high-luminosity active galactic nuclei at high redshift. The Stripe 82X point source catalog is a valuable data set for constraining how this population grows and evolves, as well as for studying how they interact with the galaxies in which they live.

  11. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.

    Science.gov (United States)

    Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi

    2018-03-24

    In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
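
    The record above extracts changed objects above ground by casting change detection as a binary classification with graph cuts followed by region growing. The simplified sketch below conveys only the underlying idea, using plain DSM differencing, thresholding and connected-component grouping; the thresholds and names are illustrative and the graph-cuts and image-feature steps are not reproduced.

```python
# Simplified sketch of extracting candidate changed objects above ground from
# two gridded DSMs by differencing and connected-component grouping; the record
# above uses graph cuts plus region growing, which this does not reproduce.
import numpy as np
from scipy import ndimage

def candidate_changes(dsm_t1, dsm_t2, dz=2.5, min_cells=20):
    """dsm_t1, dsm_t2: 2D height grids of the two epochs (same grid, metres).
    dz: minimum height change treated as a change above ground.
    min_cells: minimum region size to keep (suppresses matching noise)."""
    diff = dsm_t2 - dsm_t1
    changed = np.abs(diff) > dz                 # binary change mask
    labels, n = ndimage.label(changed)          # group into candidate objects
    objects = []
    for lab in range(1, n + 1):
        mask = labels == lab
        if mask.sum() >= min_cells:
            kind = "newly built/taller" if diff[mask].mean() > 0 else "demolished/lower"
            objects.append((lab, int(mask.sum()), kind))
    return labels, objects
```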

  12. Generating Impact Maps from Automatically Detected Bomb Craters in Aerial Wartime Images Using Marked Point Processes

    Science.gov (United States)

    Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian

    2018-04-01

    The aftermath of wartime attacks is often felt long after the war ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images that were taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. Adding and removing new objects to and from the current configuration, respectively, changing their positions and modifying the ellipse parameters randomly creates new object configurations. Each configuration is evaluated using an energy function. High gradient magnitudes along the border of the ellipse are favored and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. For generating the impact map a probability map is defined which is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and its automated generation of an impact map in a heterogeneous image stock.
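
    The record above turns detected crater centres into an impact map via kernel density estimation followed by thresholding. The sketch below shows that final step under the assumption of a Gaussian kernel; the bandwidth, grid spacing and threshold are illustrative choices, not values from the paper, and the crater detection itself (marked point process, RJMCMC) is not reproduced.

```python
# Sketch of producing an impact map from detected crater centres via kernel
# density estimation and thresholding; parameters are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity

def impact_map(crater_xy, xmin, xmax, ymin, ymax, cell=10.0,
               bandwidth=50.0, threshold=None):
    """crater_xy: (N, 2) detected crater centres in map coordinates (m).
    Returns grid coordinates, a density map and a boolean 'contaminated' mask."""
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(crater_xy)
    xs = np.arange(xmin, xmax, cell)
    ys = np.arange(ymin, ymax, cell)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    density = np.exp(kde.score_samples(grid)).reshape(gy.shape)
    if threshold is None:
        threshold = 0.25 * density.max()        # illustrative cut-off
    return (gx, gy), density, density >= threshold
```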

  13. Glue detection based on teaching points constraint and tracking model of pixel convolution

    Science.gov (United States)

    Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen

    2018-01-01

    On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the car's inside front cover reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from the shadow stripes. Teaching points are utilized to calculate the slope between two adjacent points. Then a tracking model based on pixel convolution along the motion direction is designed to segment several local rectangular regions using a distance that defines the height of each rectangular region. Pixel convolution along the motion direction is then used to extract the edges of the glue in each local rectangular region. A dataset with different illumination conditions and stripe shapes of varying complexity, comprising 500 thousand images captured by the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of the glue accurately. The shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.

  14. An Instantaneous Low-Cost Point-of-Care Anemia Detection Device

    Directory of Open Access Journals (Sweden)

    Jaime Punter-Villagrasa

    2015-02-01

    We present a small, compact and portable device for point-of-care instantaneous early detection of anemia. The method used is based on direct hematocrit measurement from whole blood samples by means of impedance analysis. This device consists of custom electronic instrumentation and a plug-and-play disposable sensor. The designed electronics rely on straightforward standards for low power consumption, resulting in a robust, low-consumption device that is completely mobile with a long battery life. Another approach could be to power the system with other solutions such as indoor solar cells, or to apply energy-harvesting solutions in order to remove the batteries. The sensing system is based on a disposable, low-cost, label-free, three gold electrode commercial sensor for 50 µL blood samples. The device's capability for anemia detection has been validated with 24 blood samples obtained from four hospitalized patients at Hospital Clínic. As a result, the response, effectiveness and robustness of the portable point-of-care device for detecting anemia have been proved, with an accuracy error of 2.83% and a mean coefficient of variation of 2.57%, without any particular case above 5%.

  15. A Numerical Study on the Excitation of Guided Waves in Rectangular Plates Using Multiple Point Sources

    Directory of Open Access Journals (Sweden)

    Wenbo Duan

    2017-12-01

    Ultrasonic guided waves are widely used to inspect and monitor the structural integrity of plates and plate-like structures, such as ship hulls and large storage-tank floors. Recently, ultrasonic guided waves have also been used to remove ice and fouling from ship hulls, wind-turbine blades and aeroplane wings. In these applications, the strength of the sound source must be high for scanning a large area, or to break the bond between ice, fouling and the plate substrate. More than one transducer may be used to achieve maximum sound power output. However, multiple sources can interact with each other and form a sound field in the structure with local constructive and destructive regions. Destructive regions are weak regions and shall be avoided. When multiple transducers are used it is important that they are arranged in a particular way so that the desired wave modes can be excited to cover the whole structure. The objective of this paper is to provide a theoretical basis for generating particular wave mode patterns in finite-width rectangular plates whose length is assumed to be infinite with respect to their width and thickness. The wave modes have displacements in both the width and thickness directions, and are thus different from the classical Lamb-type wave modes. A two-dimensional semi-analytical finite element (SAFE) method was used to study dispersion characteristics and mode shapes in the plate up to ultrasonic frequencies. The modal analysis provided information on the generation of modes suitable for a particular application. The number of point sources and the direction of loading for the excitation of a few representative modes were investigated. Based on the SAFE analysis, a standard finite element modelling package, Abaqus, was used to excite the designed modes in a three-dimensional plate. The generated wave patterns in Abaqus were then compared with mode shapes predicted by the SAFE model. Good agreement was observed between the

  16. Effects of pointing compared with naming and observing during encoding on item and source memory in young and older adults.

    Science.gov (United States)

    Ouwehand, Kim; van Gog, Tamara; Paas, Fred

    2016-10-01

    Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.

  17. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    Science.gov (United States)

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

    Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in the first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.

  18. Experimental properties of gluon and quark jets from a point source

    CERN Document Server

    Abbiendi, G.; Alexander, G.; Allison, John; Altekamp, N.; Anderson, K.J.; Anderson, S.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Batley, J.R.; Baumann, S.; Bechtluft, J.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Bentvelsen, S.; Bethke, S.; Betts, S.; Biebel, O.; Biguzzi, A.; Blobel, V.; Bloodworth, I.J.; Bock, P.; Bohme, J.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Conboy, J.E.; Cooke, O.C.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Davis, R.; De Jong, S.; de Roeck, A.; Dervan, P.; Desch, K.; Dienes, B.; Dixit, M.S.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Estabrooks, P.G.; Etzion, E.; Fabbri, F.; Fanfani, A.; Fanti, M.; Faust, A.A.; Fiedler, F.; Fierro, M.; Fleck, I.; Folman, R.; Frey, A.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gascon, J.; Gascon-Shotkin, S.M.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Gibson, V.; Gibson, W.R.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Graham, K.; Gross, E.; Grunhaus, J.; Gruwe, M.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Harel, A.; Hargrove, C.K.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Herndon, M.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hobson, P.R.; Hoch, M.; Hocker, James Andrew; Hoffman, Kara Dion; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Kanzaki, J.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klier, A.; Kobayashi, T.; Kobel, M.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kyberd, P.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Lauber, J.; Lautenschlager, S.R.; Lawson, I.; Layter, J.G.; Lee, A.M.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Lu, J.; Ludwig, J.; Lui, D.; Macchiolo, A.; Macpherson, A.; Mader, W.; Mannelli, M.; Marcellini, S.; Markopoulos, C.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; Mckigney, E.A.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menke, S.; Merritt, F.S.; Mes, H.; Meyer, J.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mir, R.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oreglia, M.J.; Orito, S.; Palinkas, J.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Petzold, S.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poffenberger, P.; Poli, B.; Polok, J.; Przybycien, M.; Rembser, C.; Rick, H.; Robertson, S.; Robins, S.A.; Rodning, N.; Roney, J.M.; Rosati, S.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schmitt, S.; Schoning, A.; Schroder, Matthias; Schumacher, M.; Schwick, C.; 
Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Sittler, A.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Spagnolo, S.; Sproston, M.; Stahl, A.; Stephens, K.; Steuerer, J.; Stoll, K.; Strom, David M.; Strohmer, R.; Surrow, B.; Talbot, S.D.; Taras, P.; Tarem, S.; Teuscher, R.; Thiergen, M.; Thomas, J.; Thomson, M.A.; Torrence, E.; Towers, S.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turcot, A.S.; Turner-Watson, M.F.; Ueda, I.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Voss, H.; Wackerle, F.; Wagner, A.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wermes, N.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Yekutieli, G.; Zacek, V.; Zer-Zion, D.

    1999-01-01

    Gluon jets are identified in hadronic Z0 decays as all the particles in a hemisphere opposite to a hemisphere containing two tagged quark jets. Gluon jets defined in this manner are equivalent to gluon jets produced from a color singlet point source and thus correspond to the definition employed for most theoretical calculations. In a separate stage of the analysis, we select quark jets in a manner to correspond to calculations, as the particles in hemispheres of flavor tagged light quark (uds) events. We present the distributions of rapidity, scaled energy, the logarithm of the momentum, and transverse momentum with respect to the jet axes, for charged particles in these gluon and quark jets. We also examine the charged particle multiplicity distributions of the jets in restricted intervals of rapidity. For soft particles at large transverse momentum, we observe the charged particle multiplicity ratio of gluon to quark jets to be 2.29 ± 0.09 ± 0.15 in agreement with the prediction that this ratio should ap...

  19. Novel Remarks on Point Mass Sources, Firewalls, Null Singularities and Gravitational Entropy

    Science.gov (United States)

    Perelman, Carlos Castro

    2016-01-01

    A continuous family of static spherically symmetric solutions of Einstein's vacuum field equations with a spatial singularity at the origin r = 0 is found. These solutions are parametrized by a real valued parameter λ (ranging from 0 to 1) such that the radial location of the horizon is displaced continuously towards the singularity (r = 0) as λ increases. In the extreme limit λ = 1, the locations of the singularity and the horizon merge, leading to a null singularity. In this extreme case, any infalling observer hits the null singularity at the very moment he/she crosses the horizon. This fact may have important consequences for the resolution of the firewall problem and the complementarity controversy in black holes. A heuristic argument is provided for how one might avoid the Hawking particle emission process in this extreme case when the singularity and horizon merge. The field equations due to a delta-function point-mass source at r = 0 are solved and the Euclidean gravitational action corresponding to those solutions is evaluated explicitly. It is found that the Euclidean action is precisely equal to the black hole entropy (in Planck area units). This result holds in any dimension D ≥ 3.

  20. Application of distributed point source method (DPSM) to wave propagation in anisotropic media

    Science.gov (United States)

    Fooladi, Samaneh; Kundu, Tribikram

    2017-04-01

    Distributed Point Source Method (DPSM) was developed by Placko and Kundu [1] as a technique for modeling electromagnetic and elastic wave propagation problems. DPSM has been used for modeling ultrasonic, electrostatic and electromagnetic fields scattered by defects and anomalies in a structure. The modeling of such scattered field helps to extract valuable information about the location and type of defects. Therefore, DPSM can be used as an effective tool for Non-Destructive Testing (NDT). Anisotropy adds to the complexity of the problem, both mathematically and computationally. Computation of the Green's function which is used as the fundamental solution in DPSM is considerably more challenging for anisotropic media, and it cannot be reduced to a closed-form solution as is done for isotropic materials. The purpose of this study is to investigate and implement DPSM for an anisotropic medium. While the mathematical formulation and the numerical algorithm will be considered for general anisotropic media, more emphasis will be placed on transversely isotropic materials in the numerical example presented in this paper. The unidirectional fiber-reinforced composites which are widely used in today's industry are good examples of transversely isotropic materials. Development of an effective and accurate NDT method based on these modeling results can be of paramount importance for in-service monitoring of damage in composite structures.

  1. Economic-environmental modeling of point source pollution in Jefferson County, Alabama, USA.

    Science.gov (United States)

    Kebede, Ellene; Schreiner, Dean F; Huluka, Gobena

    2002-05-01

    This paper uses an integrated economic-environmental model to assess the point source pollution from major industries in Jefferson County, Northern Alabama. Industrial expansion generates employment, income, and tax revenue for the public sector; however, it is also often associated with the discharge of chemical pollutants. Jefferson County is one of the largest industrial counties in Alabama and experienced smog warnings and elevated ambient ozone concentrations during 1996-1999. Past studies of chemical discharge from industries have used models to assess the pollution impact of individual plants. This study, however, uses an extended Input-Output (I-O) economic model with pollution emission coefficients to assess direct and indirect pollutant emissions for several major industries in Jefferson County. The major findings of the study are: (a) the principal emissions by the selected industries are volatile organic compounds (VOC), which contribute to the ambient ozone concentration; (b) the combined direct and indirect emissions are significantly higher than the direct emissions alone for some industries, indicating that an isolated analysis will underestimate an industry's emissions; (c) while industries with low emission coefficients may appear preferable, they may also emit the most hazardous chemicals. The study is limited by its assumptions and by data availability; however, it provides a useful analytical tool for estimating direct and cumulative emissions and generates insights into the complexity of industry choice.
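
    The direct-versus-total comparison in finding (b) follows from the Leontief structure of an extended I-O model. The Python sketch below uses purely hypothetical coefficient matrices and demand figures (not data from the Jefferson County study) to show how indirect inter-industry requirements inflate the emissions attributable to a given final demand.

```python
import numpy as np

# Hypothetical 3-sector technical coefficients matrix A (inter-industry requirements)
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])

# Hypothetical direct emission coefficients e (kg VOC per $ of output)
e = np.array([0.4, 1.2, 0.1])

# Hypothetical final demand vector y ($ million delivered to final users)
y = np.array([100.0, 50.0, 80.0])

# Leontief inverse: total (direct + indirect) output needed per unit of final demand
L = np.linalg.inv(np.eye(3) - A)

direct_emissions = e * y          # emissions if only each sector's own output is counted
total_emissions = e @ (L @ y)     # emissions including indirect inter-industry output

print("direct only:", direct_emissions.sum())
print("direct + indirect:", total_emissions)
```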

  2. Science, information, technology, and the changing character of public policy in non-point source pollution

    Science.gov (United States)

    King, John L.; Corwin, Dennis L.

    Information technologies are already delivering important new capabilities for scientists working on non-point source (NPS) pollution in the vadose zone, and more are expected. This paper focuses on the special contributions of modeling and network communications for enhancing the effectiveness of scientists in the realm of policy debates regarding NPS pollution mitigation and abatement. The discussion examines a fundamental shift from a strict regulatory strategy of pollution control characterized by a bureaucratic/technical alliance during the period through the 1970s and early 1980s, to a more recently evolving paradigm of pluralistic environmental management. The role of science and scientists in this shift is explored, with special attention to the challenges facing scientists working in NPS pollution in the vadose zone. These scientists labor under a special handicap in the evolving model because their scientific tools are oftentimes incapable of linking NPS pollution with the individuals responsible for causing it. Information can facilitate the effectiveness of these scientists in policy debates, but not under the usual assumptions in which scientific truth prevails. Instead, information technology's key role is in helping scientists shape the evolving discussion of trade-offs and in bringing citizens and policymakers closer to the routine work of scientists.

  3. Using a dynamic point-source percolation model to simulate bubble growth

    International Nuclear Information System (INIS)

    Zimmerman, Jonathan A.; Zeigler, David A.; Cowgill, Donald F.

    2004-01-01

    Accurate modeling of nucleation, growth and clustering of helium bubbles within metal tritide alloys is of high scientific and technological importance. Of interest is the ability to predict both the distribution of these bubbles and the manner in which these bubbles interact at a critical concentration of helium-to-metal atoms to produce an accelerated release of helium gas. One technique that has been used in the past to model these materials, and is revisited again in this research, is percolation theory. Previous efforts have used classical percolation theory to qualitatively and quantitatively model the behavior of interstitial helium atoms in a metal tritide lattice; however, higher fidelity models are needed to predict the distribution of helium bubbles and include features that capture the underlying physical mechanisms present in these materials. In this work, we enhance classical percolation theory by developing the dynamic point-source percolation model. This model alters the traditionally binary character of site occupation probabilities by enabling them to vary depending on proximity to existing occupied sites, i.e. nucleated bubbles. This revised model produces characteristics for one- and two-dimensional systems that compare closely with measurements from three-dimensional physical samples. Future directions for continued development of the dynamic model are also outlined.
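
    A toy illustration of the proximity-dependent occupation idea is sketched below in Python. The specific rule (a base nucleation probability that is boosted within one lattice site of an already occupied site) and all parameter values are assumptions made for illustration only; they are not the calibrated model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 50, 200
p_base, p_near, radius = 0.01, 0.15, 1     # assumed probabilities and interaction radius

occupied = np.zeros((N, N), dtype=bool)

def near_occupied(grid, r):
    """True where any occupied site lies within Chebyshev distance r (periodic lattice)."""
    out = np.zeros_like(grid)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            out |= np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
    return out

for _ in range(steps):
    boosted = near_occupied(occupied, radius)
    p = np.where(boosted, p_near, p_base)    # occupation probability depends on proximity
    occupied |= rng.random((N, N)) < p       # nucleate / grow "bubbles"

print("occupied fraction after", steps, "steps:", occupied.mean())
```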

  4. INTERSECTION DETECTION BASED ON QUALITATIVE SPATIAL REASONING ON STOPPING POINT CLUSTERS

    Directory of Open Access Journals (Sweden)

    S. Zourlidou

    2016-06-01

    Full Text Available The purpose of this research is to propose and test a method for detecting intersections by analysing collectively acquired trajectories of moving vehicles. Instead of relying solely on the geometric features of the trajectories, such as heading changes, which may indicate turning points and consequently intersections, we extract semantic features of the trajectories in the form of sequences of stops and moves. Under this spatiotemporal prism, the extracted semantic information, which indicates where vehicles stop, can reveal important locations, such as junctions. The advantage of the proposed approach in comparison with existing turning-point-oriented approaches is that it can detect intersections even when not all the crossing road segments are sampled and therefore no turning points are observed in the trajectories. The challenge with this approach is that, first, not all vehicles stop at the same location – the stop location is therefore blurred along the direction of the road – and, second, nearby junctions can consequently induce similar stop locations. As a first step, density-based clustering is applied on the layer of stop observations and clusters of stop events are found. Representative points of the clusters are determined (one per cluster), and in a final step the existence of an intersection is decided based on spatial relational cluster reasoning, with which less informative geospatial clusters, in terms of whether a junction exists and where its centre lies, are transformed into more informative ones. Relational reasoning criteria, based on the relative orientation of the clusters with respect to their adjacent ones, are discussed for making sense of the relation that connects them, and finally for forming groups of stop events that belong to the same junction.
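
    The first step described above, density-based clustering of stop events followed by extraction of one representative point per cluster, can be sketched as follows. The use of scikit-learn's DBSCAN, the eps/min_samples values and the synthetic stop points are illustrative assumptions; the subsequent relational-reasoning stage of the paper is not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic stop events (x, y) in metres: two junctions plus scattered noise
rng = np.random.default_rng(1)
junction_a = rng.normal([0, 0], 5, size=(40, 2))
junction_b = rng.normal([120, 30], 5, size=(35, 2))
noise = rng.uniform(-50, 200, size=(10, 2))
stop_points = np.vstack([junction_a, junction_b, noise])

# Density-based clustering of the stop-observation layer
labels = DBSCAN(eps=10.0, min_samples=5).fit_predict(stop_points)

# One representative point (centroid) per cluster, ignoring noise (label == -1)
for lab in sorted(set(labels) - {-1}):
    centroid = stop_points[labels == lab].mean(axis=0)
    print(f"cluster {lab}: representative point = {centroid.round(1)}")
```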

  5. People Detection Based on Spatial Mapping of Friendliness and Floor Boundary Points for a Mobile Navigation Robot

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Tasaki

    2011-01-01

    Full Text Available Navigation robots must single out partners requiring navigation and must move in cluttered environments where people walk around. Developing such robots requires two different kinds of people detection: detecting partners and detecting all moving people around the robot. For detecting partners, we design divided spaces based on the spatial relationships and sensing ranges. By mapping the friendliness of each divided space, based on the stimuli from the multiple sensors, to detect people actively calling the robot, the robot detects partners in the space with the highest friendliness. For detecting moving people, we regard objects' floor boundary points in an omnidirectional image as obstacles. We classify obstacles as moving people by comparing the movement of each point with the robot's movement using odometry data, dynamically changing the detection thresholds. Our robot detected 95.0% of partners while standing by and interacting with people, and detected 85.0% of moving people while moving, a rate four times higher than that of previous methods.

  6. Potentiometric end point detection in the EDTA titrimetric determination of gallium

    International Nuclear Information System (INIS)

    Gopinath, N.; Renuka, M.; Aggarwal, S.K.

    2001-01-01

    Gallium is titrated in the presence of a known amount of Fe(III) with EDTA in HNO3 solution at pH 2 to 3. The end point is detected potentiometrically employing a bright platinum wire - saturated calomel (SCE) reference electrode system, the redox couple being Fe(III)/Fe(II). Since Fe(III) is also titrated by EDTA, it is subtracted from the titre value to get the EDTA equivalent to gallium only. A precision and accuracy of 0.2 to 0.4% were obtained in the results for gallium in the range of 8 to 2 mg. (author)

  7. CANDU pressure tube leak detection by annulus gas dew point measurement. A critical review

    Energy Technology Data Exchange (ETDEWEB)

    Greening, F.R. [CTS-NA, Tiverton, ON (Canada)

    2017-03-15

    In the event of a pressure tube leak from a small through-wall crack during CANDU reactor operations, there is a regulatory requirement - referred to as Leak Before Break (LBB) - for the licensee to demonstrate that there will be sufficient time for the leak to be detected and the reactor shut down before the crack grows to the critical size for fast-uncontrolled rupture. In all currently operating CANDU reactors, worldwide, this LBB requirement is met via continuous dew point measurements of the CO2 gas circulating in the reactor's Annulus Gas System (AGS). In this paper the historical development and current status of this leak detection capability is reviewed and the use of moisture injection tests as a verification procedure is critiqued. It is concluded that these tests do not represent AGS conditions that are to be expected in the event of a real pressure tube leak.

  8. Automatic Detection and Positioning of Ground Control Points Using TerraSAR-X Multiaspect Acquisitions

    Science.gov (United States)

    Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiaoxiang

    2018-05-01

    Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high-precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for the automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete work-flow for the automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high resolution spotlight images over the city of Oulu, Finland, and a test site in Berlin, Germany.

  9. CANDU pressure tube leak detection by annulus gas dew point measurement. A critical review

    International Nuclear Information System (INIS)

    Greening, F.R.

    2017-01-01

    In the event of a pressure tube leak from a small through-wall crack during CANDU reactor operations, there is a regulatory requirement - referred to as Leak Before Break (LBB) - for the licensee to demonstrate that there will be sufficient time for the leak to be detected and the reactor shut down before the crack grows to the critical size for fast-uncontrolled rupture. In all currently operating CANDU reactors, worldwide, this LBB requirement is met via continuous dew point measurements of the CO_2 gas circulating in the reactor's Annulus Gas System (AGS). In this paper the historical development and current status of this leak detection capability is reviewed and the use of moisture injection tests as a verification procedure is critiqued. It is concluded that these tests do not represent AGS conditions that are to be expected in the event of a real pressure tube leak.

  10. Towards Detection and Diagnosis of Ebola Virus Disease at Point-of-Care

    Science.gov (United States)

    Kaushik, Ajeet; Tiwari, Sneham; Jayant, Rahul Dev; Marty, Aileen; Nair, Madhavan

    2015-01-01

    The 2014 Ebola outbreak (mainly related to the Zaire strain of Ebola virus) has been declared the most widespread and deadly persistent epidemic, owing to the unavailability of rapid diagnostics, detection, and therapeutics. Ebola virus disease (EVD), a severe viral hemorrhagic fever syndrome caused by Ebola virus (EBOV), is transmitted by direct contact with the body fluids of an infected person and with objects contaminated with the virus or infected animals. The World Health Organization (WHO) has declared the EVD epidemic a public health emergency of international concern with a severe global economic burden. At the fatal EBOV infection stage, patients usually die before the antibody response. Currently, rapid blood tests to diagnose EBOV infection include antigen or antibody capture using ELISA and RNA detection using RT/Q-PCR within 3–10 days after the onset of symptoms. Moreover, a few nanotechnology-based colorimetric and paper-based immunoassay methods have recently been reported to detect Ebola virus. Unfortunately, these methods are limited to the laboratory only. As state-of-the-art (SoA) diagnostic times to confirm Ebola infection vary from 6 hours to about 3 days, therapeutic approaches are delayed. Thus developing a cost-effective, rapid, sensitive, and selective sensor to detect EVD at point-of-care (POC) is certainly worth exploring to establish rapid diagnostics to decide on therapeutics. This review highlights the SoA of Ebola diagnostics and also calls for the development of rapid, selective and sensitive POC detection of EBOV for global health care. We propose that adopting miniaturized electrochemical EBOV immunosensing can detect virus levels at pM concentrations within ~40 minutes, compared to 3 days for an ELISA test at nM levels. PMID:26319169

  11. Five-Level Z-Source Neutral Point-Clamped Inverter

    DEFF Research Database (Denmark)

    Gao, F.; Loh, P.C.; Blaabjerg, Frede

    2007-01-01

    This paper proposes a five-level Z-source neutral-point-clamped (NPC) inverter with two Z-source networks functioning as intermediate energy storages coupled between the dc sources and the NPC inverter circuitry. Analyzing the operational principles of the Z-source network with a partial dc-link shoot-through scheme reveals the hidden theories in the five-level Z-source NPC inverter, unlike the operational principle that appears in the general two-level Z-source inverter, so that the five-level Z-source NPC inverter can be designed with the modulation of carrier-based phase disposition (PD) or alternative phase...

  12. Event-based motion correction for PET transmission measurements with a rotating point source

    International Nuclear Information System (INIS)

    Zhou, Victor W; Kyme, Andre Z; Meikle, Steven R; Fulton, Roger

    2011-01-01

    Accurate attenuation correction is important for quantitative positron emission tomography (PET) studies. When performing transmission measurements using an external rotating radioactive source, object motion during the transmission scan can distort the attenuation correction factors computed as the ratio of the blank to transmission counts, and cause errors and artefacts in reconstructed PET images. In this paper we report a compensation method for rigid body motion during PET transmission measurements, in which list mode transmission data are motion corrected event-by-event, based on known motion, to ensure that all events which traverse the same path through the object are recorded on a common line of response (LOR). As a result, the motion-corrected transmission LOR may record a combination of events originally detected on different LORs. To ensure that the corresponding blank LOR records events from the same combination of contributing LORs, the list mode blank data are spatially transformed event-by-event based on the same motion information. The number of counts recorded on the resulting blank LOR is then equivalent to the number of counts that would have been recorded on the corresponding motion-corrected transmission LOR in the absence of any attenuating object. The proposed method has been verified in phantom studies with both stepwise movements and continuous motion. We found that attenuation maps derived from motion-corrected transmission and blank data agree well with those of the stationary phantom and are significantly better than uncorrected attenuation data.
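
    A minimal sketch of the event-by-event idea, assuming the known motion at each event's time stamp is available as a rigid rotation R and translation t: applying the inverse motion to both detection points expresses the line of response in the object's reference frame, so that events traversing the same path through the object fall on a common LOR. The data layout and numbers are hypothetical, not the authors' list-mode implementation.

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the scanner axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def correct_lor(p1, p2, R, t):
    """Map both LOR endpoints into the object's reference frame.

    If the object moved by (R, t), applying the inverse motion to the
    detector coordinates expresses the event in the object frame.
    """
    Rinv = R.T
    return Rinv @ (p1 - t), Rinv @ (p2 - t)

# One list-mode event: endpoints of the line of response (mm), plus the known motion
p1 = np.array([300.0, 10.0, 0.0])
p2 = np.array([-300.0, -25.0, 0.0])
R = rotation_z(np.deg2rad(3.0))       # object rotated by 3 degrees at this time stamp
t = np.array([2.0, -1.0, 0.0])        # and translated by 2 mm / -1 mm

q1, q2 = correct_lor(p1, p2, R, t)
print("corrected LOR endpoints:", q1.round(2), q2.round(2))
```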

  13. Optimal Matched Filter in the Low-number Count Poisson Noise Regime and Implications for X-Ray Source Detection

    Science.gov (United States)

    Ofek, Eran O.; Zackay, Barak

    2018-04-01

    Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
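
    A minimal sketch of a Neyman–Pearson log-likelihood-ratio score for a known PSF template in Poisson noise is given below, assuming a flat background B and a trial source flux F (both hypothetical values). Up to a position-independent term, the score map is the cross-correlation of the counts image with ln(1 + F·P/B); this follows the spirit of the statistic described above but is not the authors' full implementation (no background estimation or PSF variation).

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)

# Hypothetical 2D PSF (Gaussian, normalized to unit sum) and flat Poisson background
x = np.arange(-7, 8)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * 1.5**2))
psf /= psf.sum()

B, F = 0.3, 20.0                               # background counts/pixel, trial source flux
image = rng.poisson(B, size=(128, 128)).astype(float)
image[64 - 7:64 + 8, 40 - 7:40 + 8] += rng.poisson(F * psf)   # inject one faint source

# Score map: S(x0) = sum_i n_i * ln(1 + F * P_{i-x0} / B)  (constant term omitted)
kernel = np.log(1.0 + F * psf / B)
score = fftconvolve(image, kernel[::-1, ::-1], mode="same")   # cross-correlation

peak = np.unravel_index(np.argmax(score), score.shape)
print("brightest score at pixel", peak)
```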

  14. High-sensitivity detection of cardiac troponin I with UV LED excitation for use in point-of-care immunoassay.

    Science.gov (United States)

    Rodenko, Olga; Eriksson, Susann; Tidemand-Lichtenberg, Peter; Troldborg, Carl Peder; Fodgaard, Henrik; van Os, Sylvana; Pedersen, Christian

    2017-08-01

    High-sensitivity cardiac troponin assay development enables determination of biological variation in healthy populations, more accurate interpretation of clinical results, and points towards earlier diagnosis and rule-out of acute myocardial infarction. In this paper, we report on preliminary tests of an immunoassay analyzer employing optimized LED excitation to measure a standard troponin I assay and a novel research high-sensitivity troponin I assay. The limit of detection is improved by a factor of 5 for the standard troponin I assay and by a factor of 3 for the research high-sensitivity troponin I assay, compared to flash lamp excitation. The obtained limit of detection was 0.22 ng/L measured on plasma with the research high-sensitivity troponin I assay and 1.9 ng/L measured on tris-saline-azide buffer containing bovine serum albumin with the standard troponin I assay. We discuss the optimization of time-resolved detection of lanthanide fluorescence based on the time constants of the system and analyze the background and noise sources in a heterogeneous fluoroimmunoassay. We determine the limiting factors and their impact on the measurement performance. The suggested model can be generally applied to fluoroimmunoassays employing the dry-cup concept.

  15. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Directory of Open Access Journals (Sweden)

    Taisuke Murata

    2016-10-01

    Full Text Available Abstract Background The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on the PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with a different detector design. Methods We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D) positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), the ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROI) for each reconstruction were quantified. Results The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50 % lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV in all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces dependence of the spatial resolution on the
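
    The FWHM values discussed above are obtained from profiles through the reconstructed point source. A minimal sketch of such an estimate, using linear interpolation at the half-maximum crossings of a one-dimensional profile, is given below; the NEMA prescription is more detailed, and the Gaussian profile here is synthetic.

```python
import numpy as np

def fwhm(x, profile):
    """FWHM of a single-peaked profile by linear interpolation at half maximum."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i_left, i_right = above[0], above[-1]
    # interpolate the half-maximum crossing on each side of the peak
    xl = np.interp(half, [profile[i_left - 1], profile[i_left]],
                   [x[i_left - 1], x[i_left]])
    xr = np.interp(half, [profile[i_right + 1], profile[i_right]],
                   [x[i_right + 1], x[i_right]])
    return xr - xl

x = np.linspace(-20, 20, 201)                   # mm
profile = np.exp(-x**2 / (2 * 2.5**2))          # synthetic point-source profile, sigma = 2.5 mm
print("FWHM =", round(fwhm(x, profile), 2), "mm (analytic value", round(2.355 * 2.5, 2), "mm)")
```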

  16. Exact analytical solution of time-independent neutron transport equation, and its applications to systems with a point source

    International Nuclear Information System (INIS)

    Mikata, Y.

    2014-01-01

    Highlights: • An exact solution for the one-speed neutron transport equation is obtained. • This solution as well as its derivation are believed to be new. • Neutron flux for a purely absorbing material with a point neutron source off the origin is obtained. • Spherically as well as cylindrically piecewise constant cross sections are studied. • Neutron flux expressions for a point neutron source off the origin are believed to be new. - Abstract: An exact analytical solution of the time-independent monoenergetic neutron transport equation is obtained in this paper. The solution is applied to systems with a point source. Systematic analysis of the solution of the time-independent neutron transport equation, and its applications represent the primary goal of this paper. To the best of the author’s knowledge, certain key results on the scalar neutron flux as well as their derivations are new. As an application of these results, a scalar neutron flux for a purely absorbing medium with a spherically piecewise constant cross section and an isotropic point neutron source off the origin as well as that for a cylindrically piecewise constant cross section with a point neutron source off the origin are obtained. Both of these results are believed to be new
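
    For reference, in the simplest related configuration – an infinite, purely absorbing medium of constant total cross section Σt with an isotropic point source of strength S at r0 – the scalar flux takes the familiar closed form below; the paper's results for piecewise-constant cross sections and off-origin sources generalize this expression.

```latex
\phi(\mathbf{r}) \;=\; \frac{S\, e^{-\Sigma_t\, \lvert \mathbf{r}-\mathbf{r}_0 \rvert}}{4\pi\, \lvert \mathbf{r}-\mathbf{r}_0 \rvert^{2}}
```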

  17. The Treatment Train approach to reducing non-point source pollution from agriculture

    Science.gov (United States)

    Barber, N.; Reaney, S. M.; Barker, P. A.; Benskin, C.; Burke, S.; Cleasby, W.; Haygarth, P.; Jonczyk, J. C.; Owen, G. J.; Snell, M. A.; Surridge, B.; Quinn, P. F.

    2016-12-01

    An experimental approach has been applied to an agricultural catchment in NW England, where non-point pollution adversely affects freshwater ecology. The aim of the work (as part of the River Eden Demonstration Test Catchment project) is to develop techniques to manage agricultural runoff whilst maintaining food production. The approach used is the Treatment Train (TT), which applies multiple connected mitigation options that control nutrient and fine sediment pollution at source, and address polluted runoff pathways at increasing spatial scale. The principal agricultural practices in the study sub-catchment (1.5 km2) are dairy and stock production. Farm yards can act as significant pollution sources by housing large numbers of animals; these areas are addressed initially with infrastructure improvements e.g. clean/dirty water separation and upgraded waste storage. In-stream high resolution monitoring of hydrology and water quality parameters showed high-discharge events to account for the majority of pollutant exports ( 80% total phosphorus; 95% fine sediment), and primary transfer routes to be surface and shallow sub-surface flow pathways, including drains. To manage these pathways and reduce hydrological connectivity, a series of mitigation features were constructed to intercept and temporarily store runoff. Farm tracks, field drains, first order ditches and overland flow pathways were all targeted. The efficacy of the mitigation features has been monitored at event and annual scale, using inflow-outflow sampling and sediment/nutrient accumulation measurements, respectively. Data presented here show varied but positive results in terms of reducing acute and chronic sediment and nutrient losses. An aerial fly-through of the catchment is used to demonstrate how the TT has been applied to a fully-functioning agricultural landscape. The elevated perspective provides a better understanding of the spatial arrangement of mitigation features, and how they can be

  18. Statistical methods for change-point detection in surface temperature records

    Science.gov (United States)

    Pintar, A. L.; Possolo, A.; Zhang, N. F.

    2013-09-01

    We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
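
    Of the methods listed, statistical process control is the simplest to sketch. The Python example below applies a two-sided CUSUM to a synthetic, deseasonalized temperature anomaly series with an artificial 0.8 °C shift; the slack and threshold values, the baseline length and the series itself are illustrative assumptions, and the wavelet, adaptive-weights and Bayesian procedures of the paper are not reproduced.

```python
import numpy as np

def cusum(series, n_baseline, k=0.5, h=5.0):
    """Two-sided CUSUM against a baseline period.

    k is the slack and h the decision threshold, both expressed in baseline
    standard deviations; returns the index of the first alarm, or None.
    """
    mu = series[:n_baseline].mean()
    sd = series[:n_baseline].std(ddof=1)
    z = (series - mu) / sd
    s_hi = s_lo = 0.0
    for i, zi in enumerate(z):
        s_hi = max(0.0, s_hi + zi - k)    # accumulates upward shifts
        s_lo = max(0.0, s_lo - zi - k)    # accumulates downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None

rng = np.random.default_rng(3)
anomaly = np.concatenate([rng.normal(0.0, 0.5, 120),    # homogeneous segment
                          rng.normal(0.8, 0.5, 120)])   # +0.8 degC shift (e.g. a station move)
print("first alarm at month index:", cusum(anomaly, n_baseline=60))
```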

  19. Tree detection in urban regions from aerial imagery and DSM based on local maxima points

    Science.gov (United States)

    Korkmaz, Özgür; Yardımcı Çetin, Yasemin; Yilmaz, Erdal

    2017-05-01

    In this study, we propose an automatic approach for tree detection and classification in registered 3-band aerial images and associated digital surface models (DSM). The tree detection results can be used in 3D city modelling and urban planning. The problem is magnified when trees are in close proximity to each other or to other objects, such as rooftops, in the scenes. This study presents a method for locating individual trees and estimating crown size based on local maxima from the DSM, accompanied by color and texture information. For this purpose, a segment-level classifier is trained for 10 classes, and the classification results are improved by analyzing the class probabilities of neighbouring segments. Later, the tree classes under a certain height were eliminated using the Digital Terrain Model (DTM). For the tree classes, local maxima points are obtained and the tree radius estimate is made from the vertical and horizontal height profiles passing through these points. The final tree list, containing the centers and radii of the trees, is obtained by selecting from the list of tree candidates according to the overlapping and selection parameters. Although a limited number of training sets was used in this study, the tree classification and localization results are competitive.

  20. Watershed-based point sources permitting strategy and dynamic permit-trading analysis.

    Science.gov (United States)

    Ning, Shu-Kuang; Chang, Ni-Bin

    2007-09-01

    Permit-trading policy in a total maximum daily load (TMDL) program may provide an additional avenue to produce environmental benefit, which closely approximates what would be achieved through a command and control approach, with relatively lower costs. One of the important considerations that might affect the effective trading mechanism is to determine the dynamic transaction prices and trading ratios in response to seasonal changes of assimilative capacity in the river. Advanced studies associated with multi-temporal spatially varied trading ratios among point sources to manage water pollution hold considerable potential for industries and policy makers alike. This paper aims to present an integrated simulation and optimization analysis for generating spatially varied trading ratios and evaluating seasonal transaction prices accordingly. It is designed to configure a permit-trading structure basin-wide and provide decision makers with a wealth of cost-effective, technology-oriented, risk-informed, and community-based management strategies. The case study, seamlessly integrating a QUAL2E simulation model with an optimal waste load allocation (WLA) scheme in a designated TMDL study area, helps understand the complexity of varying environmental resources values over space and time. The pollutants of concern in this region, which are eligible for trading, mainly include both biochemical oxygen demand (BOD) and ammonia-nitrogen (NH3-N). The problem solution, as a consequence, suggests an array of waste load reduction targets in a well-defined WLA scheme and exhibits a dynamic permit-trading framework among different sub-watersheds in the study area. Research findings gained in this paper may extend to any transferable dynamic-discharge permit (TDDP) program worldwide.
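
    The optimization side of such a waste load allocation (WLA) scheme can be sketched as a small linear program: choose removal fractions for each point source at minimum cost, subject to an assimilative-capacity cap at a critical reach. All numbers, the linear cost model and the fixed transfer coefficients below are hypothetical; in the study the capacities and trading ratios come from QUAL2E simulations and vary seasonally.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for 3 dischargers
load = np.array([800.0, 500.0, 300.0])    # raw BOD load (kg/day)
transfer = np.array([0.9, 0.7, 0.5])      # fraction of each load reaching the critical reach
cost = np.array([120.0, 80.0, 60.0])      # cost per unit removal fraction ($1000)
capacity = 600.0                          # allowable load at the critical reach (kg/day)

# Decision variables: removal fraction x_j in [0, 1] at each discharger.
# Constraint: sum_j transfer_j * load_j * (1 - x_j) <= capacity
A_ub = [(-transfer * load).tolist()]
b_ub = [capacity - float(transfer @ load)]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * 3, method="highs")
print("removal fractions:", np.round(res.x, 3), " total cost:", round(res.fun, 1))
```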

  1. Reduction of non-point source contaminants associated with road-deposited sediments by sweeping.

    Science.gov (United States)

    Kim, Do-Gun; Kang, Hee-Man; Ko, Seok-Oh

    2017-09-19

    Road-deposited sediments (RDS) on an expressway, residual RDS collected after sweeping, and RDS removed by means of sweeping were analyzed to evaluate the degree to which sweeping removed various non-point source contaminants. The total RDS load was 393.1 ± 80.3 kg/km and the RDS, residual RDS, and swept RDS were all highly polluted with organics, nutrients, and metals. Among the metals studied, Cu, Zn, Pb, Ni, Ca, and Fe were significantly enriched, and most of the contaminants were associated with particles within the size range from 63 μm to 2 mm. Sweeping reduced RDS and its associated contaminants by 33.3-49.1% on average. We also measured the biological oxygen demand (BOD) of RDS in the present work, representing to our knowledge the first time that this has been done; we found that RDS contains a significant amount of biodegradable organics and that the reduction of BOD by sweeping was higher than that of other contaminants. Significant correlations were found between the contaminants measured, indicating that the organics and the metals originated from both exhaust and non-exhaust particles. Meanwhile, the concentrations of Cu and Ni were higher in 63 μm-2 mm particles than in smaller particles, suggesting that some metals in RDS likely exist intrinsically in particles, rather than only as adsorbates on particle surfaces. Overall, the results in this study showed that sweeping to collect RDS can be a good alternative for reduction of contaminants in runoff.

  2. Interpolating precipitation and its relation to runoff and non-point source pollution.

    Science.gov (United States)

    Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

    2005-01-01

    When rainfall spatially varies, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen Polygons method, the traditional inverse distance method, and the modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also differences between the elevations of the region with no rainfall records and of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method would be quite close to the actual precipitation. When rainfall is heavy in locations with high elevation, the rainfall changes with the elevation. In this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the relationship between the relative error of the predicted runoff and predicted pollutant loading of SS is high. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and the predicted pollutant concentration of SS may be unstable.
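
    A sketch contrasting the traditional inverse distance method with an elevation-aware variant is given below. The abstract does not state how the modified method combines horizontal distance and elevation difference, so the "effective distance" used here (horizontal distance inflated by a scaled elevation difference) and all station values are illustrative assumptions only.

```python
import numpy as np

def idw(target_xy, target_z, stations, power=2.0, alpha=0.0):
    """Inverse-distance-weighted rainfall estimate.

    stations: array of rows (x, y, elevation, rainfall).
    alpha scales how strongly the elevation difference inflates the distance;
    alpha = 0 recovers the traditional horizontal-distance IDW.
    """
    xy, elev, rain = stations[:, :2], stations[:, 2], stations[:, 3]
    d_h = np.linalg.norm(xy - target_xy, axis=1)
    d_eff = np.sqrt(d_h**2 + (alpha * (elev - target_z))**2)   # assumed combination
    w = 1.0 / d_eff**power
    return float(np.sum(w * rain) / np.sum(w))

stations = np.array([
    # x (km), y (km), elevation (m), rainfall (mm)
    [0.0, 0.0, 150.0, 12.0],
    [8.0, 2.0, 600.0, 30.0],
    [3.0, 9.0, 300.0, 18.0],
])

target_xy, target_z = np.array([4.0, 4.0]), 450.0
print("traditional IDW :", round(idw(target_xy, target_z, stations), 1), "mm")
print("elevation-aware :", round(idw(target_xy, target_z, stations, alpha=0.02), 1), "mm")
```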

  3. Relationship Between Non-Point Source Pollution and Korean Green Factor

    Directory of Open Access Journals (Sweden)

    Seung Chul Lee

    2015-01-01

    Full Text Available In determining the relationship between the rational event mean concentration (REMC), which is a volume-weighted mean of event mean concentrations (EMCs), as a non-point source (NPS) pollution indicator and the green factor (GF) as a low impact development (LID) land use planning indicator, we constructed a runoff database containing 1483 rainfall events collected from 107 different experimental catchments from 19 references in Korea. The collected data showed that EMCs were not correlated with storm factors, whereas they showed significant differences according to land use type. The calculated REMCs for BOD, COD, TSS, TN, and TP showed negative correlations with the GFs. However, even though the GFs of the agricultural areas were concentrated around values of 80, like those of the green areas, the REMCs for TSS, TN, and TP were especially high. There were few differences in REMC runoff characteristics according to the GFs for areas such as recreational facility areas in suburbs and for highways and trunk roads connecting major cities. Except for those areas, the REMCs for BOD and COD were significantly related to the GFs. The REMCs for BOD and COD decreased as the rate of natural green area increased. On the other hand, some of the REMCs for TSS, TN, and TP were still high where the catchments encountered mixed land use patterns, especially public facility areas with bare ground and artificial grassland areas. The GF could therefore be used as a major planning indicator when establishing land use planning aimed at sustainable development with NPS management in urban areas, if the weighted GF values are improved.
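
    The two indicators named above can be written down compactly: the EMC of one storm is the event's total pollutant load divided by its total runoff volume, and the REMC is the runoff-volume-weighted mean of the EMCs of several events. The Python sketch below uses synthetic concentration and flow series; it is only a restatement of those definitions, not the study's processing chain.

```python
import numpy as np

def emc(concentration_mg_l, flow_m3_s, dt_s):
    """Event mean concentration: total load divided by total runoff volume (mg/L)."""
    load = np.sum(concentration_mg_l * flow_m3_s * dt_s)   # proportional to total mass
    volume = np.sum(flow_m3_s * dt_s)                      # total runoff volume
    return load / volume                                   # volume units cancel -> mg/L

# Two synthetic storm events sampled every 10 minutes (concentration mg/L, flow m3/s)
dt = 600.0
c1, q1 = np.array([40.0, 80.0, 55.0, 20.0]), np.array([0.1, 0.6, 0.4, 0.1])
c2, q2 = np.array([25.0, 35.0, 15.0]), np.array([0.2, 0.3, 0.1])

emcs = np.array([emc(c1, q1, dt), emc(c2, q2, dt)])
volumes = np.array([q1.sum() * dt, q2.sum() * dt])

# REMC: runoff-volume-weighted mean of the event EMCs
remc = np.sum(emcs * volumes) / np.sum(volumes)
print("EMCs:", emcs.round(1), "mg/L   REMC:", round(float(remc), 1), "mg/L")
```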

  4. Experimental properties of gluon and quark jets from a point source

    International Nuclear Information System (INIS)

    Abbiendi, G.; Ackerstaff, K.; Alexander, G.

    1999-01-01

    Gluon jets are identified in hadronic Z0 decays as all the particles in a hemisphere opposite to a hemisphere containing two tagged quark jets. Gluon jets defined in this manner are equivalent to gluon jets produced from a color singlet point source and thus correspond to the definition employed for most theoretical calculations. In a separate stage of the analysis, we select quark jets in a manner to correspond to calculations, as the particles in hemispheres of flavor tagged light quark (uds) events. We present the distributions of rapidity, scaled energy, the logarithm of the momentum, and transverse momentum with respect to the jet axes, for charged particles in these gluon and quark jets. We also examine the charged particle multiplicity distributions of the jets in restricted intervals of rapidity. For soft particles at large pT, we observe the charged particle multiplicity ratio of gluon to quark jets to be 2.29±0.09(stat.)±0.15(syst.), in agreement with the prediction that this ratio should approximately equal the ratio of QCD color factors, CA/CF = 2.25. The intervals used to define soft particles and large pT for this result, pT < 3.0 GeV/c, are motivated by the predictions of the Herwig Monte Carlo multihadronic event generator. Additionally, our gluon jet data allow a sensitive test of the phenomenon of non-leading QCD terms known as color reconnection. We test the model of color reconnection implemented in the Ariadne Monte Carlo multihadronic event generator and find it to be disfavored by our data. (orig.)

  5. Stochastic Management of Non-Point Source Contamination: Joint Impact of Aquifer Heterogeneity and Well Characteristics

    Science.gov (United States)

    Henri, C. V.; Harter, T.

    2017-12-01

    Agricultural activities are recognized as the preeminent origin of non-point source (NPS) contamination of water bodies through the leakage of nitrate, salt and agrochemicals. A large fraction of world agricultural activities and therefore NPS contamination occurs over unconsolidated alluvial deposit basins offering soil composition and topography favorable to productive farming. These basins represent also important groundwater reservoirs. The over-exploitation of aquifers coupled with groundwater pollution by agriculture-related NPS contaminant has led to a rapid deterioration of the quality of these groundwater basins. The management of groundwater contamination from NPS is challenged by the inherent complexity of aquifers systems. Contaminant transport dynamics are highly uncertain due to the heterogeneity of hydraulic parameters controlling groundwater flow. Well characteristics are also key uncertain elements affecting pollutant transport and NPS management but quantifying uncertainty in NPS management under these conditions is not well documented. Our work focuses on better understanding the joint impact of aquifer heterogeneity and pumping well characteristics (extraction rate and depth) on (1) the transport of contaminants from NPS and (2) the spatio-temporal extension of the capture zone. To do so, we generate a series of geostatistically equivalent 3D heterogeneous aquifers and simulate the flow and non-reactive solute transport from NPS to extraction wells within a stochastic framework. The propagation of the uncertainty on the hydraulic conductivity field is systematically analyzed. A sensitivity analysis of the impact of extraction well characteristics (pumping rate and screen depth) is also conducted. Results highlight the significant role that heterogeneity and well characteristics plays on management metrics. We finally show that, in case of NPS contamination, the joint impact of regional longitudinal and transverse vertical hydraulic gradients and

  6. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    Science.gov (United States)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

    The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. In the present case study latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m². It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics is running via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the AOI affected was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM 2008) to the post-DEM. • Pre-DEM reprojection to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtraction of the pre-DEM from the post-DEM. • Filtering and threshold based classification of
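
    Once the pre- and post-event DEMs share a grid, projection and vertical datum, the core of the change detection reduces to a per-pixel height difference and a threshold. The generic numpy sketch below illustrates that final step only; it does not use the ENVITask API mentioned above, and the 2 m threshold and tiny synthetic grid are arbitrary assumptions.

```python
import numpy as np

def height_change(pre_dem, post_dem, threshold_m=2.0):
    """Per-pixel elevation difference and a simple change mask.

    Assumes both DEMs are already co-registered on the same grid and
    referenced to the same geoid, as in the workflow described above.
    """
    diff = post_dem - pre_dem
    valid = ~(np.isnan(pre_dem) | np.isnan(post_dem))
    change = valid & (np.abs(diff) > threshold_m)   # deposition or depletion
    return diff, change

# Synthetic 5 x 5 example with a 6 m elevation loss in one cell
pre = np.full((5, 5), 100.0)
post = pre.copy()
post[2, 2] -= 6.0

diff, change = height_change(pre, post)
print("changed cells:", np.argwhere(change).tolist(),
      "| max |dz| =", float(np.abs(diff).max()), "m")
```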

  7. Detection of extended galactic sources with an underwater neutrino telescope

    International Nuclear Information System (INIS)

    Leisos, A.; Tsirigotis, A. G.; Tzamarias, S. E.; Lenis, D.

    2014-01-01

    In this study we investigate the discovery capability of a Very Large Volume Neutrino Telescope for Galactic extended sources. We focus on the brightest HESS gamma-ray sources, which are also considered to be very high energy neutrino emitters. We use an unbinned method that takes into account both the spatial and the energy distribution of high energy neutrinos, and we investigate parts of the Galactic plane where nearby potential neutrino emitters form neutrino source clusters. Neutrino source clusters as well as isolated neutrino sources are combined to estimate the observation period needed for a 5 sigma discovery of neutrino signals from these objects.

  8. Observation of Point-Light-Walker Locomotion Induces Motor Resonance When Explicitly Represented; An EEG Source Analysis Study

    Directory of Open Access Journals (Sweden)

    Alberto Inuggi

    2018-03-01

    Full Text Available Understanding human motion, in order to infer the goal of others' actions, is thought to involve the observer's motor repertoire. One prominent class of actions, human locomotion, has been the object of several studies, all focused on manipulating the shape of degraded human figures such as point-light walker (PLW) stimuli, represented as walking on the spot. Nevertheless, since the main goal of the locomotor function is to displace the whole body from one position to another, these stimuli might not fully represent a goal-directed action and thus might not be able to induce the same motor resonance mechanism expected when observing natural locomotion. To explore this hypothesis, we recorded the event-related potentials (ERPs) associated with decoding canonical/scrambled and translating/centered PLWs. We identified a novel ERP component (N2c) over central electrodes, around 435 ms after stimulus onset, for translating compared to centered PLWs, only when the canonical shape was preserved. Consistent with our hypothesis, source analysis associated this component with the activation of trunk and lower-leg primary sensory-motor and supplementary motor areas. These results confirm the role of one's own motor repertoire in processing human action and suggest that ERPs can detect the associated motor resonance only when the human figure is explicitly involved in performing a meaningful action.

  9. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    Science.gov (United States)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens

    2010-05-01

    The point density varies considerably because of the various base points that were needed to cover the whole landslide. The resulting point spacing is approximately 20 cm. The achievable accuracy was about 10 cm. The airborne data was acquired with mean point densities of 2 points per square meter. The accuracy of this dataset was about 15 cm. The second testing site is an area of the Leithagebirge in Burgenland, Austria. The data was acquired by an airborne Riegl LMS-Q560 laser scanner mounted on a helicopter. The mean point density was 6-8 points per square meter with an accuracy better than 10 cm. We applied our processing chain to the datasets individually. First, they were transformed to local reference frames and fine adjustments of the individual scans and flight strips were applied. Subsequently, the local regression planes were determined for each point of the point clouds and planar features were extracted by means of the proposed approach. It turned out that even small displacements can be detected if the number of points used for the fit is enough to define a parallel but somewhat displaced plane. Smaller cracks and erosional incisions do not disturb the plane fitting, because they are mostly filtered out as outliers. A comparison of the different campaigns at the Doren site showed close agreement between the detected geomorphic structures. Although the geomorphic structure of the Leithagebirge differs from the Doren landslide, and the scales of the two studies were also different, reliable results were achieved in both cases. Additionally, the approach turned out to be highly robust against points which were not located on the terrain. Hence, no false positives were produced within the dense vegetation above the terrain, while it was possible to cover the investigated areas completely with reliable planes. In some cases, however, some structures in the tree crowns were also recognized, but these small patches could be very well sorted out from the geomorphically
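
    The local-regression-plane step can be sketched as a least-squares plane fit over a point's neighbourhood, with the residuals deciding whether the neighbourhood is accepted as planar (off-terrain returns and small cracks show up as large residuals or outliers). The SVD-based fit below uses a synthetic neighbourhood and an assumed planarity threshold; it is not the authors' full processing chain.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a neighbourhood of 3D points.

    Returns (centroid, unit normal, RMS of point-to-plane residuals).
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                            # direction of smallest variance
    residuals = (points - centroid) @ normal
    return centroid, normal, float(np.sqrt(np.mean(residuals**2)))

rng = np.random.default_rng(4)
# Synthetic neighbourhood: a gently dipping plane plus 2 cm of measurement noise
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0.0, 0.02, 200)
neighbourhood = np.column_stack([xy, z])

c, n, rms = fit_plane(neighbourhood)
print("normal:", n.round(3), " RMS residual (m):", round(rms, 3))
print("accepted as planar patch:", rms < 0.05)   # assumed planarity threshold
```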

  10. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Science.gov (United States)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect the rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on a local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum-bounding-box fitting technique, and is used to guide the refinement of shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined for the purpose of producing a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected

  11. Design of a Binocular Pupil and Gaze Point Detection System Utilizing High Definition Images

    Directory of Open Access Journals (Sweden)

    Yilmaz Durna

    2017-05-01

    Full Text Available This study proposes a novel binocular pupil and gaze detection system utilizing a remote full high definition (full HD) camera and employing LabVIEW. LabVIEW is inherently parallel and has fewer time-consuming algorithms. Many eye-tracker applications are monocular and use low-resolution cameras due to real-time image processing difficulties. We utilized the computer's direct memory access channel for rapid data transmission and processed full HD images with LabVIEW. Full HD images make it easier to determine the center coordinates and sizes of the pupil and corneal reflection. We modified the camera so that the camera sensor passed only infrared (IR) images. Glints were taken as reference points for region of interest (ROI) selection of the eye region in the face image. A morphologic filter was applied for erosion of noise, and a weighted average technique was used for center detection. To test system accuracy with 11 participants, we produced a visual stimulus set-up to analyze each eye's movement. A nonlinear mapping function was utilized for gaze estimation. Pupil size, pupil position, glint position and gaze point coordinates were obtained with free natural head movements in our system. The system works at 2046 × 1086 resolution at 40 frames per second, which corresponds to an estimated 280 frames per second for 640 × 480 pixel images. Experimental results show that the average gaze detection error for the 11 participants was 0.76° for the left eye, 0.89° for the right eye and 0.83° for the mean of the two eyes.
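
    The centre-detection step described above (morphological noise removal followed by a weighted average) can be sketched as follows. The dark-pupil threshold, the single erosion pass and the tiny synthetic IR patch are illustrative assumptions, not the authors' LabVIEW pipeline.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def pupil_center(ir_image, dark_threshold=40):
    """Weighted-average centre of the dark pupil region in an IR eye image."""
    mask = ir_image < dark_threshold                 # the pupil appears dark under IR
    mask = binary_erosion(mask, iterations=1)        # erode isolated noise pixels
    weights = (dark_threshold - ir_image) * mask     # darker pixels weigh more
    ys, xs = np.mgrid[0:ir_image.shape[0], 0:ir_image.shape[1]]
    total = weights.sum()
    return (float((xs * weights).sum() / total),
            float((ys * weights).sum() / total))

# Synthetic 60 x 80 eye patch: bright sclera (200) with a dark pupil disc around (x=45, y=25)
img = np.full((60, 80), 200.0)
yy, xx = np.mgrid[0:60, 0:80]
img[(yy - 25) ** 2 + (xx - 45) ** 2 < 10 ** 2] = 15.0

print("estimated pupil centre (x, y):", pupil_center(img))
```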

  12. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    Science.gov (United States)

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  13. Identifying and characterizing major emission point sources as a basis for geospatial distribution of mercury emissions inventories

    Science.gov (United States)

    Steenhuisen, Frits; Wilson, Simon J.

    2015-07-01

    Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work that was conducted to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially-distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geo-spatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point-sources are discussed. These include improvements to both national

  14. a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection

    Science.gov (United States)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs
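    Once two epochs are effectively co-registered, the change signal boils down to distances between the clouds. The sketch below uses plain nearest-neighbour (cloud-to-cloud) distances as a simplification of the normal distances computed in the paper; the synthetic clouds and the 20 cm uplift are invented for illustration.

        # Cloud-to-cloud distance between two co-registered epochs.
        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_cloud_distance(epoch_a, epoch_b):
            """For every point of epoch_b, distance to its closest point in epoch_a."""
            tree = cKDTree(epoch_a)
            d, _ = tree.query(epoch_b, k=1)
            return d

        rng = np.random.default_rng(1)
        epoch1 = rng.uniform(0, 50, (5000, 3))
        epoch2 = epoch1 + np.array([0.0, 0.0, 0.2])   # simulated 20 cm uplift
        d = cloud_to_cloud_distance(epoch1, epoch2)
        print(d.mean(), d.max())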

  15. Detecting change points in VIX and S&P 500: A new approach to dynamic asset allocation

    DEFF Research Database (Denmark)

    Nystrup, Peter; Hansen, Bo William; Madsen, Henrik

    2016-01-01

    to DAA that is based on detection of change points without fitting a model with a fixed number of regimes to the data, without estimating any parameters and without assuming a specific distribution of the data. It is examined whether DAA is most profitable when based on changes in the Chicago Board Options Exchange Volatility Index or change points detected in daily returns of the S&P 500 index. In an asset universe consisting of the S&P 500 index and cash, it is shown that a dynamic strategy based on detected change points significantly improves the Sharpe ratio and reduces the drawdown risk when...
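    The excerpt does not spell out which detector the authors use, so the sketch below is only a generic, parameter-light illustration of flagging change points in a return series, here a one-sided CUSUM on absolute returns to catch a volatility shift. The threshold, drift and baseline window are arbitrary assumptions.

        import numpy as np

        def one_sided_cusum(x, reference, threshold=8.0, drift=0.5):
            s, alarms = 0.0, []
            for t, v in enumerate(x):
                s = max(0.0, s + (v - reference) - drift)
                if s > threshold:
                    alarms.append(t)
                    s = 0.0              # restart after a detection
            return alarms

        rng = np.random.default_rng(2)
        returns = np.concatenate([rng.normal(0.0, 1.0, 500),     # calm regime
                                  rng.normal(0.0, 2.5, 250)])    # turbulent regime
        abs_returns = np.abs(returns)
        reference = abs_returns[:100].mean()                     # baseline volatility proxy
        print(one_sided_cusum(abs_returns, reference))           # alarms cluster after t = 500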

  16. Role of rural solid waste management in non-point source pollution control of Dianchi Lake catchments, China

    Institute of Scientific and Technical Information of China (English)

    Wenjing LU; Hongtao WANG

    2008-01-01

    In recent years, with control of the main municipal and industrial point pollution sources and implementation of cleaning for some inner pollution sources in the water body, the discharge of point source pollution decreased gradually, while non-point source pollution has become increasingly distressing in Dianchi Lake catchments. As one of the major targets in non-point source pollution control, an integrated solid waste controlling strategy combined with a technological solution and management system was proposed and implemented based on the waste disposal situation and characteristics of rural solid waste in the demonstration area. As the key technology in rural solid waste treatment, both centralized plant-scale composting and a dispersed farmer-operated waste treating system showed promise in rendering timely benefits in efficiency, large handling capacity, high quality of the end product, as well as good economic return. Problems encountered during multi-substrate co-composting such as pathogens, high moisture content, asynchronism in the decomposition of different substrates, and low quality of the end product can all be tackled. 92.5% of solid waste was collected in the demonstration area, while the treating and recycling ratio reached 87.9%, which prevented 32.2 t nitrogen and 3.9 t phosphorus per year from entering the water body of Dianchi Lake after implementation of the project.

  17. Modelling the transport of solid contaminants originated from a point source

    Science.gov (United States)

    Salgueiro, Dora V.; Conde, Daniel A. S.; Franca, Mário J.; Schleiss, Anton J.; Ferreira, Rui M. L.

    2017-04-01

    The solid phases of natural flows can comprise an important repository for contaminants in aquatic ecosystems and can propagate as turbidity currents generating a stratified environment. Contaminants can be desorbed under specific environmental conditions and become re-suspended, with a potential impact on the aquatic biota. Forecasting the distribution of the contaminated turbidity current is thus crucial for a complete assessment of environmental exposure. In this work we validate the ability of the model STAV-2D, developed at CERIS (IST), to simulate stratified flows such as those resulting from turbidity currents in complex geometrical environments. The validation involves not only flow phenomena inherent to flows generated by density imbalance but also convective effects brought about by the complex geometry of the water basin where the current propagates. This latter aspect is of paramount importance since, in real applications, currents may propagate in semi-confined geometries in plan view, generating important convective accelerations. Velocity fields and mass distributions obtained from experiments carried out at CERIS (IST) are used as validation data for the model. The experimental set-up comprises a point source in a rectangular basin with a wall placed perpendicularly to the outer walls. This generates a complex 2D flow with an advancing wave front and shocks due to the flow reflection from the walls. STAV-2D is based on the depth- and time-averaged mass and momentum equations for mixtures of water and sediment, understood as continua. It is closed in terms of flow resistance and capacity bedload discharge by a set of classic closure models and a specific high-concentration formulation. The two-layer model is derived from layer-averaged Navier-Stokes equations, resulting in a system of layer-specific non-linear shallow-water equations, solved through explicit first- or second-order schemes. According to the experimental data for mass distribution, the
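    For orientation, the layer-averaged balance laws behind this class of models can be written in the generic single-layer form below, with a reduced gravity g' accounting for the density excess of the current over the ambient water. This is a textbook form given for context only and omits the two-layer coupling, sediment-exchange and closure terms actually solved in STAV-2D.

        \[
        \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = 0,
        \]
        \[
        \frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\left(hu^{2} + \tfrac{1}{2}g'h^{2}\right) + \frac{\partial (huv)}{\partial y}
          = -\,g'h\,\frac{\partial z_b}{\partial x} - \frac{\tau_{b,x}}{\rho},
        \]
        \[
        \frac{\partial (hv)}{\partial t} + \frac{\partial (huv)}{\partial x} + \frac{\partial}{\partial y}\left(hv^{2} + \tfrac{1}{2}g'h^{2}\right)
          = -\,g'h\,\frac{\partial z_b}{\partial y} - \frac{\tau_{b,y}}{\rho},
        \qquad g' = g\,\frac{\rho_{\mathrm{current}}-\rho_{\mathrm{ambient}}}{\rho_{\mathrm{current}}},
        \]

    where h is the current thickness, (u, v) the layer-averaged velocity, z_b the bed elevation and τ_b the bed shear stress.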

  18. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    Science.gov (United States)

    Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
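    The quantity at stake here, mass discharge across a control plane, is in essence a sum over plane cells of Darcy flux times concentration times cell area. The back-of-the-envelope sketch below shows that bookkeeping only; the grid, hydraulic values and plume concentrations are invented and the geostatistical conditioning of the paper is not reproduced.

        import numpy as np

        ny, nz = 20, 10                  # cells across the control plane
        cell_area = 1.0 * 0.5            # m^2 per cell
        K = np.full((nz, ny), 1e-4)      # hydraulic conductivity, m/s
        grad_h = 0.005                   # hydraulic gradient across the plane
        q = K * grad_h                   # Darcy flux, m/s
        c = np.zeros((nz, ny))
        c[3:6, 8:12] = 2.0e-3            # contaminant plume core, kg/m^3

        mass_discharge = np.sum(q * c * cell_area)       # kg/s
        print(mass_discharge * 86400 * 365, "kg/year")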

  19. Modeling non-point source pollutants in the vadose zone: Back to the basics

    Science.gov (United States)

    Corwin, Dennis L.; Letey, John, Jr.; Carrillo, Marcia L. K.

    More than ever before in the history of scientific investigation, modeling is viewed as a fundamental component of the scientific method because of the relatively recent development of the computer. No longer must the scientific investigator be confined to artificially isolated studies of individual processes that can lead to oversimplified and sometimes erroneous conceptions of larger phenomena. Computer models now enable scientists to attack problems related to open systems such as climatic change, and the assessment of environmental impacts, where the whole of the interactive processes are greater than the sum of their isolated components. Environmental assessment involves the determination of change of some constituent over time. This change can be measured in real time or predicted with a model. The advantage of prediction, like preventative medicine, is that it can be used to alter the occurrence of potentially detrimental conditions before they are manifest. The much greater efficiency of preventative, rather than remedial, efforts strongly justifies the need for an ability to accurately model environmental contaminants such as non-point source (NPS) pollutants. However, the environmental modeling advances that have accompanied computer technological development are a mixed blessing. Where once we had a plethora of discordant data without a holistic theory, now the pendulum has swung so that we suffer from a growing stockpile of models of which a significant number have never been confirmed or even attempts made to confirm them. Modeling has become an end in itself rather than a means because of limited research funding, the high cost of field studies, limitations in time and patience, difficulty in cooperative research and pressure to publish papers as quickly as possible. Modeling and experimentation should be ongoing processes that reciprocally enhance one another with sound, comprehensive experiments serving as the building blocks of models and models

  20. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    Science.gov (United States)

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulation of total nitrogen (TN) loss intensity of all 38 subbasins, spatial distribution characteristics of nitrogen loss and critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contribution of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences of TN loss in the Shanmei Reservoir watershed at different time scales, and the spatial differentiation degree of nitrogen loss was in the order of monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, while the effect of precipitation and runoff on nitrogen loss was only evident in months without fertilization and in several storm flood processes occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  1. Taming the beast : Free and open-source massive point cloud web visualization

    NARCIS (Netherlands)

    Martinez-Rubi, O.; Verhoeven, S.; Van Meersbergen, M.; Schûtz, M.; Van Oosterom, P.; Gonçalves, R.; Tijssen, T.

    2015-01-01

    Powered by WebGL, some renderers have recently become available for the visualization of point cloud data over the web, for example Plasio or Potree. We have extended Potree to be able to visualize massive point clouds and we have successfully used it with the second national Lidar survey of the

  2. Use of dew-point detection for quantitative measurement of sweating rate

    Science.gov (United States)

    Brengelmann, G. L.; Mckeag, M.; Rowell, L. B.

    1975-01-01

    A method of measuring sweat rate (SR) based on detection of dew point (DP) is proposed which has advantages that may be attractive to other laboratories concerned with recording SR from selected areas of skin. It is similar to other methods in that dry gas is passed through a capsule which isolates several square centimeters of skin surface. The difference is in the means of determining how much gaseous water is carried off in the effluent moist gas. The DP detector used is free of the drawbacks of previous devices. DP is obtained through the fundamental technique of determining the temperature at which condensate forms on a mirror. Variations in DP are tracked rapidly and accurately (±0.8 °C nominal, sensitivity ±0.05 °C) over a wide range (−40 °C to +50 °C) without measurable hysteresis. The detector assembly is rugged and readily opened for cleaning and inspection.
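    The conversion from a measured dew point to a sweat rate can be sketched as follows: the Magnus approximation gives the vapour pressure at the dew point, which is turned into a vapour density and multiplied by the capsule gas flow. The constants, the use of effluent temperature for the density and all example numbers are assumptions for illustration only and are not taken from the paper.

        import math

        def vapour_density(dew_point_c, air_temp_c):
            """Water vapour density (kg/m^3) for a given dew point (Magnus formula)."""
            e_hpa = 6.112 * math.exp(17.62 * dew_point_c / (243.12 + dew_point_c))
            R_v = 461.5                     # J/(kg*K), gas constant for water vapour
            return (e_hpa * 100.0) / (R_v * (air_temp_c + 273.15))

        def sweat_rate(dp_in_c, dp_out_c, air_temp_c, flow_m3_per_min, capsule_cm2):
            """Sweat rate in mg/(cm^2*min) from inlet/outlet dew points."""
            dq = vapour_density(dp_out_c, air_temp_c) - vapour_density(dp_in_c, air_temp_c)
            return dq * flow_m3_per_min * 1e6 / capsule_cm2   # kg -> mg

        print(sweat_rate(dp_in_c=-40.0, dp_out_c=15.0, air_temp_c=34.0,
                         flow_m3_per_min=0.001, capsule_cm2=10.0))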

  3. Aging Detection of Electrical Point Machines Based on Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Jaewon Sa

    2017-11-01

    Full Text Available Electrical point machines (EPMs) must be replaced at an appropriate time to prevent the occurrence of operational safety or stability problems in trains resulting from aging or budget constraints. However, it is difficult to replace EPMs effectively because the aging conditions of EPMs depend on the operating environments, and thus a guideline is typically not suitable for replacing EPMs at the most timely moment. In this study, we propose a method of classification for the detection of an aging effect to facilitate the timely replacement of EPMs. We employ support vector data description to segregate data of “aged” and “not-yet-aged” equipment by analyzing the subtle differences in normalized electrical signals resulting from aging. Based on the before- and after-replacement data obtained from experimental studies conducted on EPMs, we confirmed that the proposed method was capable of classifying machines based on exhibited aging effects with adequate accuracy.
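    A minimal one-class boundary of this kind can be sketched with scikit-learn. scikit-learn has no SVDD estimator; with an RBF kernel the one-class SVM is a close stand-in for SVDD, which is the assumption made here. The two-dimensional features and the "aged" offset are synthetic, not the electrical signals of the study.

        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(3)
        baseline = rng.normal(loc=[1.0, 0.5], scale=0.05, size=(300, 2))   # not-yet-aged EPMs
        aged = rng.normal(loc=[1.15, 0.65], scale=0.05, size=(50, 2))      # aged EPMs

        model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(baseline)
        print("flagged as aged:", np.mean(model.predict(aged) == -1))
        print("false alarms   :", np.mean(model.predict(baseline) == -1))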

  4. Power-limited low-thrust trajectory optimization with operation point detection

    Science.gov (United States)

    Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng

    2018-06-01

    The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data of the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described using a finite number of operation points, which are characterized by different pairs of thruster input power. In order to guarantee the integral accuracy for the discrete power-limited problem, a power operation detection technique is embedded in the fourth-order Runge-Kutta algorithm with fixed step. Moreover, the logarithmic homotopy method and normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.

  5. 3D Sensor-Based Obstacle Detection Comparing Octrees and Point clouds Using CUDA

    Directory of Open Access Journals (Sweden)

    K.B. Kaldestad

    2012-10-01

    Full Text Available This paper presents adaptable methods for achieving fast collision detection using the GPU and Nvidia CUDA together with octrees. Earlier related work has focused on serial methods, while this paper presents a parallel solution which shows a clear advantage when the number of operations is large. Two different models of the environment and the industrial robot are presented: the first uses octrees at different resolutions, the second a point cloud representation. The relative merits of the two different world model representations are shown. In particular, the experimental results show the potential of adapting the resolution of the robot and environment models to the task at hand.

  6. Sensorless speed detection of squirrel-cage induction machines using stator neutral point voltage harmonics

    Science.gov (United States)

    Petrovic, Goran; Kilic, Tomislav; Terzic, Bozo

    2009-04-01

    In this paper a sensorless speed detection method for squirrel-cage induction machines is presented. The method is based on determining the frequency of the primary slot harmonic of the stator neutral point voltage, which is dependent on rotor speed. In order to validate the method in steady-state and dynamic conditions, simulation and experimental studies were carried out. For the theoretical investigation, a mathematical model of squirrel-cage induction machines that takes into consideration the actual geometry and winding layout is used. Speed-related harmonics that arise from rotor slotting are analyzed using digital signal processing and a DFT algorithm with a Hanning window. The performance of the method is demonstrated over a wide range of load conditions.
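    The core of such a scheme, locating a speed-dependent slot harmonic with a Hanning-windowed DFT, can be sketched as below. The signal is synthetic, and the relation used in the last line (f_slot = f_supply + N_rotor_slots * f_rot, the upper principal slot harmonic) is one common simplified form assumed here, not necessarily the exact expression derived in the paper.

        import numpy as np

        fs = 10_000.0                       # sampling rate, Hz
        t = np.arange(0, 2.0, 1.0 / fs)
        f_supply, n_rotor_slots = 50.0, 28
        f_rot = 24.5                        # true mechanical rotation frequency, Hz
        f_slot = f_supply + n_rotor_slots * f_rot
        signal = np.sin(2 * np.pi * f_supply * t) + 0.02 * np.sin(2 * np.pi * f_slot * t)

        window = np.hanning(len(signal))
        spectrum = np.abs(np.fft.rfft(signal * window))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

        band = (freqs > 400) & (freqs < 1200)          # search band for the harmonic
        f_detected = freqs[band][np.argmax(spectrum[band])]
        print("estimated speed [rpm]:", 60.0 * (f_detected - f_supply) / n_rotor_slots)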

  7. Calibration of angle response of a NaI(Tl) airborne spectrometer to 137Cs and 60Co point sources on the ground

    International Nuclear Information System (INIS)

    Liu Xinhua; Zhang Yongxing; Gu Renkang; Shen Ensheng

    1998-01-01

    The angle response function F(φ,θ) is a basic calibration of airborne spectrometers used in airborne surveying for nuclear emergency monitoring. The author describes the method and results of the angle response function calibration of a NaI(Tl) airborne spectrometer for 137Cs and 60Co point sources on the ground, with less than 20% uncertainty. Using these results, the calibration factors of the NaI(Tl) airborne spectrometer mounted in a Yun-5 plane at different flying heights are calculated by a numerical integration method for a 137Cs uniform area source on the ground surface, with less than 25% uncertainty. The minimum detection limits (LD) at 90 m and 120 m flying heights over the Shijiazhuang airborne survey area, calculated for a 137Cs uniform area source on the ground surface, are 3.83 and 5.62 kBq/m², respectively

  8. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • The intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Then, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  9. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • The intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Then, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
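    The "classic Gaussian model" half of the hybrid described in these two records is the standard Gaussian plume for a continuous elevated point source, sketched below. The simple power-law parameterisation of the dispersion coefficients and all example numbers are assumptions for illustration; the paper's coefficients and machine-learning correction are not reproduced.

        import numpy as np

        def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
            """Concentration (kg/m^3) at (x, y, z); x downwind, Q in kg/s, u in m/s."""
            sigma_y = a * x ** 0.9
            sigma_z = b * x ** 0.85
            lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
            vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                        + np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))   # ground reflection
            return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Ground-level centreline concentration 500 m downwind of a 20 m stack
        print(gaussian_plume(x=500.0, y=0.0, z=0.0, Q=0.1, u=3.0, H=20.0))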

  10. AN IN-DEPTH VIEW OF THE MID-INFRARED PROPERTIES OF POINT SOURCES AND THE DIFFUSE ISM IN THE SMC GIANT H II REGION, N66

    International Nuclear Information System (INIS)

    Whelan, David G.; Johnson, Kelsey E.; Indebetouw, Rémy; Lebouteiller, Vianney; Galliano, Frédéric; Peeters, Els; Bernard-Salas, Jeronimo; Brandl, Bernhard R.

    2013-01-01

    The focus of this work is to study mid-infrared point sources and the diffuse interstellar medium (ISM) in the low-metallicity (∼0.2 Z☉) giant H II region N66 in order to determine properties that may shed light on star formation in these conditions. Using the Spitzer Space Telescope's Infrared Spectrograph, we study polycyclic aromatic hydrocarbon (PAH), dust continuum, silicate, and ionic line emission from 14 targeted infrared point sources as well as spectra of the diffuse ISM that is representative of both the photodissociation regions (PDRs) and the H II regions. Among the point source spectra, we spectroscopically confirm that the brightest mid-infrared point source is a massive embedded young stellar object, we detect silicates in emission associated with two young stellar clusters, and we see spectral features of a known B[e] star that are commonly associated with Herbig Be stars. In the diffuse ISM, we provide additional evidence that the very small grain population is being photodestroyed in the hard radiation field. The 11.3 μm PAH complex emission exhibits an unexplained centroid shift in both the point source and ISM spectra that should be investigated at higher signal-to-noise and resolution. Unlike studies of other regions, the 6.2 μm and 7.7 μm band fluxes are decoupled; the data points cover a large range of I(7.7)/I(11.3) PAH ratio values within a narrow band of I(6.2)/I(11.3) ratio values. Furthermore, there is a spread in PAH ionization, being more neutral in the dense PDR where the radiation field is relatively soft, but ionized in the diffuse ISM/PDR. By contrast, the PAH size distribution appears to be independent of local ionization state. Important to unresolved studies of extragalactic low-metallicity star-forming regions, we find that emission from the infrared-bright point sources accounts for only 20%-35% of the PAH emission from the entire region. These results make a comparative data set to other star-forming regions with

  11. Validation of novel calibration scheme with traceable point-like (22)Na sources on six types of PET scanners.

    Science.gov (United States)

    Hasegawa, Tomoyuki; Oda, Keiichi; Wada, Yasuhiro; Sasaki, Toshiaki; Sato, Yasushi; Yamada, Takahiro; Matsumoto, Mikio; Murayama, Hideo; Kikuchi, Kei; Miyatake, Hiroki; Abe, Yutaka; Miwa, Kenta; Akimoto, Kenta; Wagatsuma, Kei

    2013-05-01

    To improve the reliability and convenience of the calibration procedure of positron emission tomography (PET) scanners, we have been developing a novel calibration path based on traceable point-like sources. When using (22)Na sources, special care should be taken to avoid the effects of 1.275-MeV γ rays accompanying β+ decays. The purpose of this study is to validate this new calibration scheme with traceable point-like (22)Na sources on various types of PET scanners. Traceable point-like (22)Na sources with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons were used. The tested PET scanners included a clinical whole-body PET scanner, four types of clinical PET/CT scanners from different manufacturers, and a small-animal PET scanner. The region of interest (ROI) diameter dependence of ROI values was represented with a fitting function, which was assumed to consist of a recovery part due to spatial resolution and a quadratic background part originating from the scattered γ rays. The observed ROI radius dependence was well represented with the assumed fitting function (R² > 0.994). The calibration factors determined using the point-like sources were consistent with those by the standard cross-calibration method within an uncertainty of ±4%, which was reasonable considering the uncertainty in the standard cross-calibration method. This novel calibration scheme based on the use of traceable (22)Na point-like sources was successfully validated for six types of commercial PET scanners.

  12. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
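    The MAP decoding idea in item (1) can be illustrated with a heavily simplified sketch: spike counts follow a Poisson encoding model with an exponential nonlinearity, the stimulus has a Gaussian prior, and the (concave) log posterior is maximised numerically. The memoryless per-bin encoder with a known gain and offset is a strong simplification of the filter-based models in the paper, and all numbers are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        T, dt, gain, offset, prior_var = 200, 0.01, 1.5, 2.0, 1.0
        x_true = rng.normal(0.0, 1.0, T)                       # hidden stimulus
        rate = np.exp(gain * x_true + offset)                  # spikes/s
        y = rng.poisson(rate * dt)                             # observed spike counts

        def neg_log_posterior(x):
            log_rate = gain * x + offset
            loglik = np.sum(y * log_rate - np.exp(log_rate) * dt)   # Poisson log-likelihood
            logprior = -0.5 * np.sum(x ** 2) / prior_var            # Gaussian prior
            return -(loglik + logprior)

        x_map = minimize(neg_log_posterior, np.zeros(T), method="L-BFGS-B").x
        print("correlation(true, MAP):", np.corrcoef(x_true, x_map)[0, 1])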

  13. Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.

    Science.gov (United States)

    Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E

    2010-08-01

    Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions, and are highly irregular. Because observed bursting may in some cases be fairly regular-exhibiting inter-burst intervals with small variation-we relaxed this assumption. When more general probability distributions are used to describe the state transitions the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.
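    For reference, the Poisson Surprise baseline that the hidden semi-Markov approach is compared against can be computed in a few lines: the surprise of a candidate burst is the negative log probability of seeing at least that many spikes in the window under a homogeneous Poisson process at the train's mean rate. The logarithm base is a convention (natural log is used here) and the spike numbers below are invented.

        import numpy as np
        from scipy.stats import poisson

        def poisson_surprise(n_spikes, window_s, mean_rate_hz):
            # P(N >= n) for N ~ Poisson(rate * T); sf(n - 1) gives the upper tail.
            p = poisson.sf(n_spikes - 1, mean_rate_hz * window_s)
            return -np.log(p)

        # A candidate burst: 12 spikes in 150 ms from a train firing at 8 Hz overall
        print(poisson_surprise(n_spikes=12, window_s=0.15, mean_rate_hz=8.0))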

  14. Effective Detection of Sub-Surface Archeological Features from Laser Scanning Point Clouds and Imagery Data

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Walczykowski, P.; Wierzbicki, D.; Delis, P.; Lada, A.

    2017-08-01

    The archaeological heritage is non-renewable, and any invasive research or other actions involving mechanical or chemical intervention in the ground lead to the destruction of the archaeological site in whole or in part. For this reason, modern archaeology is looking for alternative, non-destructive and non-invasive methods of identifying new objects. The concept of aerial archaeology rests on the relation between the presence of an archaeological site at a particular location and the phenomena that can be observed at the same place on the terrain surface from an airborne platform. One of the most appreciated and extremely precise methods for such measurements is airborne laser scanning. In this research, an airborne laser scanning point cloud with a density of 5 points/sq. m was used. Additionally, unmanned aerial vehicle imagery data were acquired. The test area is located in central Europe. The preliminary verification of potential microstructure locations was the creation of digital terrain and surface models. These models gave information about differences in elevation, as well as regular shapes and sizes that can be related to former settlements or sub-surface features. The paper presents the results of the detection of potential sub-surface microstructure fields in a forested area.

  15. EFFECTIVE DETECTION OF SUB-SURFACE ARCHEOLOGICAL FEATURES FROM LASER SCANNING POINT CLOUDS AND IMAGERY DATA

    Directory of Open Access Journals (Sweden)

    A. Fryskowska

    2017-08-01

    Full Text Available The archaeological heritage is non-renewable, and any invasive research or other actions involving mechanical or chemical intervention in the ground lead to the destruction of the archaeological site in whole or in part. For this reason, modern archaeology is looking for alternative, non-destructive and non-invasive methods of identifying new objects. The concept of aerial archaeology rests on the relation between the presence of an archaeological site at a particular location and the phenomena that can be observed at the same place on the terrain surface from an airborne platform. One of the most appreciated and extremely precise methods for such measurements is airborne laser scanning. In this research, an airborne laser scanning point cloud with a density of 5 points/sq. m was used. Additionally, unmanned aerial vehicle imagery data were acquired. The test area is located in central Europe. The preliminary verification of potential microstructure locations was the creation of digital terrain and surface models. These models gave information about differences in elevation, as well as regular shapes and sizes that can be related to former settlements or sub-surface features. The paper presents the results of the detection of potential sub-surface microstructure fields in a forested area.

  16. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    Science.gov (United States)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a research area of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism, or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings such as windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor-path planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.

  17. INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION

    Directory of Open Access Journals (Sweden)

    L. Díaz-Vilariño

    2016-06-01

    Full Text Available In recent years, indoor modelling and navigation has become a research area of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism, or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings such as windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor-path planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.
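    The route-readaptation idea in these two records can be illustrated very roughly: rasterise the point cloud into a 2D occupancy grid (anything protruding above the floor blocks a cell) and search a path between two walkable cells. The cell size, the height threshold, the breadth-first search and the synthetic "furniture" are illustrative assumptions, not the paper's workflow.

        import numpy as np
        from collections import deque

        def occupancy_grid(points, cell=0.25, obstacle_height=0.15, extent=10.0):
            n = int(extent / cell)
            grid = np.zeros((n, n), dtype=bool)
            idx = np.clip((points[:, :2] / cell).astype(int), 0, n - 1)
            blocked = points[:, 2] > obstacle_height
            grid[idx[blocked, 0], idx[blocked, 1]] = True
            return grid

        def bfs_path(grid, start, goal):
            prev, queue = {start: None}, deque([start])
            while queue:
                cur = queue.popleft()
                if cur == goal:
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = prev[cur]
                    return path[::-1]
                for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (cur[0] + d[0], cur[1] + d[1])
                    if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                            and not grid[nxt] and nxt not in prev):
                        prev[nxt] = cur
                        queue.append(nxt)
            return None

        rng = np.random.default_rng(5)
        floor = np.column_stack([rng.uniform(0, 10, (20000, 2)), np.zeros(20000)])
        table = np.column_stack([rng.uniform(4, 6, (2000, 2)), np.full(2000, 0.8)])
        grid = occupancy_grid(np.vstack([floor, table]))
        print(len(bfs_path(grid, (1, 1), (38, 38))), "cells on the route")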

  18. A modified likelihood-method to search for point-sources in the diffuse astrophysical neutrino-flux in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Reimann, Rene; Haack, Christian; Leuermann, Martin; Raedel, Leif; Schoenen, Sebastian; Schimp, Michael; Wiebusch, Christopher [III. Physikalisches Institut, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    IceCube, a cubic-kilometer sized neutrino detector at the geographical South Pole, has recently measured a flux of high-energy astrophysical neutrinos. Although this flux has now been observed in multiple analyses, no point sources or source classes could be identified yet. Standard point source searches test many points in the sky for a point source of astrophysical neutrinos individually and therefore produce many trials. Our approach is to additionally use the measured diffuse spectrum to constrain the number of possible point sources and their properties. Initial studies of the method performance are shown.

  19. Design and evaluation of aircraft heat source systems for use with high-freezing point fuels

    Science.gov (United States)

    Pasion, A. J.

    1979-01-01

    The objectives were the design, performance and economic analyses of practical aircraft fuel heating systems that would permit the use of high freezing-point fuels on long-range aircraft. Two hypothetical hydrocarbon fuels with freezing points of -29 C and -18 C were used to represent the variation from current day jet fuels. A Boeing 747-200 with JT9D-7/7A engines was used as the baseline aircraft. A 9300 Km mission was used as the mission length from which the heat requirements to maintain the fuel above its freezing point was based.

  20. Laser-induced fluorescence detection platform for point-of-care testing

    Science.gov (United States)

    Berner, Marcel; Hilbig, Urs; Schubert, Markus B.; Gauglitz, Günter

    2017-08-01

    Point-of-care testing (POCT) devices for continuous low-cost monitoring of critical patient parameters require miniaturized and integrated setups for performing quick high-sensitivity analyses, away from central clinical laboratories. This work presents a novel and promising laser-induced fluorescence platform for measurements in direct optical test formats that leads towards such powerful POCT devices based on fluorescence-labeled immunoassays. Ultimate sensitivity of thin film photodetectors, integrated with microfluidics, and a comprehensive optimization of all system components aim at low-level signal detection in the targeted biosensor application. The setup acquires fluorescence signals from the volume of a microfluidic channel. An innovative sandwiching process forms a flow channel in the microfluidic chips by embedding laser-cut double-sided adhesive tapes. The custom fit of amorphous silicon based photodiode arrays to the geometry of the flow channel enables miniaturization, fully adequate for POCT devices. A free-beam laser excitation with line focus provides excellent alignment stability, allows for easy and reliable swapping of the disposable microfluidic chips, and therewith greatly improves the ease of use of the resulting integrated device. As a proof-of-concept of this novel in-volume measurement approach, the limit of detection for the dye DY636-COOH in pure water as a model fluorophore is examined and found to be 26 nmol l-1 .

  1. Seeing the Point: Using Visual Sources to Understand the Arguments for Women's Suffrage

    Science.gov (United States)

    Card, Jane

    2011-01-01

    Visual sources, Jane Card argues, are a powerful resource for historical learning but using them in the classroom requires careful thought and planning. Card here shares how she has used visual source material in order to teach her students about the women's suffrage movement. In particular, Card shows how a chain of questions that moves from the…

  2. Risk-based prioritisation of point sources through assessment of the impact on a water supply

    DEFF Research Database (Denmark)

    Overheu, Niels D.; Tuxen, Nina; Troldborg, Mads

    2011-01-01

    vulnerability mapping, site-specific mass flux estimates on a local scale from all the sources, and 3-D catchment-scale fate and transport modelling. It handles sources at various knowledge levels and accounts for uncertainties. The tool estimates the impacts on the water supply in the catchment and provides...

  3. Morphology, chemistry and distribution of neoformed spherulites in agricultural land affected by metallurgical point-source pollution

    NARCIS (Netherlands)

    Leguedois, S.; Oort, van F.; Jongmans, A.G.; Chevalier, P.

    2004-01-01

    Metal distribution patterns in superficial soil horizons of agricultural land affected by metallurgical point-source pollution were studied using optical and electron microscopy, synchrotron radiation and spectroscopy analyses. The site is located in northern France, at the center of a former entry

  4. Effects of pointing compared with naming and observing during encoding on item and source memory in young and older adults

    NARCIS (Netherlands)

    Ouwehand, Kim; Gog, Tamara van; Paas, Fred

    2016-01-01

    Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether

  5. Diagnostic accuracy of point shear wave elastography in the detection of portal hypertension in pediatric patients.

    Science.gov (United States)

    Burak Özkan, M; Bilgici, M C; Eren, E; Caltepe, G

    2018-03-01

    The purpose of this study was to determine the usefulness of point shear wave elastography (p-SWE) of the liver and spleen for the detection of portal hypertension in pediatric patients. The study consisted of 38 healthy children and 56 pediatric patients with biopsy-proven liver disease who underwent splenic and liver p-SWE. The diagnostic performance of p-SWE in detecting clinically significant portal hypertension was assessed using receiver operating characteristic (ROC) curves. Reliable measurements of splenic and liver stiffness with p-SWE were obtained in 76/94 (81%) and 80/94 patients (85%), respectively. The splenic stiffness was highest in the portal hypertension group (P < 0.05). The area under the ROC curve for the detection of portal hypertension was 0.906 for splenic p-SWE and 0.746 for liver p-SWE (P = 0.0239). The cut-off value of splenic p-SWE for portal hypertension was 3.14 m/s, with a specificity of 98.59% and a sensitivity of 68.18%. The cut-off value of liver p-SWE for portal hypertension was 2.09 m/s, with a specificity of 80.28% and a sensitivity of 77.27%. In pediatric patients, p-SWE is a reliable method for detecting portal hypertension. However, splenic p-SWE is less accurate than liver p-SWE for the diagnosis of portal hypertension. Copyright © 2017 Editions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
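    The kind of analysis behind the reported cut-off, sensitivity and specificity values (an ROC curve plus a Youden-index cut-off) can be sketched with scikit-learn. The stiffness values and group sizes below are simulated, not patient data from the study.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(6)
        stiffness = np.concatenate([rng.normal(2.4, 0.4, 38),    # no portal hypertension
                                    rng.normal(3.4, 0.5, 22)])   # portal hypertension
        label = np.concatenate([np.zeros(38), np.ones(22)])

        fpr, tpr, thresholds = roc_curve(label, stiffness)
        best = np.argmax(tpr - fpr)                      # Youden J statistic
        print("AUC      :", roc_auc_score(label, stiffness))
        print("cut-off  :", thresholds[best], "m/s")
        print("sens/spec:", tpr[best], 1 - fpr[best])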

  6. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem

    2017-01-01

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating

  7. The Potential for Electrofuels Production in Sweden Utilizing Fossil and Biogenic CO{sub 2} Point Sources

    Energy Technology Data Exchange (ETDEWEB)

    Hansson, Julia, E-mail: julia.hansson@ivl.se [Climate and Sustainable Cities, IVL Swedish Environmental Research Institute, Stockholm (Sweden); Division of Physical Resource Theory, Department of Energy and Environment, Chalmers University of Technology, Göteborg (Sweden); Hackl, Roman [Climate and Sustainable Cities, IVL Swedish Environmental Research Institute, Stockholm (Sweden); Taljegard, Maria [Division of Energy Technology, Department of Energy and Environment, Chalmers University of Technology, Göteborg (Sweden); Brynolf, Selma; Grahn, Maria [Division of Physical Resource Theory, Department of Energy and Environment, Chalmers University of Technology, Göteborg (Sweden)

    2017-03-13

    This paper maps, categorizes, and quantifies all major point sources of carbon dioxide (CO{sub 2}) emissions from industrial and combustion processes in Sweden. The paper also estimates the Swedish technical potential for electrofuels (power-to-gas/fuels) based on carbon capture and utilization. With our bottom-up approach using European databases, we find that Sweden emits approximately 50 million metric tons of CO{sub 2} per year from different types of point sources, with 65% (or about 32 million tons) from biogenic sources. The major sources are the pulp and paper industry (46%), heat and power production (23%), and waste treatment and incineration (8%). Most of the CO{sub 2} is emitted at low concentrations (<15%) from sources in the southern part of Sweden where power demand generally exceeds in-region supply. The potentially recoverable emissions from all the included point sources amount to 45 million tons. If all the recoverable CO{sub 2} were used to produce electrofuels, the yield would correspond to 2–3 times the current Swedish demand for transportation fuels. The electricity required would correspond to about 3 times the current Swedish electricity supply. The current relatively few emission sources with high concentrations of CO{sub 2} (>90%, biofuel operations) would yield electrofuels corresponding to approximately 2% of the current demand for transportation fuels (corresponding to 1.5–2 TWh/year). In a 2030 scenario with large-scale biofuels operations based on lignocellulosic feedstocks, the potential for electrofuels production from high-concentration sources increases to 8–11 TWh/year. Finally, renewable electricity and production costs, rather than CO{sub 2} supply, limit the potential for production of electrofuels in Sweden.

  8. The 100 strongest radio point sources in the field of the Large Magellanic Cloud at 1.4 GHz

    Directory of Open Access Journals (Sweden)

    Payne J.L.

    2009-01-01

    Full Text Available We present the 100 strongest 1.4 GHz point sources from a new mosaic image in the direction of the Large Magellanic Cloud (LMC). The observations making up the mosaic were made using the Australia Telescope Compact Array (ATCA) over a ten year period and were combined with Parkes single dish data at 1.4 GHz to complete the image for short spacing. An initial list of co-identifications within 10 arcsec at 0.843, 4.8 and 8.6 GHz consisted of 2682 sources. Elimination of extended objects and artifact noise allowed the creation of a refined list containing 1988 point sources. Most of these are presumed to be background objects seen through the LMC; a small portion may represent compact H II regions, young SNRs and radio planetary nebulae. For the 1988 point sources we find a preliminary average spectral index (α) of -0.53 and present a 1.4 GHz image showing source location in the direction of the LMC.

  9. The 100 Strongest Radio Point Sources in the Field of the Large Magellanic Cloud at 1.4 GHz

    Directory of Open Access Journals (Sweden)

    Payne, J. L.

    2009-06-01

    Full Text Available We present the 100 strongest 1.4 GHz point sources from a new mosaic image in the direction of the Large Magellanic Cloud (LMC). The observations making up the mosaic were made using the Australia Telescope Compact Array (ATCA) over a ten year period and were combined with Parkes single dish data at 1.4 GHz to complete the image for short spacing. An initial list of co-identifications within 10 arcsec at 0.843, 4.8 and 8.6 GHz consisted of 2682 sources. Elimination of extended objects and artifact noise allowed the creation of a refined list containing 1988 point sources. Most of these are presumed to be background objects seen through the LMC; a small portion may represent compact H II regions, young SNRs and radio planetary nebulae. For the 1988 point sources we find a preliminary average spectral index (α) of -0.53 and present a 1.4 GHz image showing source location in the direction of the LMC.
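    A two-point spectral index of the kind averaged in these two records follows directly from flux densities at two frequencies, under the convention S ∝ ν^α that gives negative values for typical synchrotron background sources. The example flux densities below are invented and chosen only so that the result lands near the quoted -0.53.

        import math

        def spectral_index(s1_jy, nu1_ghz, s2_jy, nu2_ghz):
            """alpha such that S is proportional to nu**alpha."""
            return math.log(s1_jy / s2_jy) / math.log(nu1_ghz / nu2_ghz)

        # A source measured at 0.843 GHz and 4.8 GHz
        print(spectral_index(s1_jy=0.120, nu1_ghz=0.843, s2_jy=0.048, nu2_ghz=4.8))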

  10. The Development and Application of Spatiotemporal Metrics for the Characterization of Point Source FFCO2 Emissions and Dispersion

    Science.gov (United States)

    Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.

    2017-12-01

    There is an increasing role for high resolution, CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote-sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications on their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics, beyond just their geographic location. Physical parameters of the emission sources such as number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences can be important in characterizing emissions. Emissions from large point sources can behave much differently than emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and help to highlight potential aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.

  11. Discovery of a point-like very-high-energy gamma-ray source in Monoceros

    International Nuclear Information System (INIS)

    Aharonian, F.A.; Benbow, W.; Berge, D.; Bernlohr, K.; Bolz, O.; Braun, I.; Buhler, R.; Carrigan, S.; Costamante, L.; Domainko, W.; Egberts, K.; Forster, A.; Funk, S.; Hauser, D.; Hermann, G.; Hinton, J.A.; Hofmann, W.; Hoppe, S.; Khelifi, B.; Kosack, K.; Masterson, C.; Panter, M.; Rowell, G.; van Eldik, C.; Volk, H.J.; Akhperjanian, A.G.; Sahakian, V.; Bazer-Bachi, A.R.; Borrel, V.; Marcowith, A.; Olive, J.P.; Beilicke, M.; Cornils, R.; Heinzelmann, G.; Raue, M.; Ripken, J.; Bernlohr, K.; Funk, Seb.; Fussling, M.; Kerschhaggl, M.; Lohse, T.; Schlenker, S.; Schwanke, U.; Boisson, C.; Martin, J.M.; Sol, H.; Brion, E.; Glicenstein, J.F.; Goret, P.; Moulin, E.; Rolland, L.

    2007-01-01

    Aims. The complex Monoceros Loop SNR/Rosette Nebula region contains several potential sources of very-high-energy (VHE) γ-ray emission and two as yet unidentified high-energy EGRET sources. Sensitive VHE observations are required to probe acceleration processes in this region. Methods. The HESS telescope array has been used to search for very high-energy gamma-ray sources in this region. CO data from the NANTEN telescope were used to map the molecular clouds in the region, which could act as target material for γ-ray production via hadronic interactions. Results. We announce the discovery of a new γ-ray source, HESS J0632+057, located close to the rim of the Monoceros SNR. This source is unresolved by HESS and has no clear counterpart at other wavelengths but is possibly associated with the weak X-ray source 1RXS J063258.3+054857, the Be-star MWC148 and/or the lower energy γ-ray source 3EGJ0634+0521. No evidence for an associated molecular cloud was found in the CO data. (authors)

  12. Impact of Point and Non-point Source Pollution on Coral Reef Ecosystems In Mamala Bay, Oahu, Hawaii based on Water Quality Measurements and Benthic Surveys in 1993-1994 (NODC Accession 0001172)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The effects of both point and non-point sources of pollution on coral reef ecosystems in Mamala Bay were studied at three levels of biological organization; the...

  13. Comparison of point-source pollutant loadings to soil and groundwater for 72 chemical substances.

    Science.gov (United States)

    Yu, Soonyoung; Hwang, Sang-Il; Yun, Seong-Taek; Chae, Gitak; Lee, Dongsu; Kim, Ki-Eun

    2017-11-01

    Fate and transport of 72 chemicals in soil and groundwater were assessed using a multiphase compositional model (CompFlow Bio), because some of the chemicals are non-aqueous phase liquids or solids in their original form. One metric ton of chemicals was assumed to leak in a stylized facility. Scenarios of both surface spills and subsurface leaks were considered. Simulation results showed that the fate and transport of chemicals above the water table affected the fate and transport of chemicals below the water table, and vice versa. Surface spill scenarios produced much lower concentrations than subsurface leak scenarios because the amounts leaching into the subsurface environment were small (at most 6% of the 1 t spill, for methylamine). The simulation results were then applied to assess point-source pollutant loadings to soil and groundwater, above and below the water table respectively, by multiplying concentrations, impact areas, and durations. These three components correspond to the intensity of contamination, mobility, and persistency in the assessment of pollutant loading, respectively. Assessment results showed that the pollutant loadings in soil and groundwater were linearly related (r² = 0.64). The pollutant loadings were negatively related with zero-order and first-order decay rates in both soil (r = -0.5 and -0.6, respectively) and groundwater (-1.0 and -0.8, respectively). In addition, this study showed that the soil partitioning coefficient (Kd) significantly affected the pollutant loadings in soil (r = 0.6) and the maximum masses in groundwater (r = -0.9). However, Kd was not a representative factor for chemical transportability, unlike the expectation in chemical ranking systems of soil and groundwater pollutants. The pollutant loadings estimated using a physics-based hydrogeological model provided a more rational ranking for exposure assessment, compared to the summation of persistency and transportability scores in
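
    The loading metric described in this record is a simple product of three simulated quantities. As a rough, self-contained illustration of that bookkeeping (all chemical names and numbers below are hypothetical, not taken from the paper), a sketch in Python:

```python
# Sketch of the loading metric described above: loading = concentration x impact
# area x duration (intensity x mobility x persistency), then used to rank chemicals.
# All chemical names and numbers are hypothetical.
chemicals = {
    # name: (concentration [mg/L], impact area [m^2], duration [days])
    "chemical_A": (2.5, 1.2e4, 180.0),
    "chemical_B": (0.8, 5.0e4, 365.0),
    "chemical_C": (10.0, 3.0e3, 30.0),
}

def loading(concentration, area, duration):
    """Pollutant loading as the product of the three components."""
    return concentration * area * duration

for name in sorted(chemicals, key=lambda n: loading(*chemicals[n]), reverse=True):
    print(name, f"{loading(*chemicals[name]):.3e}")
```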

  14. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical, resulting in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean

  15. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Purpose: Pixel size is an important parameter of gamma cameras and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical, resulting in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1 - (0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic setting (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01±0.00) as Rp increased
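
    To make the ROI recipe in records 14-15 concrete, the following is a minimal numpy sketch of a background-resistant count-centroid: centre a circular ROI of diameter D = 6×FWHM on the hottest pixel and take the centroid of the counts inside it. This is an illustrative reconstruction from the abstract, not the authors' code.

```python
import numpy as np

def roi_count_centroid(image, fwhm, diameter_factor=6.0):
    """Sketch of the ROI-based count-centroid described above: centre a circular
    ROI (diameter D = 6*FWHM) on the hottest pixel and compute the count-centroid
    of the counts inside it, so that distant background pixels do not bias Xm.
    Illustrative reconstruction from the abstract, not the authors' code."""
    iy, ix = np.unravel_index(np.argmax(image), image.shape)
    radius = 0.5 * diameter_factor * fwhm
    y, x = np.indices(image.shape)
    inside = (x - ix) ** 2 + (y - iy) ** 2 <= radius ** 2
    counts = np.where(inside, image, 0.0).astype(float)
    total = counts.sum()
    return (counts * x).sum() / total, (counts * y).sum() / total

# Hypothetical usage: a synthetic 64x64 image with a Gaussian spot plus flat background
rng = np.random.default_rng(0)
yy, xx = np.indices((64, 64))
img = 50.0 * np.exp(-((xx - 40.3) ** 2 + (yy - 22.7) ** 2) / (2 * 2.0 ** 2)) + rng.poisson(2.0, (64, 64))
print(roi_count_centroid(img, fwhm=2.355 * 2.0))
```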

  16. A method to analyze “source–sink” structure of non-point source pollution based on remote sensing technology

    International Nuclear Information System (INIS)

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-01-01

    With the purpose of providing a scientific basis for environmental planning on non-point source pollution prevention and control, and of improving pollution regulation efficiency, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index according to the “source–sink” theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out by utilizing high-resolution remote sensing images. The results showed that the “source” area of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km² in 2008, and the “sink” area was 172.06 km². The “source” of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the “sink” was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of the “source” weakens as the distance from the sea boundary increases, while the “sink” strengthens. -- Highlights: • We built an index to study the “source–sink” structure of NSP on a spatial scale. • The index was applied in the Jiulongjiang estuary and gave good results. • The study is beneficial for discerning areas with high loads of non-point source pollution. -- The “source–sink” structure of non-point source nitrogen and phosphorus pollution in the Jiulongjiang estuary in China was worked out with the Grid Landscape Contrast Index

  17. Observations of the Hubble Deep Field with the Infrared Space Observatory .2. Source detection and photometry

    DEFF Research Database (Denmark)

    Goldschmidt, P.; Oliver, S.J.; Serjeant, S.B.G.

    1997-01-01

    We present positions and fluxes of point sources found in the Infrared Space Observatory (ISO) images of the Hubble Deep Field (HDF) at 6.7 and 15 μm. We have constructed algorithmically selected 'complete' flux-limited samples of 19 sources in the 15-μm image, and seven sources in the 6.7-μm...

  18. Time dependence of the field energy densities surrounding sources: Application to scalar mesons near point sources and to electromagnetic fields near molecules

    International Nuclear Information System (INIS)

    Persico, F.; Power, E.A.

    1987-01-01

    The time dependence of the dressing-undressing process, i.e., the acquiring or losing by a source of a boson field intensity and hence of a field energy density in its neighborhood, is considered by examining some simple soluble models. First, the loss of the virtual field is followed in time when a point source is suddenly decoupled from a neutral scalar meson field. Second, an initially bare point source acquires a virtual meson cloud as the coupling is switched on. The third example is that of an initially bare molecule interacting with the vacuum of the electromagnetic field to acquire a virtual photon cloud. In all three cases the dressing-undressing is shown to take place within an expanding sphere of radius r = ct centered at the source. At each point in space the energy density tends, for large times, to that of the ground state of the total system. Differences in the time dependence of the dressing between the massive scalar field and the massless electromagnetic field are discussed. The results are also briefly discussed in the light of Feinberg's ideas on the nature of half-dressed states in quantum field theory

  19. Solute transport with periodic input point source in one-dimensional ...

    African Journals Online (AJOL)

    JOY

    groundwater flow velocity is considered proportional to multiple of temporal function and ζ th ... One-dimensional solute transport through porous media with or without .... solute free. ... the periodic concentration at source of the boundary i.e.,. 0.

  20. A ROBUST REGISTRATION ALGORITHM FOR POINT CLOUDS FROM UAV IMAGES FOR CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    A. Al-Rawabdeh

    2016-06-01

    Full Text Available Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images

  1. A 24 μm point source catalog of the galactic plane from Spitzer/MIPSGAL

    Energy Technology Data Exchange (ETDEWEB)

    Gutermuth, Robert A.; Heyer, Mark [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States)

    2015-02-01

    In this contribution, we describe the applied methods to construct a 24 μm based point source catalog derived from the image data of the MIPSGAL 24 μm Galactic Plane Survey and the corresponding data products. The high quality catalog product contains 933,818 sources, with a total of 1,353,228 in the full archive catalog. The source tables include positional and photometric information derived from the 24 μm images, source quality and confusion flags, and counterpart photometry from matched 2MASS, GLIMPSE, and WISE point sources. Completeness decay data cubes are constructed at 1′ angular resolution that describe the varying background levels over the MIPSGAL field and the ability to extract sources of a given magnitude from this background. The completeness decay cubes are included in the set of data products. We present the results of our efforts to verify the astrometric and photometric calibration of the catalog, and present several analyses of minor anomalies in these measurements to justify adopted mitigation strategies.

  2. Detecting the leakage source of a reservoir using isotopes.

    Science.gov (United States)

    Yi, Peng; Yang, Jing; Wang, Yongdong; Mugwanezal, Vincent de Paul; Chen, Li; Aldahan, Ala

    2018-07-01

    A good monitoring method is vital for understanding the sources of a water reservoir leakage and planning for effective restoration. Here we present a combination of several tracers (²²²Rn, oxygen and hydrogen isotopes, anions and temperature) for identification of water leakage sources in the Pushihe pumped storage power station, located in Liaoning province, China. The results show an average ²²²Rn activity of 6843 Bq/m³ in the leakage water, 3034 Bq/m³ in the reservoir water, and 41,759 Bq/m³ in the groundwater. Considering that ²²²Rn activity in surface water is typically less than 5000 Bq/m³, the low average ²²²Rn activity in the leakage water suggests the reservoir water as the main source of water. Results of the oxygen and hydrogen isotopes show comparable ranges and values in the reservoir and the leakage water samples. However, an important contribution of groundwater (up to 36%) was present in some samples from the bottom and upper parts of the underground powerhouse, while the leakage water from some other parts indicates the reservoir water as the dominant source. The isotopic findings suggest that the reservoir water is the main source of the leakage water, which is confirmed by the analysis of anions (nitrate, sulfate, and chloride) in the water samples. The combination of these tracer methods for studying dam water leakage improves the accuracy of identifying the source of leaks and provides a scientific reference for engineering solutions to ensure dam safety.
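
    The tracer reasoning above can be illustrated with a generic two-endmember mixing calculation. The sketch below is only schematic: it uses the average ²²²Rn activities quoted in the abstract, whereas the reported groundwater contributions of up to 36% come from the stable-isotope analysis of individual samples.

```python
def groundwater_fraction(c_mix, c_reservoir, c_groundwater):
    """Generic two-endmember mixing: fraction of groundwater in a mixed sample,
    given tracer activities of the sample and of the two endmembers. Schematic
    companion to the tracer study above, not the authors' multi-tracer analysis."""
    return (c_mix - c_reservoir) / (c_groundwater - c_reservoir)

# Average 222Rn activities quoted in the abstract (Bq/m^3)
f_gw = groundwater_fraction(6843.0, 3034.0, 41759.0)
print(f"rough groundwater fraction from average 222Rn: {f_gw:.1%}")  # about 10%
```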

  3. Analytic model of the stress waves propagation in thin wall tubes, seeking the location of a harmonic point source in its surface

    International Nuclear Information System (INIS)

    Boaratti, Mario Francisco Guerra

    2006-01-01

    Leaks in pressurized tubes generate acoustic waves that propagate through the walls of these tubes and can be captured by accelerometers or by acoustic emission sensors. Knowledge of how these walls vibrate, or in other words how these acoustic waves propagate in the material, is fundamental to the detection and localization of the leak source. In this work an analytic model was implemented, through the equations of motion of a cylindrical shell, with the objective of understanding the behavior of the tube surface excited by a point source. Since the cylindrical surface is closed in the circumferential direction, waves that are beginning their trajectory will meet others that have already completed the turn around the cylindrical shell, in the clockwise as well as the counter-clockwise direction, generating constructive and destructive interference. After enough propagation time, peaks and valleys form on the shell surface, which can be visualized through a graphic representation of the analytic solution created. The theoretical results were verified through measurements on an experimental setup composed of a steel tube terminated in a sand box, simulating the condition of an infinite tube. To determine the location of the point source on the surface, an inverse solution process was adopted, that is, given the signals from the sensors placed on the tube surface, the theoretical model is used to determine where the source that generated these signals can be. (author)

  4. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted from the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
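
    The core numerical operation behind the hopping-points SVD step is a total-least-squares line fit to a subset of LRS points. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def fit_line_svd(points):
    """Total-least-squares line fit via SVD, the core operation behind the
    hopping-points SVD step described above (a sketch, not the authors' code).
    Returns a point on the line (the centroid) and a unit direction vector."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]  # first right singular vector = dominant direction

point_on_line, direction = fit_line_svd([(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.95)])
print(point_on_line, direction)
```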

  5. An Analysis of Air Pollution in Makkah - a View Point of Source Identification

    Directory of Open Access Journals (Sweden)

    Turki M. Habeebullah

    2013-07-01

    Full Text Available Makkah is one of the busiest cities in Saudi Arabia and remains busy all year round, especially during the season of Hajj and the month of Ramadan when millions of people visit the city. This emphasizes the importance of clean air and of understanding the sources of various air pollutants, which is vital for the management and advanced modeling of air pollution. This study intends to identify the major sources of air pollutants in Makkah, near the Holy Mosque (Al-Haram), using a graphical approach. Air pollutants considered in this study are nitrogen oxides (NOx), nitrogen dioxide (NO2), nitric oxide (NO), carbon monoxide (CO), sulphur dioxide (SO2), ozone (O3) and particulate matter with an aerodynamic diameter of 10 μm or less (PM10). Polar plots, time variation plots and correlation analysis are used to analyse the data and identify the major sources of emissions. Most of the pollutants show high concentrations during the morning traffic peak hours, suggesting road traffic as the main source of emission. The main sources of pollutant emissions identified in Makkah were road traffic and re-suspended and windblown dust and sand particles. Further investigation with a detailed source apportionment is required, which is part of the ongoing project.

  6. A THEORETICAL ANALYSIS OF KEY POINTS WHEN CHOOSING OPEN SOURCE ERP SYSTEMS

    Directory of Open Access Journals (Sweden)

    Fernando Gustavo Dos Santos Gripe

    2011-08-01

    Full Text Available The present work presents a theoretical analysis of the main features of Open Source ERP systems, herein identified as technical success factors, in order to contribute to the establishment of parameters to be used in decision-making processes when choosing a system that fulfills the organization's needs. Initially, the life cycle of ERP systems is contextualized, highlighting the features of Open Source ERP systems. As a result, it was verified that, when carefully analyzed, these systems need further attention regarding issues of project continuity and maturity, structure, transparency, updating frequency, and support, all of which are inherent to the reality of this type of software. Nevertheless, advantages were observed with respect to flexibility, costs, and non-discontinuity. The main goal is to broaden the discussion about the adoption of Open Source ERP systems.

  7. SREM - WRS system module number 3348 for calculating the removal flux due to point, line or disc sources

    International Nuclear Information System (INIS)

    Grimstone, M.J.

    1978-06-01

    The WRS Modular Programming System has been developed as a means by which programmes may be more efficiently constructed, maintained and modified. In this system a module is a self-contained unit typically composed of one or more Fortran routines, and a programme is constructed from a number of such modules. This report describes one WRS module, the function of which is to calculate the uncollided flux and first-collision source from a disc source in a slab geometry system, a line source at the centre of a cylindrical system or a point source at the centre of a spherical system. The information given in this manual is of use both to the programmer wishing to incorporate the module in a programme, and to the user of such a programme. (author)

  8. An exergame system based on force platforms and body key-point detection for balance training.

    Science.gov (United States)

    Lavarda, Marcos D; de Borba, Pedro A; Oliveira, Matheus R; Borba, Gustavo B; de Souza, Mauren A; Gamba, Humberto R

    2016-08-01

    Postural instability affects a large number of people and can compromise even simple activities of the daily routine. Therapies for balance training can strongly benefit from auxiliary devices specially designed for this purpose. In this paper, we present a system for balance training that uses the metaphor of a game, which contributes to the motivation and engagement of patients during treatment. Such an approach is usually called an exergame, in which input devices for posturographic assessment and a visual output perform the interaction with the subject. The proposed system uses two force platforms, one positioned under the feet and the other under the hip of the subject. The force platforms employ regular load cells and a microcontroller-based signal acquisition module to capture and transmit the samples to a computer. Moreover, a computer vision module performs body key-point detection, based on real-time segmentation of markers attached to the subject. For the validation of the system, we conducted experiments with 20 neurologically intact volunteers in two tests: comparison of the stabilometric parameters obtained from the system with those obtained from a commercial baropodometer, and the practice of several exergames. Results show that the proposed system is fully functional and can be used as a versatile tool for balance training.

  9. Detection of cut-off point for rapid automized naming test in good readers and dyslexics

    Directory of Open Access Journals (Sweden)

    Zahra Soleymani

    2014-01-01

    Full Text Available Background and Aim: The rapid automized naming test is an appropriate tool to diagnose learning disability even before reading is taught. This study aimed to detect the cut-off point of this test for good readers and dyslexics. Methods: The test has 4 parts: objects, colors, numbers and letters. 5 items are repeated on cards randomly 10 times, and children are asked to name the items rapidly. We studied 18 dyslexic students and 18 age-matched good readers between 7 and 8 years of age in the second and third grades of elementary school; they were recruited by non-randomized sampling into 2 groups: children with developmental dyslexia from learning disability centers, with a mean age of 100 months, and normal children, with a mean age of 107 months, from general schools in Tehran. Good readers were selected from the same classes as the dyslexics. Results: The area under the receiver operating characteristic curve was 0.849 for letter naming, 0.892 for color naming, 0.971 for number naming, 0.887 for picture naming, and 0.965 overall. The overall sensitivity and specificity were 1 and 0.79, respectively. The highest sensitivity and specificity were for number naming (1 and 0.90, respectively). Conclusion: The findings showed that the rapid automized naming test can appropriately distinguish good readers from dyslexics.

  10. Infrared interference patterns for new capabilities in laser end point detection

    International Nuclear Information System (INIS)

    Heason, D J; Spencer, A G

    2003-01-01

    Standard laser interferometry is used in dry-etch fabrication of semiconductor and MEMS devices to measure etch depth and rate and to detect the process end point. However, many wafer materials, such as silicon, are absorbing at probing wavelengths in the visible, severely limiting the amount of information that can be obtained using this technique. At infrared (IR) wavelengths around 1500 nm and above, silicon is highly transparent. In this paper we describe an instrument that can be used to monitor etch depth throughout a thru-wafer etch. The provision of this information could eliminate the requirement for an 'etch stop' layer and improve the performance of fabricated devices. We have added a further new capability by using tuneable lasers to scan through wavelengths in the near IR to generate an interference pattern. Fitting a theoretical curve to this interference pattern gives an in situ measurement of film thickness. Whereas conventional interferometry would only allow etch depth to be monitored in real time, we can use a pre-etch thickness measurement to terminate the etch at a remaining thickness of film material. This paper discusses the capabilities of, and the opportunities offered by, this new technique and gives examples of applications in MEMS and waveguides
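
    The wavelength-scanning idea can be illustrated with the textbook near-normal-incidence relation between two adjacent interference maxima and the film thickness, d = λ1·λ2 / (2·n·(λ1 − λ2)) with λ1 > λ2. The sketch below uses this simplified relation with hypothetical numbers; the instrument described above fits a full theoretical curve to the scanned pattern.

```python
def film_thickness_from_adjacent_maxima(lambda1_nm, lambda2_nm, n):
    """Textbook near-normal-incidence relation between two adjacent interference
    maxima (lambda1 > lambda2) and film thickness: d = l1*l2 / (2*n*(l1 - l2)).
    Illustrative only; the instrument described above fits a full theoretical
    curve to the scanned interference pattern."""
    return lambda1_nm * lambda2_nm / (2.0 * n * (lambda1_nm - lambda2_nm))

# Hypothetical example: silicon (n ~ 3.48 near 1550 nm), maxima at 1560 nm and 1542 nm
print(film_thickness_from_adjacent_maxima(1560.0, 1542.0, 3.48), "nm")  # ~19200 nm
```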

  11. End-point detection in potentiometric titration by continuous wavelet transform.

    Science.gov (United States)

    Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W

    2009-10-15

    The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or the type of analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. But in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially in the case when random noise interferes with the analytical signal.
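
    The end-point of a sigmoid-shaped titration curve coincides with the extremum of its smoothed derivative, which is what a wavelet-transform response picks out robustly in the presence of noise. A minimal numpy sketch using a generic derivative-of-Gaussian kernel (not the dedicated mother wavelet constructed in the paper):

```python
import numpy as np

def end_point_index(potentials, scale=5.0):
    """Locate the end-point of a sigmoid-shaped titration curve as the extremum
    of a smoothed derivative, obtained by convolving the signal with a
    derivative-of-Gaussian kernel. Generic continuous-wavelet-style sketch, not
    the dedicated mother wavelet constructed in the paper above."""
    half = int(4 * scale)
    t = np.arange(-half, half + 1, dtype=float)
    kernel = -t * np.exp(-t ** 2 / (2.0 * scale ** 2))  # odd, zero-sum kernel
    padded = np.pad(np.asarray(potentials, dtype=float), half, mode="edge")
    response = np.convolve(padded, kernel, mode="valid")
    return int(np.argmax(np.abs(response)))

# Hypothetical noisy titration curve: potential E (mV) vs. titrant volume V (mL)
rng = np.random.default_rng(1)
v = np.linspace(0.0, 20.0, 401)
e = 300.0 / (1.0 + np.exp(-(v - 12.5) / 0.2)) + rng.normal(0.0, 2.0, v.size)
print("estimated end-point volume:", v[end_point_index(e)])
```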

  12. Automatic detection of measurement points for non-contact vibrometer-based diagnosis of cardiac arrhythmias

    Science.gov (United States)

    Metzler, Jürgen; Kroschel, Kristian; Willersinn, Dieter

    2017-03-01

    Monitoring of the heart rhythm is the cornerstone of the diagnosis of cardiac arrhythmias. It is done by means of electrocardiography, which relies on electrodes attached to the skin of the patient. We present a new system approach based on the so-called vibrocardiogram that allows automatic non-contact registration of the heart rhythm. Because of the contactless principle, the technique offers potential application advantages in medical fields like emergency medicine (burn patients) or premature baby care, where adhesive electrodes are not easily applicable. A laser-based, mobile, contactless vibrometer for on-site diagnostics that works on the principle of laser Doppler vibrometry allows the acquisition of vital functions in the form of a vibrocardiogram. Preliminary clinical studies at the Klinikum Karlsruhe have shown that the region around the carotid artery and the chest region are appropriate for this. However, the challenge is to find a suitable measurement point in these parts of the body, which differs from person to person due to, e.g., physiological properties of the skin. Therefore, we propose a new Microsoft Kinect-based approach. When a suitable measurement area on the appropriate parts of the body is detected by processing the Kinect data, the vibrometer is automatically aligned to an initial location within this area. Then, vibrocardiograms at different locations within this area are successively acquired until a sufficient measuring quality is achieved. This optimal location is found by exploiting the autocorrelation function.

  13. Sterile paper points as a bacterial DNA-contamination source in microbiome profiles of clinical samples

    NARCIS (Netherlands)

    van der Horst, J.; Buijs, M.J.; Laine, M.L.; Wismeijer, D.; Loos, B.G.; Crielaard, W.; Zaura, E.

    2013-01-01

    Objectives High throughput sequencing of bacterial DNA from clinical samples provides untargeted, open-ended information on the entire microbial community. The downside of this approach is the vulnerability to DNA contamination from other sources than the clinical sample. Here we describe

  14. Improving sourcing decisions in NPD projects: Monetary quantification of points of difference

    NARCIS (Netherlands)

    Wouters, Marc; Anderson, James C.; Narus, James A.; Wynstra, Finn

    2009-01-01

    During new product development (NPD), firms make critical design and sourcing decisions that determine the new product's cost, performance, competitive position, and profitability. The purchase price of materials and components for the new product provides only part of the picture for design and

  15. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    Science.gov (United States)

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite-sized panel. The focus is the forced sound transmission performance that predominates in the frequency range below the coincidence frequency. Given a point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that, in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite-sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal-incidence sound transmission loss expression for plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse-field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.

  16. A Spatial and Temporal Assessment of Non-Point Groundwater Pollution Sources, Tutuila Island, American Samoa

    Science.gov (United States)

    Shuler, C. K.; El-Kadi, A. I.; Dulaiova, H.; Glenn, C. R.; Fackrell, J.

    2015-12-01

    The quality of municipal groundwater supplies on Tutuila, the main island of American Samoa, is currently in question. A high vulnerability to contamination from surface activities has been recognized, and there is a strong need to clearly identify anthropogenic sources of pollution and quantify their influence on the aquifer. This study examines spatial relationships and time-series measurements of nutrients and other tracers to identify predominant pollution sources and determine the water quality impacts of the island's diverse land uses. Elevated groundwater nitrate concentrations are correlated with areas of human development; however, the mixture of residential and agricultural land use in this unique village-based agrarian setting makes specific source identification difficult using traditional geospatial analysis. Spatial variation in anthropogenic impact was assessed by linking NO3- concentrations and δ15N(NO3) from an extensive groundwater survey to land-use types within well capture zones and groundwater flow paths developed with MODFLOW, a numerical groundwater model. Land-use types were obtained from high-resolution GIS data and compared to water quality results with multiple-regression analysis to quantify the impact that different land uses have on water quality. In addition, historical water quality data and new analyses of δD and δ18O in precipitation, groundwater, and mountain-front recharge waters were used to constrain the sources and mechanisms of contamination. Our analyses indicate that groundwater nutrient levels on Tutuila are controlled primarily by residential, not agricultural activity. Also, a lack of temporal variation suggests that episodic pollution events are limited to individual water sources as opposed to the entire aquifer. These results are not only valuable for water quality management on Tutuila, but also provide insight into the sustainability of groundwater supplies on other islands with similar hydrogeology and land

  17. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input's amplitudes. Second, closed-form formulas for both the time location and the amplitude are provided in the particular case of a single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.

  18. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    International Nuclear Information System (INIS)

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm³ and 3.6 g/cm³ are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometry, comes in handy for the shield design optimization of peripheral shields. QAD-CGGP requires a database of buildup factor data, which contains only ordinary concrete of density 2.3 g/cm³. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV for thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A fit of the buildup factor data with Capo's formula, compatible with QAD-CGGP, has been attempted
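
    For context, a generic point-kernel estimate combines the uncollided flux of an isotropic point source with a buildup factor interpolated (in mean free paths) from tabulated data such as those generated here. The sketch below is purely illustrative, with hypothetical buildup values; it is not the QAD-CGGP implementation or Capo's fitting formula.

```python
import numpy as np

def point_kernel_flux(source_strength, distance_cm, mu_per_cm, mfp_grid, buildup_grid):
    """Generic point-kernel estimate for an isotropic point source behind a shield:
    uncollided flux S*exp(-mu*r)/(4*pi*r^2) multiplied by a buildup factor
    interpolated (in mean free paths) from tabulated data such as those generated
    in the work above. Illustrative sketch only; not the QAD-CGGP implementation
    or Capo's fitting formula."""
    mfp = mu_per_cm * distance_cm
    buildup = np.interp(mfp, mfp_grid, buildup_grid)
    return buildup * source_strength * np.exp(-mfp) / (4.0 * np.pi * distance_cm ** 2)

# Hypothetical buildup table for one energy, tabulated from 1 to 40 mfp
mfp_grid = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
buildup_grid = np.array([2.1, 3.4, 8.9, 22.0, 70.0, 300.0])  # hypothetical values
print(point_kernel_flux(1e10, 100.0, 0.15, mfp_grid, buildup_grid))
```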

  19. A SITELLE view of M31's central region - I. Calibrations and radial velocity catalogue of nearly 800 emission-line point-like sources

    Science.gov (United States)

    Martin, Thomas B.; Drissen, Laurent; Melchior, Anne-Laure

    2018-01-01

    We present a detailed description of the wavelength, astrometric and photometric calibration plan for SITELLE, the imaging Fourier transform spectrometer attached to the Canada-France-Hawaii telescope, based on observations of a red (647-685 nm) data cube of the central region (11 arcmin × 11 arcmin) of M 31. The first application, presented in this paper, is a radial-velocity catalogue (with uncertainties of ∼2-6 km s⁻¹) of nearly 800 emission-line point-like sources, including ∼450 new discoveries. Most of the sources are likely planetary nebulae, although we also detect five novae (having erupted in the first eight months of 2016) and one new supernova remnant candidate.

  20. On-site meteorological instrumentation requirements to characterize diffusion from point sources: workshop report. Final report Sep 79-Sep 80

    International Nuclear Information System (INIS)

    Strimaitis, D.; Hoffnagle, G.; Bass, A.

    1981-04-01

    Results of a workshop entitled 'On-Site Meteorological Instrumentation Requirements to Characterize Diffusion from Point Sources' are summarized and reported. The workshop was sponsored by the U.S. Environmental Protection Agency in Raleigh, North Carolina, on January 15-17, 1980. Its purpose was to provide EPA with a thorough examination of the meteorological instrumentation and data collection requirements needed to characterize airborne dispersion of air contaminants from point sources and to recommend, based on an expert consensus, specific measurement technique and accuracies. Secondary purposes of the workshop were to (1) make recommendations to the National Weather Service (NWS) about collecting and archiving meteorological data that would best support air quality dispersion modeling objectives and (2) make recommendations on standardization of meteorological data reporting and quality assurance programs

  1. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

    Shack-Hartmann-based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation
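
    A minimal sketch contrasting the two estimators discussed above: a plain centre-of-mass (centroid) estimate and an FFT-based cross-correlation with an ideal reference spot (integer-pixel peak only; a real wave-front sensor would add sub-pixel interpolation). This illustrates the general idea, not the paper's algorithm.

```python
import numpy as np

def spot_centroid(spot):
    """Centre-of-mass (centroid) estimate of the spot position; background light
    and halo aberrations bias it, as noted above."""
    y, x = np.indices(spot.shape)
    total = spot.sum()
    return (spot * x).sum() / total, (spot * y).sum() / total

def correlation_shift(spot, reference):
    """Spot shift from the peak of the FFT-based cross-correlation with an ideal
    reference spot. Integer-pixel peak only; a real wave-front sensor would add
    sub-pixel interpolation. Sketch of the general idea, not the paper's code."""
    corr = np.fft.ifft2(np.fft.fft2(spot) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]  # unwrap circular lags
    return shifts[1], shifts[0]  # (x, y)

# Hypothetical usage: a Gaussian reference spot and a noisy copy shifted by (+3, -2) pixels
yy, xx = np.indices((32, 32))
ref = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 2.0 ** 2))
spot = np.roll(np.roll(ref, 3, axis=1), -2, axis=0) + 0.01 * np.random.default_rng(2).random((32, 32))
print(spot_centroid(spot), correlation_shift(spot, ref))
```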

  2. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them consider only the lexical token sequence extracted from the source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it only considers semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e. Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient when compared with the standard lexical-token approach.
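
    The similarity backbone of alignment-based detectors is a local alignment over token sequences. The sketch below is the standard Smith-Waterman form with fixed scores; the Adaptive Local Alignment used in the paper tunes the scoring, and the CIL-like opcodes in the example are hypothetical.

```python
def local_alignment_score(tokens_a, tokens_b, match=2, mismatch=-1, gap=-1):
    """Standard Smith-Waterman local alignment score over two token sequences,
    the backbone of alignment-based code-similarity measures. The paper above
    uses an adaptive variant with tuned scoring; this sketch is the plain form."""
    rows, cols = len(tokens_a) + 1, len(tokens_b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if tokens_a[i - 1] == tokens_b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

# Hypothetical CIL-like opcode sequences from two code fragments
a = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
b = ["ldarg.1", "ldarg.0", "add", "stloc.1", "ldloc.1", "ret"]
print(local_alignment_score(a, b))
```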

  3. A rotating modulation imager for locating mid-range point sources

    International Nuclear Information System (INIS)

    Kowash, B.R.; Wehe, D.K.; Fessler, J.A.

    2009-01-01

    Rotating modulation collimators (RMCs) are relatively simple indirect imaging devices that have proven useful in gamma-ray astronomy (far field) and have more recently been studied for medical imaging (very near field). At the University of Michigan an RMC has been built to study its performance for homeland security applications. This research highlights the imaging performance of this system and focuses on three distinct regions in the RMC field of view that can impact the search for hidden sources. These regions are a blind zone around the axis of rotation, a two-mask image zone that extends from the blind zone to the edge of the field of view, and a single-mask image zone that occurs when sources fall outside the field of view of both masks. By considering the extent and impact of these zones, the size of the two-mask region can be optimized for the best system performance.

  4. The Non-point Source Pollution Effects of Pesticides Based on the Survey of 340 Farmers in Chongqing City

    OpenAIRE

    YU, Lianchao; GU, Limeng; BI, Qian

    2015-01-01

    Using survey data on 340 farmers in Chongqing City, this paper performs an empirical analysis of the factors influencing the non-point source pollution of pesticides. The results show that older householders apply more pesticides, which may be due to weaker physical strength and a weaker ability to accept the concept of advanced cultivation; householders with a high level of education choose to use fewer pesticides; the pesticide application rate is negatively correlated with...

  5. Analytical formulae to calculate the solid angle subtended at an arbitrarily positioned point source by an elliptical radiation detector

    International Nuclear Information System (INIS)

    Abbas, Mahmoud I.; Hammoud, Sami; Ibrahim, Tarek; Sakr, Mohamed

    2015-01-01

    In this article, we introduce a direct analytical mathematical method for calculating the solid angle, Ω, subtended at a point by closed elliptical contours. The solid angle is required in many areas of optical and nuclear physics to estimate the flux of a beam of radiation and to determine the activity of a radioactive source. The validity of the derived analytical expressions was successfully confirmed by comparison with some published data (numerical method)
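
    Analytical solid-angle expressions of this kind are conveniently cross-checked numerically. The sketch below is a generic Monte Carlo estimate of the solid angle subtended at an arbitrary point by an ellipse in the z = 0 plane (dΩ = |z|·dA/r³); it is not the authors' formulae.

```python
import numpy as np

def solid_angle_ellipse_mc(a, b, point, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the solid angle subtended at an arbitrary point by
    an ellipse (semi-axes a, b) lying in the z = 0 plane, using dOmega = |z|*dA/r^3.
    A generic numerical cross-check for analytical expressions like those derived
    above; not the authors' formulae."""
    rng = np.random.default_rng(seed)
    r = np.sqrt(rng.random(n_samples))                   # uniform points in the unit disc...
    phi = 2.0 * np.pi * rng.random(n_samples)
    xs, ys = a * r * np.cos(phi), b * r * np.sin(phi)    # ...scaled onto the ellipse
    px, py, pz = point
    dist3 = ((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2) ** 1.5
    return np.pi * a * b * np.mean(np.abs(pz) / dist3)   # area x mean integrand

# Sanity check: unit disc viewed on-axis from height 1; analytic value 2*pi*(1 - 1/sqrt(2)) ~ 1.8403
print(solid_angle_ellipse_mc(1.0, 1.0, (0.0, 0.0, 1.0)))
```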

  6. Study The Validity of The Direct Mathematical Method For Calculation The Total Efficiency Using Point And Disk Sources

    International Nuclear Information System (INIS)

    Hagag, O.M.; Nafee, S.S.; Naeem, M.A.; El Khatib, A.M.

    2011-01-01

    The direct mathematical method has been developed for calculating the total efficiency of many cylindrical gamma detectors, especially HPGe and NaI detectors. Different source geometries are considered (point and disk). Gamma attenuation by the detector window or any interfacing absorbing layer is also taken into account. Results are compared with published experimental data to study the validity of the direct mathematical method for calculating the total efficiency for any gamma detector size.

  7. Nuclear Material Detection by One-Short-Pulse-Laser-Driven Neutron Source

    International Nuclear Information System (INIS)

    Favalli, Andrea; Aymond, F.; Bridgewater, Jon S.; Croft, Stephen; Deppert, O.; Devlin, Matthew James; Falk, Katerina; Fernandez, Juan Carlos; Gautier, Donald Cort; Gonzales, Manuel A.; Goodsell, Alison Victoria; Guler, Nevzat; Hamilton, Christopher Eric; Hegelich, Bjorn Manuel; Henzlova, Daniela; Ianakiev, Kiril Dimitrov; Iliev, Metodi; Johnson, Randall Philip; Jung, Daniel; Kleinschmidt, Annika; Koehler, Katrina Elizabeth; Pomerantz, Ishay; Roth, Markus; Santi, Peter Angelo; Shimada, Tsutomu; Swinhoe, Martyn Thomas; Taddeucci, Terry Nicholas; Wurden, Glen Anthony; Palaniyappan, Sasikumar; McCary, E.

    2015-01-01

    Covered in the PowerPoint presentation are the following areas: Motivation and requirements for active interrogation of nuclear material; laser-driven neutron source; neutron diagnostics; active interrogation of nuclear material; and, conclusions, remarks, and future works.

  8. Detecting New Pedestrian Facilities from VGI Data Sources

    Science.gov (United States)

    Zhong, S.; Xie, Z.

    2017-12-01

    Pedestrian facility information (e.g. footbridges, pedestrian crossings and underground passages) is important basic data for location-based services (LBS) for pedestrians. However, keeping pedestrian facility information up to date is challenging because facilities change frequently. Previously, the tasks of collecting and updating pedestrian facility information were mainly completed by highly trained specialists. This conventional approach has several disadvantages, such as high cost and a long update cycle. Volunteered Geographic Information (VGI) has proven efficient at providing new, free and fast-growing spatial data. Pedestrian trajectories, which can be seen as measurements of real pedestrian routes, are among the most valuable VGI data. Although the accuracy of the trajectories is not very high, an improvement in the quality of the road information can be achieved owing to the large number of measurements. Thus, we develop a method for detecting new pedestrian facilities based on the current road network and pedestrian trajectories. Specifically, 1) outliers of pedestrian trajectories are removed by analyzing speed, distance and direction, 2) a road-network matching algorithm is developed for eliminating redundant trajectories, and 3) a space-time clustering algorithm is adopted for detecting new walking facilities. The performance of the method was evaluated with a series of experiments conducted on part of the road network of Hefei and a large number of real pedestrian trajectories, and the results were verified using the Tencent Street map. The results show that the proposed method is able to detect new pedestrian facilities from VGI data accurately. We believe that the proposed method provides an alternative way for general road data acquisition and can improve the quality of LBS for pedestrians.
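
    The first step of the pipeline described above (outlier removal by analyzing speed between consecutive fixes) can be sketched as follows; the 3 m/s walking-speed threshold and the data layout are assumptions for illustration, not values from the paper.

```python
import math

def filter_speed_outliers(track, max_speed_mps=3.0):
    """Sketch of the first step described above: drop trajectory fixes that imply
    an implausible walking speed between consecutive points. `track` is a list of
    (timestamp_s, x_m, y_m) tuples in a projected coordinate system; the 3 m/s
    threshold and the data layout are assumptions, not values from the paper."""
    cleaned = [track[0]]
    for t, x, y in track[1:]:
        t0, x0, y0 = cleaned[-1]
        dt = t - t0
        if dt <= 0:
            continue
        if math.hypot(x - x0, y - y0) / dt <= max_speed_mps:
            cleaned.append((t, x, y))
    return cleaned

print(filter_speed_outliers([(0, 0.0, 0.0), (10, 12.0, 5.0), (20, 500.0, 5.0), (30, 24.0, 10.0)]))
```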

  9. Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods

    Directory of Open Access Journals (Sweden)

    Huan Ni

    2016-09-01

    Full Text Available This paper presents an automated and effective method for detecting 3D edges and tracing feature lines from 3D point clouds. This method is named Analysis of Geometric Properties of Neighborhoods (AGPN), and it includes two main steps: edge detection and feature line tracing. In the edge detection step, AGPN analyzes geometric properties of each query point's neighborhood, and then combines RANdom SAmple Consensus (RANSAC) and an angular gap metric to detect edges. In the feature line tracing step, feature lines are traced by a hybrid method based on region growing and model fitting in the detected edges. Our approach is experimentally validated on complex man-made objects and large-scale urban scenes with millions of points. Comparative studies with state-of-the-art methods demonstrate that our method obtains a promising, reliable, and high performance in detecting edges and tracing feature lines in 3D point clouds. Moreover, AGPN is insensitive to the point density of the input data.
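
    The angular gap metric mentioned above can be sketched for a single query point: take the angles of its (plane-projected) neighbours around it, sort them, and report the largest gap; a large gap indicates a boundary or edge point. This is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def max_angular_gap(query_xy, neighbor_xy):
    """Angular gap metric for one query point: angles of its (plane-projected)
    neighbours around it, sorted, and the largest gap between consecutive angles
    returned in radians. A large gap suggests a boundary/edge point. Sketch of
    the idea used in AGPN above, not the authors' implementation."""
    d = np.asarray(neighbor_xy, dtype=float) - np.asarray(query_xy, dtype=float)
    angles = np.sort(np.arctan2(d[:, 1], d[:, 0]))
    gaps = np.diff(np.concatenate([angles, angles[:1] + 2.0 * np.pi]))  # include wrap-around gap
    return float(gaps.max())

# Interior-like point (neighbours all around) vs. edge-like point (neighbours in a half-plane)
interior = [(1, 0), (0, 1), (-1, 0), (0, -1), (0.7, 0.7), (-0.7, -0.7)]
edge = [(1, 0.1), (0.8, 0.5), (0.3, 0.9), (0.1, 1.0)]
print(max_angular_gap((0, 0), interior), max_angular_gap((0, 0), edge))
```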

  10. EEG recordings as a source for the detection of IRBD

    DEFF Research Database (Denmark)

    Bisgaard, Sissel; Duun-Christensen, Bolette; Kempfner, Lykke

    2015-01-01

    The purpose of this pilot study was to develop a supportive algorithm for the detection of idiopathic Rapid Eye-Movement (REM) sleep Behaviour Disorder (iRBD) from EEG recordings. iRBD is defined as REM sleep without atonia with no current sign of neurodegenerative disease, and is one of the earliest known biomarkers of Parkinson's Disease (PD). It is currently diagnosed by polysomnography (PSG), primarily based on EMG recordings during REM sleep. The algorithm was developed using data collected from 42 control subjects and 34 iRBD subjects. A feature was developed to represent high amplitude...

  11. The optimal on-source region size for detections with counting-type telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Klepser, Stefan

    2017-01-15

    The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what is the θ that maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ²∞ ∼ 2.51 times the squared PSF width σ²PSF39. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
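
    Reading the quoted optimum as a cut radius satisfying θ_cut² ≈ 2.51·σ²PSF39 in the high-count limit, the rule of thumb is θ_cut ≈ 1.58·σPSF39 (low-count cases favour larger regions, per the dynamic formula in the paper):

```python
import math

def optimal_theta_cut(sigma_psf39):
    """High-count-limit optimum quoted above, read as theta_cut^2 ~ 2.51 * sigma_PSF39^2,
    i.e. theta_cut ~ 1.58 * sigma_PSF39. Low-count cases favour larger regions
    (see the dynamic formula in the paper)."""
    return math.sqrt(2.51) * sigma_psf39

print(optimal_theta_cut(0.06))  # e.g. a 0.06-degree PSF width -> ~0.095 degrees
```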

  12. The optimal on-source region size for detections with counting-type telescopes

    International Nuclear Information System (INIS)

    Klepser, Stefan

    2017-01-01

    The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what is the θ that maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ²∞ ∼ 2.51 times the squared PSF width σ²PSF39. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF